NASA Astrophysics Data System (ADS)
Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys
2016-05-01
An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring (SHM) systems. Acoustic emission (AE) is a viable technique that can be used for SHM, and one of its most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of a constant wave speed within the material and an uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps, which are used to locate subsequent AE sources. However, operator expertise is required to select the best data from the training maps and to choose the correct parameters to locate the sources, which can be a time-consuming process. This paper presents a new, improved and fully automatic delta T mapping technique in which a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point, whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human error. A thorough assessment was conducted to evaluate the performance and robustness of the new technique. In initial tests, the results showed a marked reduction in running time as well as improved accuracy in locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, the technique is now very simple and reliable, as potential sources of error related to manual manipulation are removed.
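The "Minimum Difference" step described above can be sketched in a few lines: build a training map of arrival-time differences on a grid, then report the grid point whose stored delta Ts best match an observed event. In the real technique the map is measured with artificial sources on the structure; here, purely as a stand-in, the map is computed for a homogeneous plate with an assumed wave speed, and all positions and values are invented for illustration.

```python
import numpy as np

# Hypothetical sensor layout on a plate (mm) and wave speed (mm/us);
# in the real delta T method the training map is measured, not computed.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
wave_speed = 5.0

def delta_t_map(grid_points, sensors, c):
    """Expected arrival-time differences (relative to sensor 0) per grid point."""
    d = np.linalg.norm(grid_points[:, None, :] - sensors[None, :, :], axis=2) / c
    return d[:, 1:] - d[:, :1]

def locate(observed_dt, grid_points, dt_map):
    """'Minimum Difference': pick the grid point whose stored delta Ts match best."""
    err = np.sum((dt_map - observed_dt) ** 2, axis=1)
    return grid_points[np.argmin(err)]

# Build a training grid and locate a synthetic source at (37, 62).
xs, ys = np.meshgrid(np.arange(0, 101, 1.0), np.arange(0, 101, 1.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])
dt_map = delta_t_map(grid, sensors, wave_speed)
true_src = np.array([[37.0, 62.0]])
obs = delta_t_map(true_src, sensors, wave_speed)[0]
print(locate(obs, grid, dt_map))   # -> [37. 62.]
```

The automatic part of the paper's contribution (clustering to select well-correlated training events) happens upstream of this lookup and is not reproduced here.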
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohd, Shukri; Holford, Karen M.; Pullin, Rhys
2014-02-12
Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and delta T location. The results of the study show that the WTML method produces more accurate location results compared with TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with the delta T location method but requires no initial acoustic calibration of the structure.
Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) technique and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.
A Bayesian framework for infrasound location
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.
2010-04-01
We develop a framework for the location of infrasound events using backazimuths and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL), developed here, estimates event location and associated credibility regions. BISL accounts for the unknown source-to-array path or phase by treating infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on the path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of backazimuths and arrival times for estimating well-constrained event locations. BISL is an extension of methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
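A minimal, assumed-values sketch of the grid-search flavour of Bayesian location: arrival times are modelled with Gaussian error, the unknown origin time is removed by differencing against the first array, and an (unnormalised) posterior is evaluated on a grid of candidate positions. This is a 1-D toy, not the BISL formulation itself, which also uses backazimuths and treats group velocity as random.

```python
import numpy as np

# 1-D toy: arrays on a line, arrival time t_i = t0 + |x - x_i| / v.
arrays_x = np.array([0.0, 40.0, 100.0])   # array positions (km), invented
v = 0.3                                   # infrasonic group velocity (km/s)
true_src, t0 = 63.0, 0.0
obs = t0 + np.abs(true_src - arrays_x) / v

grid = np.linspace(0.0, 120.0, 1201)      # candidate source positions
sigma = 5.0                               # combined measurement + model error (s)

# Difference against the first array to remove the unknown origin time.
pred = np.abs(grid[:, None] - arrays_x[None, :]) / v
resid = (pred[:, 1:] - pred[:, :1]) - (obs[1:] - obs[0])
log_post = -0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()                        # normalised posterior on the grid

best = grid[np.argmax(post)]              # posterior mode, at the true source
```

A credibility region then falls out of `post` directly, e.g. the smallest set of grid points whose posterior mass exceeds 95%.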
Damage source identification of reinforced concrete structure using acoustic emission technique.
Panjsetooni, Alireza; Bunnori, Norazura Muhamad; Vakili, Amir Hossein
2013-01-01
The acoustic emission (AE) technique is one of the nondestructive evaluation (NDE) techniques considered a prime candidate for structural health and damage monitoring in loaded structures. It was employed here to investigate damage processes in reinforced concrete (RC) frame specimens. A number of RC frames were tested under cyclic loading and simultaneously monitored using AE. The AE test data were analyzed using an AE source location analysis method. The results showed that the AE technique is suitable for identifying the location of damage sources in RC structures.
Relating to monitoring ion sources
Orr, Christopher Henry; Luff, Craig Janson; Dockray, Thomas; Macarthur, Duncan Whittemore; Bounds, John Alan
2002-01-01
The apparatus and method provide techniques for monitoring the position of alpha contamination in or on items or locations. The technique is particularly applicable to pipes, conduits and other locations to which access is difficult. The technique uses indirect monitoring of alpha emissions by detecting ions generated by the alpha emissions. The medium containing the ions is moved in a controlled manner from in proximity with the item or location to the detecting unit, and the signals acquired over time are used to generate alpha source position information.
Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.
2010-01-01
Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor; locations are then derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF. © 2010 The Authors, Geophysical Journal International © 2010 RAS.
The origin of infrasonic ionosphere oscillations over tropospheric thunderstorms
NASA Astrophysics Data System (ADS)
Shao, Xuan-Min; Lay, Erin H.
2016-07-01
Thunderstorms have been observed to introduce infrasonic oscillations in the ionosphere, but it is not clear what processes or which parts of the thunderstorm generate the oscillations. In this paper, we present a new technique that uses an array of ground-based GPS total electron content (TEC) measurements to locate the source of the infrasonic oscillations, and we compare the source locations with thunderstorm features to understand the possible source mechanisms. The location technique utilizes instantaneous phase differences between pairs of GPS-TEC measurements and an algorithm to best fit the measured and the expected phase differences for assumed source positions and other related parameters. In this preliminary study, the infrasound waves are assumed to propagate along simple geometric raypaths from the source to the measurement locations to avoid extensive computations. The located sources are compared in time and space with thunderstorm development and lightning activity. Sources are often found near the main storm cells, but they appear more likely related to the downdraft process than to the updraft process. The sources are also commonly found in the convectively quiet stratiform regions behind active cells, where they coincide well with extensive lightning discharges and inferred high-altitude sprite discharges.
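The phase-difference fit can be caricatured in 2-D with straight rays and invented geometry: for each candidate source position, predict the inter-station phase differences and keep the position that best matches the measurements, allowing for 2π wrapping. This is only a sketch of the idea, not the paper's algorithm, and all frequencies, speeds and positions below are assumptions.

```python
import numpy as np

# Toy version of the phase-difference fit (2-D, straight rays): pick the
# grid point whose predicted inter-station phase differences best match
# the measured ones, accounting for 2*pi phase wrapping.
f = 0.005                                  # infrasound frequency (Hz), invented
v = 0.3                                    # propagation speed (km/s), invented
stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [80.0, 60.0]])

def phase_diffs(src, stations, f, v):
    t = np.linalg.norm(stations - src, axis=1) / v
    ph = 2 * np.pi * f * t
    return ph[1:] - ph[0]

true_src = np.array([35.0, 20.0])
meas = phase_diffs(true_src, stations, f, v)

xs, ys = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
best, best_cost = None, np.inf
for p in np.column_stack([xs.ravel(), ys.ravel()]):
    d = phase_diffs(p, stations, f, v) - meas
    cost = np.sum(np.angle(np.exp(1j * d)) ** 2)   # wrapped phase mismatch
    if cost < best_cost:
        best, best_cost = p, cost
print(best)                                # -> [35. 20.]
```

The real technique fits additional parameters alongside position and propagates the waves through a realistic atmosphere rather than along straight rays.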
Lightning Location Using Acoustic Signals
NASA Astrophysics Data System (ADS)
Badillo, E.; Arechiga, R. O.; Thomas, R. J.
2013-05-01
In the summers of 2011 and 2012 a network of acoustic arrays was deployed in the Magdalena mountains of central New Mexico to locate lightning flashes. A Time-Correlation (TC) ray-tracing-based technique was developed to obtain the location of lightning flashes near the network. The TC technique locates acoustic sources from lightning and was developed to complement the location of RF sources detected by the Lightning Mapping Array (LMA) developed at Langmuir Laboratory, New Mexico Tech. The network consisted of four arrays with four microphones each. The microphones on each array were placed in a triangular configuration with one microphone at the center of the array. The distance between the central microphone and the others was about 30 m, and the distance between array centers ranged from 500 m to 1500 m. The TC technique uses times of arrival (TOA) of acoustic waves to trace back the location of thunder sources. To obtain the times of arrival, the signals were band-pass filtered from 2 to 20 Hz and cross-correlated. Once the times of arrival were obtained, the Levenberg-Marquardt algorithm was applied to solve for the spatial coordinates (x, y, z) of thunder sources. Two techniques were used and contrasted to assess the accuracy of the TC method: nearest neighbors (NN) between acoustic and LMA-located sources, and the standard deviation from the curvature matrix of the system as a measure of dispersion of the results. For the best-case scenario, a triggered lightning event, the TC method applied with four microphones located sources with median errors of 152 m and 142.9 m using nearest neighbors and standard deviation, respectively.
[Figure: TC method results for the lightning event recorded at 18:47:35 UTC, 6 August 2012 (MGTM station, four channels), mapped as altitude vs. longitude in km; black dots are TC-computed sources, light dots are LMA sources for the same event.]
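The TOA-plus-Levenberg-Marquardt step can be sketched as follows, with invented receiver positions and a synthetic source. The residuals compare modelled and observed arrival-time differences relative to a reference microphone, and SciPy's `least_squares` with `method='lm'` stands in for the Levenberg-Marquardt solver; nothing here reproduces the actual network geometry or processing.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of TOA thunder-source location (positions and speed are made up;
# a real system would also correct sound speed for temperature and wind).
c = 343.0                                   # speed of sound (m/s)
mics = np.array([[0.0, 0.0, 0.0], [1000.0, 0.0, 0.0],
                 [0.0, 1000.0, 0.0], [700.0, 700.0, 100.0]])

def residuals(p, mics, dt_obs, c):
    """Mismatch between modeled and observed arrival-time differences vs mic 0."""
    t = np.linalg.norm(mics - p, axis=1) / c
    return (t[1:] - t[0]) - dt_obs

true_src = np.array([400.0, 600.0, 2000.0])
t_true = np.linalg.norm(mics - true_src, axis=1) / c
dt_obs = t_true[1:] - t_true[0]

sol = least_squares(residuals, x0=[500.0, 500.0, 1000.0],
                    args=(mics, dt_obs, c), method='lm')
print(np.round(sol.x))                      # close to [400, 600, 2000]
```

With noisy picks the same solver returns a best-fit location, and the curvature (Jacobian) at the solution gives the dispersion estimate mentioned in the abstract.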
NASA Technical Reports Server (NTRS)
Allen, C. S.; Jaeger, S. M.
1999-01-01
The goal of our efforts is to extrapolate nearfield jet noise measurements to the geometric far field, where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of noise sources in the jet plume, the radiation patterns of the noise sources, and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single-microphone measurements, a more sophisticated method must be used.
Remote multi-position information gathering system and method
Hirschfeld, Tomas B.
1986-01-01
A technique for gathering specific information from various remote locations, especially fluorimetric information characteristic of particular materials at the various locations is disclosed herein. This technique uses a single source of light disposed at still a different, central location and an overall optical network including an arrangement of optical fibers cooperating with the light source for directing individual light beams into the different information bearing locations. The incoming light beams result in corresponding displays of light, e.g., fluorescent light, containing the information to be obtained. The optical network cooperates with these light displays at the various locations for directing outgoing light beams containing the same information as their cooperating displays from these locations to the central location. Each of these outgoing beams is applied to a detection arrangement, e.g., a fluorescence spectroscope, for retrieving the information contained thereby.
Sen, Novonil; Kundu, Tribikram
2018-07-01
Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in recent years and has scope for further improvement. Most of the existing techniques for anisotropic structures either assume a straight-line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight-line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques often contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
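The VFOM itself is not reproduced here, but the large-picking-error problem it targets can be illustrated: with one badly picked arrival, plain least-squares TDOA location degrades, while a bounded-influence (robust) loss stays close to the truth. All geometry, speeds and error sizes below are invented, and the robust loss is a generic stand-in, not the paper's objective function.

```python
import numpy as np
from scipy.optimize import least_squares

# One arrival carries a large picking error (LPE); compare plain least
# squares against a robust (soft_l1) loss on the same TDOA residuals.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                    [100.0, 100.0], [50.0, 50.0]])
c = 5.0                                     # wave speed, invented units

def residuals(p, sensors, dt_obs, c):
    t = np.linalg.norm(sensors - p, axis=1) / c
    return (t[1:] - t[0]) - dt_obs

true_src = np.array([30.0, 70.0])
t = np.linalg.norm(sensors - true_src, axis=1) / c
dt_obs = t[1:] - t[0]
dt_obs[0] += 10.0                           # inject one large picking error

x0 = [50.0, 50.0]
plain = least_squares(residuals, x0, args=(sensors, dt_obs, c))
robust = least_squares(residuals, x0, args=(sensors, dt_obs, c),
                       loss='soft_l1', f_scale=1.0)
print(round(np.linalg.norm(plain.x - true_src), 1),
      round(np.linalg.norm(robust.x - true_src), 1))
```

The remaining clean sensor pairs over-determine the two unknowns, so the bounded-influence loss effectively ignores the corrupted pick, which is the behaviour the VFOM seeks by construction.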
Acoustic Location of Lightning Using Interferometric Techniques
NASA Astrophysics Data System (ADS)
Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.
2013-12-01
Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations which compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize three outer microphones surrounding a fourth microphone located at the center. The location is computed by band-passing the signal to remove noise and then cross-correlating the outer three microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio frequency interferometers. Both acoustic location systems and radio frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas, and both then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic-frequency and audio-frequency acoustic emissions from lightning.
Preliminary results show that interferometric techniques have good potential for improving the lightning location accuracy and detection efficiency of acoustic arrays.
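The cross-correlation step common to both the acoustic and interferometric approaches can be sketched as follows: estimate the inter-microphone delay from the correlation peak, then convert it to a bearing under a far-field assumption. The sample rate, spacing and bearing are all invented for illustration, and a real system would band-pass filter first and combine several microphone pairs.

```python
import numpy as np

# Estimate the delay of a signal between two microphones from the
# cross-correlation peak, then convert it to a far-field bearing (2-D).
fs = 1000.0                       # sample rate (Hz), invented
spacing = 30.0                    # mic separation (m), invented
c = 343.0                         # speed of sound (m/s)
true_bearing = np.deg2rad(40.0)   # angle from the array baseline
delay = spacing * np.cos(true_bearing) / c          # seconds

rng = np.random.default_rng(0)
n = 2048
sig = rng.normal(size=n)                            # broadband "thunder"
lag_samples = int(round(delay * fs))
mic_a = sig
mic_b = np.roll(sig, lag_samples)                   # delayed copy at mic b

corr = np.correlate(mic_b, mic_a, mode='full')
est_lag = np.argmax(corr) - (n - 1)                 # lag of the peak, samples
est_bearing = np.degrees(np.arccos(np.clip(est_lag / fs * c / spacing, -1, 1)))
print(int(round(est_bearing)))                      # -> 40
```

The interferometric variants mentioned above do the equivalent correlation in the frequency domain, where the delay appears as a phase slope across frequency.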
Single-view 3D reconstruction of correlated gamma-neutron sources
Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.
2017-01-05
We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf-252, Am-Be). This category includes the vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source's intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma, we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial distance relative resolution of 26%. To demonstrate the technique's potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance reduced the average radial distance relative resolution to 11%.
Radionuclide counting technique for measuring wind velocity and direction
NASA Technical Reports Server (NTRS)
Singh, J. J. (Inventor)
1984-01-01
An anemometer utilizing a radionuclide counting technique for measuring both the velocity and the direction of wind is described. A pendulum, consisting of a wire and a ball with a source of radiation on the lower surface of the ball, is deflected by the wind. Detectors are located in a plane perpendicular to the undisturbed (no-wind) pendulum; they lie on the circumference of a circle, equidistant from each other and from the undisturbed (no-wind) position of the source ball.
NASA Astrophysics Data System (ADS)
Zhang, Yubo; Deng, Muhan; Yang, Rui; Jin, Feixiang
2017-09-01
The acoustic emission (AE) source location technique for deformation damage of 16Mn steel in a high-temperature environment is studied using a linear time-difference-of-arrival (TDOA) location method. The distribution characteristics of strain-induced AE source signals from tensile specimens at 20°C and 400°C were investigated. Clustered location signals were found near the fracture site, from which the stress concentration leading to fracture can be judged.
Patrzyk, M; Schreiber, A; Heidecke, C D; Glitsch, A
2009-12-01
Development of an innovative method of endoscopic laser-supported diaphanoscopy, for precise demonstration of the location of gastrointestinal stromal tumors (GISTs) at laparoscopy, is described. The equipment consists of a light transmission cable with an anchoring system for the gastric mucosa, a connecting system for the light source, and the laser light source itself. During surgery, transillumination by laser is used to show the shape of the tumor. The resection margins are then marked by electric coagulation. Ten patients have been successfully treated using this technique in laparoscopic-endoscopic rendezvous procedures. The average time of surgery was 123 minutes; the time for marking the shape of the tumor averaged 16 minutes. Depending on tumor location and size, 4-7 marks were used, and resection margins were 4-15 mm. This new and effective technique facilitates precise location of gastric GISTs, leading to exact and tissue-sparing transmural laparoscopic resections.
40 CFR 52.1168 - Certification of no sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Massachusetts § 52.1168... sources are located in the Commonwealth which are covered by the following Control Techniques Guidelines...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roussel-Dupre, R.; Symbalisty, E.; Fox, C.
2009-08-01
The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but the final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).
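The offset between a leading-edge tag and a peak-picking tag described above can be illustrated numerically. The Gaussian pulse shape and 50% threshold below are assumptions for illustration only, not the report's actual waveform model:

```python
import numpy as np

def tag_times(t, s, frac=0.5):
    """Return (leading-edge tag, peak tag) for a sampled pulse s(t).
    The leading-edge tag is the first crossing of frac * max(s);
    the peak tag is the sample of maximum amplitude."""
    peak = s.max()
    i_le = np.argmax(s >= frac * peak)   # index of first threshold crossing
    i_pk = np.argmax(s)                  # index of the peak sample
    return t[i_le], t[i_pk]

# Gaussian pulse centred at t0 = 1.0 s with sigma = 0.1 s.
t = np.linspace(0.0, 2.0, 20001)
t0, sigma = 1.0, 0.1
s = np.exp(-0.5 * ((t - t0) / sigma) ** 2)

t_le, t_pk = tag_times(t, s, frac=0.5)
# For a Gaussian the analytic offset is t_pk - t_le = sigma * sqrt(2 ln 2),
# i.e. the correction needed to align the two tagging conventions.
correction = t_pk - t_le
```

Applying `correction` to the leading-edge tags would reference both tagging schemes to a common fiducial, which is the alignment problem the report solves in general form.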
Locating an atmospheric contamination source using slow manifolds
NASA Astrophysics Data System (ADS)
Tang, Wenbo; Haller, George; Baik, Jong-Jin; Ryu, Young-Hee
2009-04-01
Finite-size particle motion in fluids obeys the Maxey-Riley equations, which become singular in the limit of infinitesimally small particle size. Because of this singularity, finding the source of a dispersed set of small particles is a numerically ill-posed problem that leads to exponential blowup. Here we use recent results on the existence of a slow manifold in the Maxey-Riley equations to overcome this difficulty in source inversion. Specifically, we locate the source of particles by projecting their dispersed positions on a time-varying slow manifold, and by advecting them on the manifold in backward time. We use this technique to locate the source of a hypothetical anthrax release in an unsteady three-dimensional atmospheric wind field in an urban street canyon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
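The core of the Bayesian comparison above, measured counts against predicted rates for a grid of test sources, can be sketched as follows. The inverse-square rate model, the flight geometry, the known source strength, and the use of noise-free expected counts (for a deterministic illustration) are all simplifying assumptions, not the paper's detector model:

```python
import numpy as np

def log_likelihood(counts, det_xy, src_xy, strength, background):
    """Poisson log-likelihood (up to a constant) of a count series
    measured at detector positions det_xy, for a test point source
    at src_xy, using a simple inverse-square rate model."""
    d2 = np.sum((det_xy - src_xy) ** 2, axis=1)
    lam = strength / d2 + background
    return np.sum(counts * np.log(lam) - lam)

# Detector flown along the line y = 50 m; true source at (30, 0).
det_xy = np.column_stack([np.linspace(0.0, 100.0, 21), np.full(21, 50.0)])
true_src = np.array([30.0, 0.0])
strength, background = 5.0e4, 2.0
# Expected (noise-free) counts so the example is deterministic.
counts = strength / np.sum((det_xy - true_src) ** 2, axis=1) + background

# Grid search over candidate source positions along y = 0.
grid = [np.array([x, 0.0]) for x in np.arange(0.0, 100.0, 5.0)]
ll = [log_likelihood(counts, det_xy, g, strength, background) for g in grid]
best = grid[int(np.argmax(ll))]   # nearest grid point to the true source
```

The paper's limited-FOV treatment and likelihood-ratio reevaluation would restrict and reweight this comparison; those refinements are omitted here.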
Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong
2016-06-06
We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The method is verified in experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.
Gangadharan, R; Prasanna, G; Bhat, M R; Murthy, C R L; Gopalakrishnan, S
2009-11-01
Conventional analytical/numerical methods employing triangulation technique are suitable for locating acoustic emission (AE) source in a planar structure without structural discontinuities. But these methods cannot be extended to structures with complicated geometry, and, also, the problem gets compounded if the material of the structure is anisotropic warranting complex analytical velocity models. A geodesic approach using Voronoi construction is proposed in this work to locate the AE source in a composite structure. The approach is based on the fact that the wave takes minimum energy path to travel from the source to any other point in the connected domain. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By propagating the waves in reverse virtually from these sensors along the geodesic path and by locating the first intersection point of these waves, one can get the AE source location. In this work, the geodesic approach is shown more suitable for a practicable source location solution in a composite structure with arbitrary surface containing finite discontinuities. Experiments have been conducted on composite plate specimens of simple and complex geometry to validate this method.
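The geodesic idea above, shortest-path travel times on a meshed surface computed with Dijkstra's algorithm, can be sketched on a toy graph. The mesh, edge lengths, sensor placement, and the least-squares matching of arrival-time differences are illustrative assumptions, not the authors' Voronoi-based implementation:

```python
import heapq

def dijkstra(graph, start):
    """Geodesic (shortest-path) distance from start to every node of a
    weighted graph given as {node: {neighbour: edge_length}}."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def locate_source(graph, sensors, arrival_dt, v):
    """Pick the mesh node whose geodesic distance differences to the
    sensors best match the measured arrival-time differences
    (relative to sensors[0]), assuming wave speed v along geodesics."""
    best, best_err = None, float("inf")
    for node in graph:
        d = dijkstra(graph, node)
        err = sum((d[s] - d[sensors[0]] - v * dt) ** 2
                  for s, dt in zip(sensors[1:], arrival_dt))
        if err < best_err:
            best, best_err = node, err
    return best

# Toy 5-node "mesh"; true source at node 'C', unit wave speed.
graph = {
    "A": {"B": 1.0, "C": 2.0},
    "B": {"A": 1.0, "C": 1.0, "D": 3.0},
    "C": {"A": 2.0, "B": 1.0, "D": 1.0, "E": 4.0},
    "D": {"B": 3.0, "C": 1.0, "E": 1.0},
    "E": {"C": 4.0, "D": 1.0},
}
# Arrival-time differences at sensors E and D relative to sensor A,
# consistent with a source at C (t_A=2, t_E=2, t_D=1).
found = locate_source(graph, ["A", "E", "D"], [0.0, -1.0], 1.0)   # 'C'
```

On a real structure the graph nodes come from the surface mesh and the edge weights from the local wave speed, but the search has the same shape.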
Locating and Quantifying Broadband Fan Sources Using In-Duct Microphones
NASA Technical Reports Server (NTRS)
Dougherty, Robert P.; Walker, Bruce E.; Sutliff, Daniel L.
2010-01-01
In-duct beamforming techniques have been developed for locating broadband noise sources on a low-speed fan and quantifying the acoustic power in the inlet and aft fan ducts. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. Several of the blades were modified to provide a broadband source to evaluate the efficacy of the in-duct beamforming technique. Phased arrays consisting of rings and line arrays of microphones were employed. For the imaging, the data were mathematically resampled in the frame of reference of the rotating fan. For both the imaging and power measurement steps, array steering vectors were computed using annular duct modal expansions, selected subsets of the cross spectral matrix elements were used, and the DAMAS and CLEAN-SC deconvolution algorithms were applied.
Explosion localization and characterization via infrasound using numerical modeling
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.
2017-12-01
Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu and Etna Volcano, Italy, which both present complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and anthropogenic sources across a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. 
Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
Fiber fault location utilizing traffic signal in optical network.
Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi
2013-10-07
We propose and experimentally demonstrate a method for fault location in optical communication network. This method utilizes the traffic signal transmitted across the network as probe signal, and then locates the fault by correlation technique. Compared with conventional techniques, our method has a simple structure and low operation expenditure, because no additional device is used, such as light source, modulator and signal generator. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse ranging technique. Moreover, signal extraction process can improve the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and detection range of over 23 km with -8-dBm mean launched power in optical network based on synchronous digital hierarchy protocols.
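The correlation-ranging principle above can be sketched numerically: cross-correlate the live traffic signal with the reflected return, and convert the peak lag into a one-way distance. The sampling rate, fiber group velocity, reflection coefficient, and noise level below are made-up values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1.0e9          # sampling rate, 1 GS/s (assumed)
v_fiber = 2.0e8     # group velocity in the fiber, m/s (assumed)

# Traffic-like probe signal and its echo from a fault 500 samples away.
probe = rng.standard_normal(4096)
delay = 500
reflected = np.zeros_like(probe)
reflected[delay:] = 0.2 * probe[:-delay]
reflected += 0.05 * rng.standard_normal(probe.size)   # receiver noise

# Cross-correlate; the peak lag gives the round-trip delay.
xc = np.correlate(reflected, probe, mode="full")
lag = int(np.argmax(xc)) - (probe.size - 1)           # in samples
fault_distance = 0.5 * v_fiber * lag / fs             # one-way, metres
```

Because the traffic signal itself serves as the probe, no dedicated source or modulator is needed, which is the cost advantage the abstract emphasizes; the spatial resolution is set by the signal bandwidth rather than a pulse width.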
Monitoring Seismo-volcanic and Infrasonic Signals at Volcanoes: Mt. Etna Case Study
NASA Astrophysics Data System (ADS)
Cannata, Andrea; Di Grazia, Giuseppe; Aliotta, Marco; Cassisi, Carmelo; Montalto, Placido; Patanè, Domenico
2013-11-01
Volcanoes generate a broad range of seismo-volcanic and infrasonic signals, whose features and variations are often closely related to volcanic activity. The study of these signals is hence very useful in the monitoring and investigation of volcano dynamics. The analysis of seismo-volcanic and infrasonic signals requires specifically developed techniques due to their unique characteristics, which are generally quite distinct compared with tectonic and volcano-tectonic earthquakes. In this work, we describe analysis methods used to detect and locate seismo-volcanic and infrasonic signals at Mt. Etna. Volcanic tremor sources are located using a method based on spatial seismic amplitude distribution, assuming propagation in a homogeneous medium. The tremor source is found by calculating the goodness of the linear regression fit (R^2) of the log-linearized equation of the seismic amplitude decay with distance. The location method for long-period events is based on the joint computation of semblance and R^2 values, and the location method of very long-period events is based on the application of radial semblance. Infrasonic events and tremor are located by semblance-brightness- and semblance-based methods, respectively. The techniques described here can also be applied to other volcanoes and do not require particular network geometries (such as arrays) but rather simple sparse networks. Using the source locations of all the considered signals, we were able to reconstruct the shallow plumbing system (above sea level) during 2011.
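The amplitude-decay location method above can be sketched as a grid search maximizing the R^2 of the log-linearized decay fit. The power-law decay model, station layout, and grid spacing are assumptions chosen so the example is self-checking, not Etna network parameters:

```python
import numpy as np

def r_squared(grid_pt, stations, amps):
    """R^2 of the linear fit ln(A) = ln(A0) - b*ln(r) for a candidate
    source position, where r is each station's distance from it."""
    r = np.linalg.norm(stations - grid_pt, axis=1)
    ln_r, ln_a = np.log(r), np.log(amps)
    slope, intercept = np.polyfit(ln_r, ln_a, 1)
    resid = ln_a - (slope * ln_r + intercept)
    ss_tot = np.sum((ln_a - ln_a.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / ss_tot

# Synthetic test: five stations, true source at (2, 3), A = A0 * r^-2.
stations = np.array([[-1., -1.], [11., -1.], [-1., 11.], [11., 11.], [5., -6.]])
true_src = np.array([2.0, 3.0])
amps = 100.0 * np.linalg.norm(stations - true_src, axis=1) ** -2.0

# Grid search: the cell with the best fit is the tremor source estimate.
xs, ys = np.meshgrid(np.arange(0., 10.5, 1.0), np.arange(0., 10.5, 1.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])
best = grid[np.argmax([r_squared(g, stations, amps) for g in grid])]
```

With noise-free synthetic amplitudes the true source cell attains R^2 = 1 exactly; with real tremor data the maximum is simply the best-fitting cell.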
Leak detection utilizing analog binaural (VLSI) techniques
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor)
1995-01-01
A detection method and system utilizing silicon models of the traveling wave structure of the human cochlea to spatially and temporally locate a specific sound source in the presence of high noise pandemonium. The detection system combines two-dimensional stereausis representations, which are output by at least three VLSI binaural hearing chips, to generate a three-dimensional stereausis representation including both binaural and spectral information which is then used to locate the sound source.
Cataloging tremor at Kilauea Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Thelen, W. A.; Wech, A.
2013-12-01
Tremor is a ubiquitous seismic feature on Kilauea volcano, which emanates from at least three distinct sources. At depth, intermittent tremor and earthquakes thought to be associated with the underlying plumbing system of Kilauea (Aki and Koyanagi, 1981) occur approximately 40 km below and 40 km SW of the summit. At the summit of the volcano, nearly continuous tremor is recorded close to a persistently degassing lava lake, which has been present since 2008. Much of this tremor is correlated with spattering at the lake surface, but tremor also occurs in the absence of spattering, and was observed at the summit of the volcano prior to the appearance of the lava lake, predominantly in association with inflation/deflation events. The third known source of tremor is in the area of Pu`u `O`o, a vent that has been active since 1983. The exact source location and depth is poorly constrained for each of these sources. Consistently tracking the occurrence and location of tremor in these areas through time will improve our understanding of the plumbing geometry beneath Kilauea volcano and help identify precursory patterns in tremor leading to changes in eruptive activity. The continuous and emergent nature of tremor precludes the use of traditional earthquake techniques for automatic detection and location of seismicity. We implement the method of Wech and Creager (2008) to both detect and localize tremor seismicity in the three regions described above. The technique uses an envelope cross-correlation method in 5-minute windows that maximizes tremor signal coherency among seismic stations. The catalog is currently being built in near-realtime, with plans to extend the analysis to the past as time and continuous data availability permits. This automated detection and localization method has relatively poor depth constraints due to the construction of the envelope function. 
Nevertheless, the epicenters distinguish activity among the different source regions and serve as starting points for more sophisticated location techniques using cross-correlation and/or amplitude-based locations. The resulting timelines establish a quantitative baseline of behavior for each source to better understand and forecast Kilauea activity.
NASA Astrophysics Data System (ADS)
Nooshiri, N.; Saul, J.; Heimann, S.; Tilmann, F. J.; Dahm, T.
2015-12-01
The use of a 1D velocity model for seismic event location is often associated with significant travel-time residuals. Particularly for regional stations in subduction zones, where the velocity structure strongly deviates from the assumed 1D model, residuals of up to ±10 seconds are observed even for clear arrivals, which leads to strongly biased locations. In fact, due mostly to regional travel-time anomalies, arrival times at regional stations do not match the location obtained with teleseismic picks, and vice versa. If the earthquake is weak and only recorded regionally, or if fast locations based on regional stations are needed, the location may be far off the corresponding teleseismic location. In this case, implementation of travel-time corrections may lead to a reduction of the travel-time residuals at regional stations and, in consequence, significantly improve the relative location accuracy. Here, we have extended the source-specific station terms (SSST) technique to regional and teleseismic distances and adapted the algorithm for probabilistic, non-linear, global-search earthquake location. The method has been applied to specific test regions using P and pP phases from the GEOFON bulletin data for all available station networks. By using this method, a set of timing corrections has been calculated for each station varying as a function of source position. In this way, an attempt is made to correct for the systematic errors, introduced by limitations and inaccuracies in the assumed velocity structure, without solving for a new earth model itself. In this presentation, we draw on examples of the application of this global SSST technique to relocate earthquakes from the Tonga-Fiji subduction zone and from the Chilean margin. 
Our results show a considerable decrease of the root-mean-square (RMS) residual in the final earthquake location catalogs, a major reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations, and sharper images of the seismicity compared to the initial locations.
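The core of the SSST idea, a per-station timing correction computed from the residuals of events near the target hypocentre, can be sketched as follows. The data structures, the median estimator, and the neighbourhood radius are simplifications for illustration; the actual algorithm iterates corrections together with probabilistic relocation:

```python
import numpy as np

def ssst_correction(event_xyz, neighbour_xyz, neighbour_residuals, radius):
    """Source-specific station term for one station and one target event:
    the median travel-time residual of neighbouring events that lie
    within `radius` of the target hypocentre."""
    d = np.linalg.norm(neighbour_xyz - event_xyz, axis=1)
    nearby = neighbour_residuals[d <= radius]
    return float(np.median(nearby)) if nearby.size else 0.0

# Toy data: events in one cluster share a systematic +2 s residual at
# this station (e.g. a slab-related path anomaly), plus small scatter.
rng = np.random.default_rng(1)
xyz = rng.uniform(0.0, 10.0, size=(50, 3))      # hypocentres, km
resid = 2.0 + 0.1 * rng.standard_normal(50)     # residuals, seconds

corr = ssst_correction(xyz[0], xyz, resid, radius=20.0)
corrected = resid - corr        # residuals after applying the term
```

Because the correction varies with source position, it absorbs the systematic path anomaly without requiring a new 3-D earth model, which is exactly the trade-off described in the abstract.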
NASA Astrophysics Data System (ADS)
Barthod, Louise; Lobb, David; Owens, Philip; Martinez-Carreras, Nuria; Koiter, Alexander; Petticrew, Ellen; McCullough, Gregory
2014-05-01
The development of beneficial management practices to minimize adverse impacts of agriculture on soil and water quality requires information on the sources of sediment at the watershed scale. Sediment fingerprinting allows for the determination of sediment sources and apportionment of their contribution within a watershed, using unique physical, radiochemical or biogeochemical properties, or fingerprints, of the potential sediment sources. The use of sediment colour as a fingerprint is an emerging technique that can provide a rapid and inexpensive means of investigating sediment sources. This technique is currently being utilized to determine sediment sources within the South Tobacco Creek Watershed, an agricultural watershed located in the Canadian prairies (south-central Manitoba). Suspended sediment and potential source (topsoil, channel bank and shale bedrock material) samples were collected between 2009 and 2011 at six locations along the main stem of the creek. Sample colour was quantified from diffuse reflectance spectrometry measurements over the visible wavelength range using a spectroradiometer (ASD Field Spec Pro, 400-2500 nm). Sixteen colour coefficients were derived from several colour space models (CIE XYZ, CIE xyY, CIE Lab, CIE Luv, CIE Lch, Landsat RGB, Redness Index). The individual discrimination power of the colour coefficients, after passing several prerequisite tests (e.g., linearly additive behaviour), was assessed using discriminant function analysis. A stepwise discriminant analysis, based on the Wilks' lambda criterion, was then performed in order to determine the best-suited colour coefficient fingerprints which maximized the discrimination between the potential sources. The selected fingerprints classified the source samples in the correct category 86% of the time. The misclassification is due to intra-source variability and source overlap which can lead to higher uncertainty in sediment source apportionment. 
The selected fingerprints were then included in a Bayesian mixing model using Monte-Carlo simulation (Stable Isotope Analysis in R, SIAR) in order to apportion the contribution of the different sources to the sediment collected at each location. A switch in the dominant sediment source between the headwaters and the outlet of the watershed is observed. Sediment is enriched with shale bedrock and depleted of topsoil sources as the stream crosses and down-cuts through the Manitoba Escarpment. The switch in sources highlights the importance of the sampling location in relation to the scale and geomorphic connectivity of the watershed. Although the results include considerable uncertainty, they are consistent with the findings from a classical fingerprinting approach undertaken in the same watershed (i.e., geochemical and radionuclide fingerprints), and confirm the potential of sediment colour parameters as suitable fingerprints.
PARTITIONING INTERWELL TRACER TEST FOR NAPL SOURCE CHARACTERIZATION: A GENERAL OVERVIEW
Innovative and nondestructive characterization techniques have been developed to locate and quantify nonaqueous phase liquids (NAPLs) in the vadose and saturated zones in the subsurface environment. One such technique is the partitioning interwell tracer test (PITT). The PITT i...
Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.
Tollin, Daniel J; Yin, Tom C T
2003-10-01
The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
Acoustic Network Localization and Interpretation of Infrasonic Pulses from Lightning
NASA Astrophysics Data System (ADS)
Arechiga, R. O.; Johnson, J. B.; Badillo, E.; Michnovicz, J. C.; Thomas, R. J.; Edens, H. E.; Rison, W.
2011-12-01
We improve on the localization accuracy of thunder sources and identify infrasonic pulses that are correlated across a network of acoustic arrays. We attribute these pulses to electrostatic charge relaxation (collapse of the electric field) and attempt to model their spatial extent and acoustic source strength. Toward this objective we have developed a single audio range (20-15,000 Hz) acoustic array and a 4-station network of broadband (0.01-500 Hz) microphone arrays with aperture of ~45 m. The network has an aperture of 1700 m and was installed during the summers of 2009-2011 in the Magdalena mountains of New Mexico, an area that is subject to frequent lightning activity. We are exploring a new technique based on inverse theory that integrates information from the audio range and the network of broadband acoustic arrays to locate thunder sources more accurately than can be achieved with a single array. We evaluate the performance of the technique by comparing the location of thunder sources with RF sources located by the lightning mapping array (LMA) of Langmuir Laboratory at New Mexico Tech. We will show results of this technique for lightning flashes that occurred in the vicinity of our network of acoustic arrays and over the LMA. We will use acoustic network detection of infrasonic pulses together with LMA data and electric field measurements to estimate the spatial distribution of the charge (within the cloud) that is used to produce a lightning flash, and will try to quantify volumetric charges (charge magnitude) within clouds.
Apportionment of urban aerosol sources in Chongqing (China) using synergistic on-line techniques
NASA Astrophysics Data System (ADS)
Chen, Yang; Yang, Fumo
2016-04-01
The sources of ambient fine particulate matter (PM2.5) during wintertime at a background urban location in Chongqing (southwestern China) have been determined. Aerosol chemical composition analyses were performed using multiple on-line techniques, such as a single particle aerosol mass spectrometer (SPAMS) for single particle chemical composition, an on-line elemental carbon-organic carbon analyzer (on-line OC-EC), on-line X-ray fluorescence (XRF) for elements, and an in-situ Gas and Aerosol Compositions monitor (IGAC) for water-soluble ions in PM2.5. All the datasets from these techniques have been adjusted to a 1-h time resolution for receptor model input. Positive matrix factorization (PMF) has been used for resolving aerosol sources. At least six sources, including domestic coal burning, biomass burning, dust, traffic, industrial and secondary/aged factors have been resolved and interpreted. The synergistic on-line techniques were helpful for identifying aerosol sources more clearly than when only employing the results from the individual techniques. These results are useful for a better understanding of aerosol sources and atmospheric processes.
Near Real-Time Imaging of the Galactic Plane with BATSE
NASA Technical Reports Server (NTRS)
Harmon, B. A.; Zhang, S. N.; Robinson, C. R.; Paciesas, W. S.; Barret, D.; Grindlay, J.; Bloser, P.; Monnelly, C.
1997-01-01
The discovery of new transient or persistent sources in the hard X-ray regime with the BATSE Earth occultation Technique has been limited previously to bright sources of about 200 mCrab or more. While monitoring known source locations is not a problem to a daily limiting sensitivity of about 75 mCrab, the lack of a reliable background model forces us to use more intensive computer techniques to find weak, previously unknown emission from hard X-ray/gamma sources. The combination of Radon transform imaging of the galactic plane in 10 by 10 degree fields and the Harvard/CFA-developed Image Search (CBIS) allows us to straightforwardly search the sky for candidate sources in a +/- 20 degree latitude band along the plane. This procedure has been operating routinely on a weekly basis since spring 1997. We briefly describe the procedure, then concentrate on the performance aspects of the technique and candidate source results from the search.
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E.; Shelly, D. R.
2013-12-01
Observations of non-volcanic tremor have become ubiquitous in recent years. In spite of the abundance of observations, locating tremor remains a difficult task because of the lack of distinctive phase arrivals. Here we use time-reverse-imaging techniques that do not require identifying phase arrivals to locate individual low-frequency-earthquakes (LFEs) within tremor episodes on the San Andreas fault near Cholame, California. Time windows of 1.5-second duration containing LFEs are selected from continuously recorded waveforms of the local seismic network filtered between 1-5 Hz. We propagate the time-reversed seismic signal back through the subsurface using a staggered-grid finite-difference code. Assuming all rebroadcasted waveforms result from similar wave fields at the source origin, we search for wave field coherence in time and space to obtain the source location and origin time where the constructive interference is a maximum. We use an interpolated velocity model with a grid spacing of 100 m and a 5 ms time step to calculate the relative curl field energy amplitudes for each rebroadcasted seismogram every 50 ms for each grid point in the model. Finally, we perform a grid search for coherency in the curl field using a sliding time window, and taking the absolute value of the correlation coefficient to account for differences in radiation pattern. The highest median cross-correlation coefficient value at a given grid point indicates the source location for the rebroadcasted event. Horizontal location errors based on the spatial extent of the highest 10% cross-correlation coefficient are on the order of 4 km, and vertical errors on the order of 3 km. Furthermore, a test of the method using earthquake data shows that the method produces an identical hypocentral location (within errors) as that obtained by standard ray-tracing methods. 
We also compare the event locations to an LFE catalog that locates the LFEs from stacked waveforms of repeated LFEs identified by cross-correlation techniques [Shelly and Hardebeck, 2010]. The LFE catalog uses stacks of at least several hundred templates to identify phase arrivals used to estimate the location. We find epicentral locations for individual LFEs based on the time-reverse-imaging technique are within ~4 km of the LFE catalog locations [Shelly and Hardebeck, 2010]. LFEs locate between 15-25 km depth, and have similar focal depths to those found in previous studies of the region. Overall, the method can provide robust locations of individual LFEs without identifying and stacking hundreds of LFE templates; the locations are also more accurate than envelope location methods, which have errors on the order of tens of km [Horstmann et al., 2013].
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor of the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of source height estimation. The further application of the Levenberg-Marquardt method, with the results from MUSIC as the initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
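The MUSIC step used above for the initial source estimate can be sketched in its simplest one-dimensional form: a uniform linear array, a noise-subspace projection, and a pseudo-spectrum scan. The array size, half-wavelength spacing, and analytically constructed covariance are illustrative assumptions; the paper's ground-impedance formulation is not reproduced here:

```python
import numpy as np

def steering(m, theta_deg, d=0.5):
    """ULA steering vector; element spacing d in wavelengths."""
    return np.exp(-2j * np.pi * d * np.arange(m)
                  * np.sin(np.radians(theta_deg)))

def music_spectrum(R, n_sources, angles, d=0.5):
    """MUSIC pseudo-spectrum from an array covariance matrix R:
    project candidate steering vectors onto the noise subspace and
    invert the projection energy (peaks mark source directions)."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    noise = vecs[:, : m - n_sources]     # noise-subspace eigenvectors
    spec = []
    for th in angles:
        a = steering(m, th, d)
        spec.append(1.0 / (np.linalg.norm(noise.conj().T @ a) ** 2 + 1e-12))
    return np.array(spec)

# One source at 20 degrees, 8-element array, analytic covariance
# (rank-one signal term plus white noise).
m, theta_true = 8, 20.0
a = steering(m, theta_true)
R = np.outer(a, a.conj()) + 0.01 * np.eye(m)

angles = np.linspace(-90.0, 90.0, 361)   # 0.5-degree scan grid
est = angles[np.argmax(music_spectrum(R, 1, angles))]
```

With coherent sources the sample covariance becomes rank-deficient, which is why the paper applies forward-backward spatial smoothing before this step.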
Identification of sources of aerosol particles in three locations in eastern Botswana
NASA Astrophysics Data System (ADS)
Chimidza, S.; Moloi, K.
2000-07-01
Airborne particles were collected using a dichotomous virtual impactor at three different locations in the eastern part of Botswana: Serowe, Selibe-Phikwe, and Francistown. The particles were separated into two fractions (fine and coarse). Sampling at the three locations was done consecutively during the months of July and August, which are usually dry and stable. The sampling time for each sample was 12 hours during the day. Elemental composition was determined using the energy-dispersive X-ray fluorescence technique. Correlations and principal component analysis with varimax rotation were used to identify major sources of aerosol particles. At all three locations, soil was found to be the main source of aerosol particles. A copper-nickel mine and smelter at Selibe-Phikwe was found to be not only a source of copper and nickel particles in Selibe-Phikwe but also a source of these particles in places as far away as Serowe. In Selibe-Phikwe and Francistown, car exhaust was found to be the major source of fine particles of lead and bromine.
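Principal component analysis of standardized element concentrations is the core of the apportionment step. The sketch below builds a toy data set with two latent sources (crustal and smelter-like) and extracts loadings from the correlation matrix; the element names and mixing ratios are invented, and the varimax rotation used in the paper is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Two latent "sources" mixing into four measured elements (hypothetical).
soil = rng.lognormal(size=n)
smelter = rng.lognormal(size=n)
X = np.column_stack([
    3.0*soil    + 0.1*rng.normal(size=n),   # Si (crustal)
    2.0*soil    + 0.1*rng.normal(size=n),   # Fe (crustal)
    1.5*smelter + 0.1*rng.normal(size=n),   # Cu (smelter)
    1.2*smelter + 0.1*rng.normal(size=n),   # Ni (smelter)
])

# Standardize, then eigen-decompose the correlation matrix.
Z = (X - X.mean(0)) / X.std(0)
corr = Z.T @ Z / len(Z)
evals, evecs = np.linalg.eigh(corr)          # eigh returns ascending order
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
loadings = evecs * np.sqrt(evals)            # component loadings
```

With two real sources, the first two components should carry nearly all the variance, and their loadings separate the crustal pair (Si, Fe) from the smelter pair (Cu, Ni).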
Electromagnetic geophysical tunnel detection experiments---San Xavier Mine Facility, Tucson, Arizona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayland, J.R.; Lee, D.O.; Shope, S.M.
1991-02-01
The objective of this work is to develop a general method for remotely sensing the presence of tunneling activities using one or more boreholes and a combination of surface sources. New techniques for the detection and location of tunnels containing no metal and of tunnels containing only a small-diameter wire have been experimentally demonstrated. A downhole magnetic dipole and surface loop sources were used as the current sources. The presence of a tunnel causes a subsurface scattering of the field components created by the source. Ratioing of the measured responses enhanced the detection and location capability over that produced by each of the sources individually.
Three dimensional volcano-acoustic source localization at Karymsky Volcano, Kamchatka, Russia
NASA Astrophysics Data System (ADS)
Rowell, Colin
We test two methods of 3-D acoustic source localization on volcanic explosions and small-scale jetting events at Karymsky Volcano, Kamchatka, Russia. Recent infrasound studies have provided evidence that volcanic jets produce low-frequency aerodynamic sound (jet noise) similar to that from man-made jet engines. Man-made jets are known to produce sound through turbulence along the jet axis, but discrimination of sources along the axis of a volcanic jet requires a network of sufficient topographic relief to attain resolution in the vertical dimension. At Karymsky Volcano, the topography of an eroded edifice adjacent to the active cone provided a platform for the atypical deployment of five infrasound sensors with intra-network relief of ~600 m in July 2012. A novel 3-D inverse localization method, srcLoc, is tested and compared against a more common grid-search semblance technique. Simulations using synthetic signals indicate that srcLoc is capable of determining vertical source locations for this network configuration to within ±150 m or better. However, srcLoc locations for explosions and jetting at Karymsky Volcano show a persistent overestimation of source elevation and underestimation of sound speed by an average of ~330 m and 25 m/s, respectively. The semblance method is able to produce more realistic source locations by fixing the sound speed to expected values of 335-340 m/s. The consistency of location errors for both explosions and jetting activity over a wide range of wind and temperature conditions points to the influence of topography. Explosion waveforms exhibit amplitude relationships and waveform distortion strikingly similar to those theorized by modeling studies of wave diffraction around the crater rim. We suggest that the delay of signals and the apparently elevated source locations are due to altered raypaths and/or crater diffraction effects. 
Our results suggest the influence of topography in the vent region must be accounted for when attempting 3-D volcano acoustic source localization. Though the data presented here are insufficient to resolve noise sources for these jets, which are much smaller in scale than those of previous volcanic jet noise studies, similar techniques may be successfully applied to large volcanic jets in the future.
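The grid-search semblance technique that srcLoc was compared against can be sketched as follows: shift each station trace by the travel time predicted for a candidate source, then score the coherence of the stack. The station geometry, pulse shape, and 2-D (rather than 3-D) grid below are simplifications for illustration.

```python
import numpy as np

fs, c = 100.0, 340.0     # sample rate (Hz) and sound speed (m/s), assumed
stations = np.array([[0., 0.], [800., 0.], [0., 800.], [800., 800.]])  # m
true_src = np.array([300., 500.])

# Synthetic records: one Gaussian pulse per station, delayed by travel time.
t = np.arange(0, 6, 1/fs)
delays = np.linalg.norm(stations - true_src, axis=1) / c
traces = np.array([np.exp(-((t - 1.0 - d)*8)**2) for d in delays])

def semblance(src):
    # Align traces by the travel times predicted for a candidate source,
    # then measure coherence of the stack (1.0 = perfect alignment).
    d = np.linalg.norm(stations - src, axis=1) / c
    shifts = np.round((d - d.min())*fs).astype(int)
    aligned = np.array([np.roll(tr, -s) for tr, s in zip(traces, shifts)])
    return (aligned.sum(0)**2).sum() / (len(traces)*(aligned**2).sum())

# Grid search: the candidate with maximal semblance is the location estimate.
xs = np.arange(0., 801., 100.)
best = max(((x, y) for x in xs for y in xs), key=lambda p: semblance(np.array(p)))
```

In a volcanic setting the predicted travel times would also depend on source elevation and effective sound speed, which is where the topographic biases discussed above enter.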
A phase coherence approach to identifying co-located earthquakes and tremor
NASA Astrophysics Data System (ADS)
Hawthorne, J. C.; Ampuero, J.-P.
2018-05-01
We present and use a phase coherence approach to identify seismic signals that have similar path effects but different source time functions: co-located earthquakes and tremor. The method used is a phase coherence-based implementation of empirical matched field processing, modified to suit tremor analysis. It works by comparing the frequency-domain phases of waveforms generated by two sources recorded at multiple stations. We first cross-correlate the records of the two sources at a single station. If the sources are co-located, this cross-correlation eliminates the phases of the Green's function. It leaves the relative phases of the source time functions, which should be the same across all stations so long as the spatial extent of the sources is small compared with the seismic wavelength. We therefore search for cross-correlation phases that are consistent across stations as an indication of co-located sources. We also introduce a method to obtain relative locations between the two sources, based on back-projection of interstation phase coherence. We apply this technique to analyse two tremor-like signals that are thought to be composed of a number of earthquakes. First, we analyse a 20 s long seismic precursor to a M 3.9 earthquake in central Alaska. The analysis locates the precursor to within 2 km of the mainshock, and it identifies several bursts of energy (potentially foreshocks or groups of foreshocks) within the precursor. Second, we examine several minutes of volcanic tremor prior to an eruption at Redoubt Volcano. We confirm that the tremor source is located close to repeating earthquakes identified earlier in the tremor sequence. The amplitude of the tremor diminishes about 30 s before the eruption, but the phase coherence results suggest that the tremor may persist at some level through this final interval.
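The core of the phase coherence measure can be sketched in a few lines: for co-located sources the station-wise cross-spectrum phase is identical at every station, so the magnitude of the station-averaged unit phasor approaches one. The synthetic Green's functions and source time functions below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sta, n = 6, 512
green = rng.normal(size=(n_sta, n))   # path effect (Green's function) per station
s1 = rng.normal(size=n)               # source time function of event 1
s2 = rng.normal(size=n)               # source time function of event 2

def records(src, paths):
    # Each record is the source time function convolved with the station path.
    return np.array([np.fft.ifft(np.fft.fft(g)*np.fft.fft(src)).real for g in paths])

U1 = np.fft.fft(records(s1, green), axis=1)
U2 = np.fft.fft(records(s2, green), axis=1)     # co-located: same paths as U1
# Station-wise cross-spectra: the path phase cancels for co-located sources.
phase = (U1*np.conj(U2)) / np.abs(U1*np.conj(U2))
coh = np.abs(phase.mean(axis=0))                # 1.0 = phases agree at all stations

# Contrast: the second source seen through *different* paths (not co-located).
U3 = np.fft.fft(records(s2, rng.normal(size=(n_sta, n))), axis=1)
phase_far = (U1*np.conj(U3)) / np.abs(U1*np.conj(U3))
coh_far = np.abs(phase_far.mean(axis=0))
</```

For co-located sources the coherence sits near one at every frequency; for sources with unrelated paths the averaged phasors partially cancel and the coherence drops toward the random-phase floor.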
A study on locating the sonic source of sinusoidal magneto-acoustic signals using a vector method.
Zhang, Shunqi; Zhou, Xiaoqing; Ma, Ren; Yin, Tao; Liu, Zhipeng
2015-01-01
Methods based on the magneto-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The continuous wave method, which is commonly used, can only detect the current amplitude, not the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, its low measurement accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magneto-acoustic signal based on the continuous sine wave mode. The study includes theoretical modeling of the vector method, simulations of a line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contain the location information of the sonic source, and that they obey the vector theory in the complex plane. This study lays a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids in the study of biological current detection and reconstruction based on the magneto-acoustic effect.
Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach
NASA Astrophysics Data System (ADS)
Schumacher, Thomas; Straub, Daniel; Higgins, Christopher
2012-09-01
Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
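A minimal sketch of the Bayesian idea, assuming a homogeneous plate, known wave speed, and Gaussian arrival-time noise (the paper's forward model and priors are richer): a Metropolis-Hastings random walk draws posterior samples of the AE source coordinates rather than a single best-fit point.

```python
import numpy as np

rng = np.random.default_rng(2)
sensors = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])   # sensor positions (m)
v = 4000.0        # assumed wave speed (m/s)
sigma = 5e-6      # assumed arrival-time noise (s)
true_src = np.array([0.4, 0.6])
arrivals = np.linalg.norm(sensors - true_src, axis=1)/v \
           + rng.normal(0, sigma, len(sensors))

def log_post(x):
    # Flat prior on the unit plate; likelihood on first-sensor-differenced
    # arrivals, which removes the unknown origin time.
    if np.any(x < 0) or np.any(x > 1):
        return -np.inf
    pred = np.linalg.norm(sensors - x, axis=1)/v
    r = (arrivals - arrivals[0]) - (pred - pred[0])
    return -np.sum(r**2) / (2*(2*sigma**2))   # differenced noise variance ~ 2*sigma^2

# Metropolis-Hastings random walk over the source coordinates.
x = np.array([0.5, 0.5])
lp = log_post(x)
samples = []
for i in range(20000):
    prop = x + rng.normal(0, 0.01, 2)
    lpp = log_post(prop)
    if np.log(rng.random()) < lpp - lp:
        x, lp = prop, lpp
    if i >= 5000:                 # discard burn-in
        samples.append(x.copy())
post_mean = np.mean(samples, axis=0)
```

The spread of `samples` plays the role of the posterior PDFs discussed in the abstract: it quantifies location uncertainty instead of reporting only a point estimate.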
Method and system for determining radiation shielding thickness and gamma-ray energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klann, Raymond T.; Vilim, Richard B.; de la Barrera, Sergio
2015-12-15
A system and method for determining the shielding thickness of a detected radiation source. The gamma ray spectrum of a radiation detector is utilized to estimate the shielding between the detector and the radiation source. The determination of the shielding may be used to adjust the information from known source-localization techniques to provide improved performance and accuracy of locating the source of radiation.
NASA Astrophysics Data System (ADS)
Upton, D. W.; Saeed, B. I.; Mather, P. J.; Lazaridis, P. I.; Vieira, M. F. Q.; Atkinson, R. C.; Tachtatzis, C.; Garcia, M. S.; Judd, M. D.; Glover, I. A.
2018-03-01
Monitoring of partial discharge (PD) activity within high-voltage electrical environments is increasingly used for the assessment of insulation condition. Traditional measurement techniques employ technologies that either require off-line installation or have high power consumption and are hence costly. A wireless sensor network is proposed that utilizes only received signal strength to locate areas of PD activity within a high-voltage electricity substation. The network comprises low-power and low-cost radiometric sensor nodes which receive the radiation propagated from a source of PD. Results are reported from several empirical tests performed within a large indoor environment and a substation environment using a network of nine sensor nodes. A portable PD source emulator was placed at multiple locations within the network. Signal strength measured by the nodes is reported via WirelessHART to a data collection hub where it is processed using a location algorithm. The results obtained place the measured location within 2 m of the actual source location.
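Received-signal-strength localization of the kind described can be sketched with a log-distance path-loss model and a grid search; the node layout, reference power, path-loss exponent, and noiseless measurements below are all assumptions for illustration, not the authors' algorithm.

```python
import math

# Hypothetical 3x3 grid of sensor nodes (m) and a PD source emulator position.
nodes = [(0, 0), (20, 0), (40, 0), (0, 20), (20, 20), (40, 20),
         (0, 40), (20, 40), (40, 40)]
src = (26.0, 14.0)
p0, n_exp = -30.0, 2.0   # dBm at 1 m and path-loss exponent (assumed values)

def rss(node, s):
    # Log-distance path-loss model for received signal strength (dBm).
    d = math.dist(node, s)
    return p0 - 10*n_exp*math.log10(max(d, 1e-6))

meas = [rss(nd, src) for nd in nodes]   # noiseless "measurements"

# Grid search for the location that best explains the measured strengths.
best, best_err = None, float('inf')
step = 0.5
for i in range(81):
    for j in range(81):
        cand = (i*step, j*step)
        err = sum((rss(nd, cand) - m)**2 for nd, m in zip(nodes, meas))
        if err < best_err:
            best, best_err = cand, err
```

Real nodes report noisy RSS and the path-loss exponent must be calibrated for the site, so the practical accuracy is meters rather than exact, consistent with the ~2 m figure reported.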
Ghost Images in Helioseismic Holography? Toy Models in a Uniform Medium
NASA Astrophysics Data System (ADS)
Yang, Dan
2018-02-01
Helioseismic holography is a powerful technique used to probe the solar interior based on estimations of the 3D wavefield. Porter-Bojarski holography, a well-established method used in acoustics to recover sources and scatterers in 3D, is also an estimation of the wavefield, and hence it has the potential to be applied to helioseismology. Here we present a proof-of-concept study, where we compare helioseismic holography and Porter-Bojarski holography under the assumption that the waves propagate in a homogeneous medium. We consider the problem of locating a point source of wave excitation inside a sphere. Under these assumptions, we find that the two imaging methods have the same capability of locating the source, with the exception that helioseismic holography suffers from "ghost images" (i.e., artificial peaks away from the source location). We conclude that Porter-Bojarski holography may improve the method currently used in helioseismology.
Detection, Source Location, and Analysis of Volcano Infrasound
NASA Astrophysics Data System (ADS)
McKee, Kathleen F.
The study of volcano infrasound focuses on low-frequency sound from volcanoes, how volcanic processes produce it, and the path it travels from the source to our receivers. In this dissertation we focus on detecting, locating, and analyzing infrasound from a number of different volcanoes using a variety of analysis techniques. These works will help inform future volcano monitoring using infrasound with respect to infrasonic source location, signal characterization, volatile flux estimation, and determination of the back-azimuth to the source. Source location is an important component of the study of volcano infrasound and of its application to volcano monitoring. Semblance is a forward grid-search technique and a common source location method in infrasound studies as well as in seismology. We evaluated the effectiveness of semblance in the presence of significant topographic features for explosions of Sakurajima Volcano, Japan, while taking into account temperature and wind variations. We show that topographic obstacles at Sakurajima cause a semblance source location offset of 360-420 m to the northeast of the actual source location. In addition, we found that, despite the consistent offset in source location, semblance can still be a useful tool for determining periods of volcanic activity. Infrasonic signal characterization follows signal detection and source location in volcano monitoring in that it informs us of the type of volcanic activity detected. In large volcanic eruptions the lowermost portion of the eruption column is momentum-driven and termed the volcanic jet or gas-thrust zone. This turbulent fluid flow perturbs the atmosphere and produces a sound similar to that of jet and rocket engines, known as jet noise. We deployed an array of infrasound sensors near an accessible, less hazardous, fumarolic jet at Aso Volcano, Japan as an analogue to large, violent volcanic eruption jets. 
We recorded volcanic jet noise at 57.6° from vertical, a recording angle not normally feasible in volcanic environments. The fumarolic jet noise was found to have a sustained, low-amplitude signal with a spectral peak between 7 and 10 Hz. From thermal imagery we measured the jet temperature (~260 °C) and estimated the jet diameter (~2.5 m). From the estimated jet diameter, an assumed Strouhal number of 0.19, and the jet noise peak frequency, we estimated the jet velocity to be 79-132 m/s. We used published gas data to then estimate the volatile flux at 160-270 kg/s (14,000-23,000 t/d). These estimates are typically difficult to obtain in volcanic environments, but provide valuable information on the eruption. At regional and global length scales we use infrasound arrays to detect signals and determine their source back-azimuths. A ground-coupled airwave (GCA) occurs when an incident acoustic pressure wave encounters the Earth's surface and part of the energy of the wave is transferred to the ground. GCAs are commonly observed from sources such as volcanic eruptions, bolides, meteors, and explosions. They have been observed to have retrograde particle motion. When recorded on collocated seismo-acoustic sensors, the phase between the infrasound and seismic signals is 90°. If the sensors are separated, wind noise is usually incoherent and an additional phase is added due to the sensor separation. We utilized the additional phase and the characteristic particle motion to determine a unique back-azimuth solution to an acoustic source. The additional phase will be different depending on the direction from which a wave arrives. Our technique was tested using synthetic seismo-acoustic data from a coupled Earth-atmosphere 3D finite-difference code and then applied to two well-constrained volcano datasets: Mount St. Helens, USA, and Mount Pagan, Commonwealth of the Northern Mariana Islands. 
The results from our method are within 1°-5° of the actual back-azimuths and of those determined by traditional infrasound array processing. Ours is a new method for detecting and determining the back-azimuth of infrasonic signals, which will be useful when financial and spatial resources are limited.
NASA Technical Reports Server (NTRS)
Chrest, Anne; Daprato, Rebecca; Burcham, Michael; Johnson, Jill
2018-01-01
The National Aeronautics and Space Administration (NASA), Kennedy Space Center (KSC), has adopted high-resolution site characterization (HRSC) sampling techniques during baseline sampling prior to implementation of remedies to confirm and refine the conceptual site model (CSM). HRSC sampling was performed at Contractors Road Heavy Equipment Area (CRHE) prior to bioremediation implementation to verify the extent of the trichloroethene (TCE) dense non-aqueous phase liquid (DNAPL) source area (defined as the area with TCE concentrations above 1% solubility) and its daughter-product dissolved plume that had been identified during previous HRSC events. The results of HRSC pre-bioremediation implementation sampling suggested that the TCE source area was larger than originally identified during initial site characterization activities, leading to a design refinement to improve electron donor distribution and increase the likelihood of achieving remedial objectives. Approach/Activities: HRSC was conducted from 2009 through 2014 to delineate the vertical and horizontal extent of chlorinated volatile organic compounds (CVOCs) in the groundwater. Approximately 2,340 samples were collected from 363 locations using direct push technology (DPT) groundwater sampling techniques. Samples were collected from up to 14 depth intervals at each location using a 4-foot sampling screen. This HRSC approach identified a narrow (approx. 5 to 30 feet wide), approximately 3,000 square foot TCE DNAPL source area (maximum detected TCE concentration of 160,000 micrograms per liter [micro-g/L] at DPT sampling location DPT0225). Prior to implementation of a bioremediation interim measure, HRSC baseline sampling was conducted using DPT groundwater sampling techniques. Concentrations of TCE were an order of magnitude lower than previously reported (12,000 micro-g/L maximum at DPT sampling location DPT0225) at locations sampled adjacent to previous sampling locations. 
To further evaluate the observed variability in concentrations, additional sampling was conducted in 2016. The results identified higher concentrations than originally detected within the previously defined source area and the presence of source zone concentrations upgradient of the previously defined source area (maximum observed concentration of 570,000 micro-g/L). The HRSC baseline sampling data allowed for a revision of the bioremediation design prior to implementation. Bioremediation was implemented within the eastern portion of the source area in November and December 2016 and quarterly performance monitoring was completed in March and June 2017. Reductions in CVOC concentrations from baseline were observed at all performance monitoring wells in the treatment area, and by June 2017, an approximate 95% CVOC mass reduction was observed based on monitoring well sampling results. Results/Lessons Learned: The results of this project suggest that, due to the complexity of DNAPL source zones, HRSC during pre-implementation baseline sampling in the TCE source zone was an essential strategy for verifying the treatment area and depth prior to remedy implementation. If the upgradient source zone mass had not been identified prior to bioremediation implementation, the mass would have served as a long-term source for the dissolved plume.
NASA Technical Reports Server (NTRS)
Jaeck, C. L.
1976-01-01
A test was conducted in the Boeing Large Anechoic Chamber to determine static jet noise source locations of six baseline and suppressor nozzle models, and establish a technique for extrapolating near field data into the far field. The test covered nozzle pressure ratios from 1.44 to 2.25 and jet velocities from 412 to 594 m/s at a total temperature of 844 K.
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open Software Foundation's Distributed Computing Environment and the other is based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with the two traditional styles of information seeking: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
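The two-level resolution scheme (call number to virtual shelf to network location) can be illustrated with two dictionaries; every call number, shelf identifier, and host name below is invented.

```python
# Illustrative two-level resolution. Call numbers (derived from standard
# vocabulary codes) map to a location-independent subject-class server id;
# a location directory then maps that id to its current network address.
# All identifiers and hosts here are hypothetical.
call_number_to_shelf = {
    "QS-504": "shelf:anatomy",
    "WG-100": "shelf:cardiology",
}
location_directory = {
    "shelf:anatomy":    "host-a.example.org:8080",
    "shelf:cardiology": "host-b.example.org:8080",
}

def resolve(call_number):
    shelf = call_number_to_shelf[call_number]   # location-independent step
    return location_directory[shelf]            # location-dependent, late-bound

addr = resolve("WG-100")
```

Because only the location directory changes when a server moves, the call numbers assigned to information sources remain stable, which is the point of the design.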
Source Identification and Location Techniques
NASA Technical Reports Server (NTRS)
Weir, Donald; Bridges, James; Agboola, Femi; Dougherty, Robert
2001-01-01
Mr. Weir presented source location results obtained from an engine test as part of the Engine Validation of Noise Reduction Concepts program. Two types of microphone arrays were used in this program to determine the jet noise source distribution for the exhaust from a 4.3 bypass ratio turbofan engine. One was a linear array of 16 microphones located on a 25 ft. sideline and the other was a 103-microphone 3-D "cage" array in the near field of the jet. Data were obtained from a baseline nozzle and from numerous nozzle configurations using chevrons and/or tabs to reduce the jet noise. Mr. Weir presented data from two configurations: the baseline nozzle and a nozzle configuration with chevrons on both the core and bypass nozzles. This chevron configuration had achieved a jet noise reduction of 4 EPNdB in small-scale tests conducted at the Glenn Research Center. IR imaging showed that the chevrons produced significant improvements in mixing and greatly reduced the length of the jet potential core. Comparison of source location data from the 1-D phased array showed a shift of the noise sources towards the nozzle and clear reductions of the sources due to the noise reduction devices. Data from the 3-D array showed a single source at a frequency of 125 Hz located several diameters downstream from the nozzle exit. At 250 and 400 Hz, multiple sources, periodically spaced, appeared to exist downstream of the nozzle. The trend of source location moving toward the nozzle exit with increasing frequency was also observed. The 3-D array data also showed a reduction in source strength with the addition of chevrons. The overall trend of source location with frequency was compared for the two arrays and with classical experience. Similar trends were observed. Although overall trends with frequency and the addition of suppression devices were consistent between the data from the 1-D and the 3-D arrays, a comparison of the details of the inferred source locations did show differences. 
A flight test is planned to determine if the hardware tested statically will achieve similar reductions in flight.
NASA Astrophysics Data System (ADS)
Zhang, Yong; Sun, HongGuang; Lu, Bingqing; Garrard, Rhiannon; Neupauer, Roseanna M.
2017-09-01
Backward models have been applied for four decades by hydrologists to identify the source of pollutants undergoing Fickian diffusion, while analytical tools are not available for source identification of super-diffusive pollutants undergoing decay. This technical note evaluates analytical solutions for the source location and release time of a decaying contaminant undergoing super-diffusion using backward probability density functions (PDFs), where the forward model is the space-fractional advection-dispersion equation with decay. A revisit of the well-known MADE-2 tracer test using parameter analysis shows that the peak backward location PDF can predict the tritium source location, while the peak backward travel time PDF underestimates the tracer release time due to the early arrival of tracer particles at the detection well in the maximally skewed, super-diffusive transport. In addition, the first-order decay adds additional skewness toward earlier arrival times in backward travel time PDFs, resulting in a younger release time, although this impact is minimized at the MADE-2 site because tritium's half-life is relatively long compared with the monitoring period. The main conclusion is that, while non-trivial backward techniques are required to identify the pollutant source location, the pollutant release time can and should be directly estimated given the speed of the peak resident concentration for super-diffusive pollutants with or without decay.
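For contrast with the fractional-order case, the backward location PDF has a simple closed form in the classical Fickian limit: a Gaussian in the source coordinate peaked a distance v·τ upstream of the detection point, with the first-order decay factor cancelling on normalization. The parameter values below are arbitrary.

```python
import math

# Backward location PDF for the classical (Fickian) 1-D ADE with first-order
# decay; parameters are illustrative, not MADE-2 values.
v, D = 0.5, 0.1            # velocity (m/d) and dispersion coefficient (m^2/d)
x_det, tau = 100.0, 150.0  # detection location (m) and backward time (d)

def backward_location_pdf(x_s):
    # Gaussian in the source coordinate with mean x_det - v*tau and
    # variance 2*D*tau; the decay factor exp(-lambda*tau) is constant in
    # x_s and cancels when the PDF is normalized.
    mu = x_det - v*tau
    return math.exp(-(x_s - mu)**2 / (4*D*tau)) / math.sqrt(4*math.pi*D*tau)

peak = x_det - v*tau   # most likely source location
```

In the super-diffusive (space-fractional) case treated in the note, this symmetric Gaussian is replaced by a maximally skewed stable density, which is what shifts the backward travel time estimates.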
Quantification of Methane Source Locations and Emissions in AN Urban Setting
NASA Astrophysics Data System (ADS)
Crosson, E.; Richardson, S.; Tan, S. M.; Whetstone, J.; Bova, T.; Prasad, K. R.; Davis, K. J.; Phillips, N. G.; Turnbull, J. C.; Shepson, P. B.; Cambaliza, M. L.
2011-12-01
The regulation of methane emissions from urban sources such as landfills and waste-water treatment facilities is currently a highly debated topic in the US and in Europe. This interest is fueled, in part, by recent measurements indicating that urban emissions are a significant source of methane (CH4) and in fact may be substantially higher than current inventory estimates (1). As a result, developing methods for locating and quantifying emissions from urban methane sources is of great interest to landfill and wastewater treatment facility owners, watchdog groups, and the governmental agencies seeking to evaluate or enforce regulations. In an attempt to identify major methane source locations and emissions in Boston, Indianapolis, and the Bay Area, systematic measurements of CH4 concentrations and meteorological data were made at street level using a vehicle-mounted cavity ring-down analyzer. A number of discrete sources were detected at concentration levels in excess of 15 times background levels. Using Gaussian plume models as well as tomographic techniques, methane source locations and emission rates will be presented. In addition, flux chamber measurements of discrete sources such as those found in natural gas leaks will also be presented. (1) Wunch, D., P.O. Wennberg, G.C. Toon, G. Keppel-Aleks, and Y.G. Yavin, Emissions of Greenhouse Gases from a North American Megacity, Geophysical Research Letters, Vol. 36, L15810, doi:10.1029/2009GL039825, 2009.
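A ground-reflected Gaussian plume model of the kind used for emission-rate estimation can be sketched as follows; the dispersion parameters and measured concentration are made-up values, and the key property is that concentration is linear in the emission rate, which makes the inversion trivial.

```python
import math

def plume_conc(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration with ground reflection (image source).
    q: emission rate (g/s), u: wind speed (m/s), y: crosswind offset (m),
    z: receptor height (m), h: effective release height (m),
    sigma_y/sigma_z: dispersion parameters at this downwind distance (m)."""
    lateral = math.exp(-y**2 / (2*sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2*sigma_z**2))
                + math.exp(-(z + h)**2 / (2*sigma_z**2)))  # image-source term
    return q / (2*math.pi*u*sigma_y*sigma_z) * lateral * vertical

# Inverse use: given a concentration measured at a receptor, solve for q.
c_meas = 2.0e-4   # hypothetical measured concentration (g/m^3), plume centreline
q_est = c_meas / plume_conc(1.0, 3.0, 0.0, 0.0, 2.0, 15.0, 8.0)
```

In practice the dispersion parameters depend on atmospheric stability and downwind distance, and repeated transects through the plume are used to constrain them before inverting for the source strength.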
Towards an accurate real-time locator of infrasonic sources
NASA Astrophysics Data System (ADS)
Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.
2017-11-01
Infrasonic signals propagate from an atmospheric source via media with stochastic and rapidly space-varying conditions. Hence, their travel time, their amplitude at the sensor recordings, and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a reduced computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate, provided an adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads us to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histogram" (CRH). The other derives from previous findings of a linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. 
This stochastic model is extended here to regional distances of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel-time model and range-dependent probability distributions of the phase arrival-time picks. To illustrate the improvements achieved in both computation time and location accuracy, we compare location results for the new algorithms, previously published BISL-type algorithms, and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and a statistical experiment using a database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR) recorded during the US summer season at USArray transportable seismic stations when they were near the site between 2006 and 2008.
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased-array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response (MVDR) and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
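The MVDR (minimum variance distortionless response) step with near-field spherical-wave steering can be sketched as below; the line-array geometry, frequency, and noise level are invented, and the spatial-window refinements of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
f, c = 1000.0, 343.0                # frequency (Hz) and sound speed (m/s), assumed
k = 2*np.pi*f/c                     # wavenumber
mics = np.stack([np.linspace(-0.5, 0.5, 8), np.zeros(8)], axis=1)  # 8-mic line array

def manifold(src):
    # Near-field (spherical-wave) steering vector toward a candidate source.
    r = np.linalg.norm(mics - src, axis=1)
    return np.exp(-1j*k*r) / r

true_src = np.array([0.1, 1.0])     # source 1 m in front of the array
a_true = manifold(true_src)

# Simulated snapshots: narrowband source through the manifold plus sensor noise.
n_snap = 200
s = (rng.normal(size=n_snap) + 1j*rng.normal(size=n_snap)) / np.sqrt(2)
noise = 0.05*(rng.normal(size=(8, n_snap)) + 1j*rng.normal(size=(8, n_snap)))
X = np.outer(a_true, s) + noise
R = X @ X.conj().T / n_snap         # sample covariance matrix

def mvdr_power(scan):
    # MVDR spatial spectrum: P(scan) = 1 / (a^H R^-1 a).
    a = manifold(scan)
    return 1.0 / np.real(a.conj() @ np.linalg.solve(R, a))

# Scan a line of candidate positions at 1 m range.
xs = np.linspace(-0.4, 0.4, 17)
x_hat = xs[int(np.argmax([mvdr_power(np.array([x, 1.0])) for x in xs]))]
```

Because MVDR adaptively nulls energy from mismatched steering vectors, the spatial spectrum is much sharper than the fixed (delay-and-sum) focusing it is compared against, at the cost of sensitivity to covariance estimation errors.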
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests were conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the degree of perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
Combined mine tremors source location and error evaluation in the Lubin Copper Mine (Poland)
NASA Astrophysics Data System (ADS)
Leśniak, Andrzej; Pszczoła, Grzegorz
2008-08-01
A modified method for locating mine tremors, used in the Lubin Copper Mine, is presented in this paper. In mines where intensive exploitation is carried out, a high-accuracy source location technique is usually required; the flatness of the geophone array, the complex geological structure of the rock mass, and intense exploitation make location results ambiguous in such mines. This paper presents an effective method of source location and location-error evaluation that combines data from two different geophone arrays. The first consists of uniaxial geophones distributed across the whole mine area; the second is installed in one of the mining panels and consists of triaxial geophones. Using the data from the triaxial geophones increases the precision of the hypocenter's vertical coordinate. The presented two-step location procedure combines two standard location methods: P-wave directions and P-wave arrival times. The efficiency of the algorithm was tested using computer simulations. The algorithm is fully non-linear and was tested on a multilayered rock-mass model of the Lubin Copper Mine, showing better computational efficiency than the traditional P-wave arrival-time location algorithm. We present the complete procedure that effectively solves the non-linear location problems, i.e. mine tremor location and evaluation of error propagation.
Impedance Eduction in Ducts with Higher-Order Modes and Flow
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.
2009-01-01
An impedance eduction technique, previously validated for ducts with plane waves at the source and duct termination planes, has been extended to support higher-order modes at these locations. Inputs for this method are the acoustic pressures along the source and duct termination planes, and along a microphone array located in a wall either adjacent or opposite to the test liner. A second impedance eduction technique is then presented that eliminates the need for the microphone array. The integrity of both methods is tested using three sound sources, six Mach numbers, and six selected frequencies. Results are presented for both a hardwall and a test liner (with known impedance) consisting of a perforated plate bonded to a honeycomb core. The primary conclusion of the study is that the second method performs well in the presence of higher-order modes and flow. However, the first method performs poorly when most of the microphones are located near acoustic pressure nulls. The negative effects of the acoustic pressure nulls can be mitigated by a judicious choice of the mode structure in the sound source. The paper closes by using the first impedance eduction method to design a rectangular array of 32 microphones for accurate impedance eduction in the NASA LaRC Curved Duct Test Rig in the presence of expected measurement uncertainties, higher order modes, and mean flow.
Joint Inversion of Source Location and Source Mechanism of Induced Microseismics
NASA Astrophysics Data System (ADS)
Liang, C.
2014-12-01
Seismic source mechanisms are useful indicators of source physics and of stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid-fracturing treatments in the oil and gas industry. For events large enough to show clear waveforms, several techniques can be applied to invert the source mechanism, including waveform inversion, first-motion polarity inversion, and many variants of these methods. However, for events too small to identify in seismic traces, such as the microseismicity induced by fluid fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at the cost of a high computational budget; the resulting algorithm is called the Source Location and Mechanism Scanning Algorithm (SLMSA). For a given velocity structure, all possible combinations of source location (X, Y, Z) and source mechanism (strike, dip, rake) are used to compute travel times and waveform polarities. After correcting normal-moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is taken as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. 
The SLMSA has also been applied to a fluid-fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly locate events because positive and negative polarized traces cancel out, whereas SLMSA successfully picks up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures, and the statistics of source mechanisms provide additional knowledge of fracture orientation; (3) in our practice, the joint inversion almost always yields more events than the location-only method, and for events also picked by the SSA method, the stacking power of SLMSA is consistently higher than that obtained with SSA.
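The scan-and-stack logic of SLMSA can be illustrated with a toy one-dimensional example: every candidate (location, polarity-pattern) pair is used to moveout-correct and polarity-correct the traces before stacking, and the pair with the strongest stack wins. The geometry, velocity, and two-pattern "mechanism" search space below are hypothetical stand-ins for the full (X, Y, Z, strike, dip, rake) scan.

```python
import numpy as np

rng = np.random.default_rng(0)
c, dt, nt = 3.0, 0.01, 400                       # km/s, s, samples
stations = np.array([0.0, 2.0, 4.0, 6.0])        # km along a line
true_x = 3.0
true_signs = np.array([1, -1, 1, -1])            # shear-like polarity flips

t = np.arange(nt) * dt
traces = np.zeros((len(stations), nt))
for i, (s, sg) in enumerate(zip(stations, true_signs)):
    t0 = abs(s - true_x) / c                     # P travel time
    traces[i] = sg * np.exp(-((t - t0 - 0.5) / 0.02) ** 2)  # polarized pulse
traces += 0.05 * rng.standard_normal(traces.shape)

def stack_power(x, signs):
    """Moveout-correct, polarity-correct, stack, and take the peak."""
    stack = np.zeros(nt)
    for i, s in enumerate(stations):
        shift = int(round(abs(s - x) / c / dt))
        stack += signs[i] * np.roll(traces[i], -shift)
    return np.max(np.abs(stack))

candidates = [(x, signs) for x in np.linspace(0.0, 6.0, 61)
              for signs in ([1, 1, 1, 1], [1, -1, 1, -1])]
best = max(candidates, key=lambda cand: stack_power(cand[0], np.array(cand[1])))
# best recovers x near 3.0 with the alternating polarity pattern
```

Note how the all-positive pattern cancels the alternating-polarity pulses on stacking, which is exactly why a location-only scan struggles with shear sources.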
NASA Technical Reports Server (NTRS)
1974-01-01
NASA technology contributions to create energy sources include direct solar heating and cooling systems, wind generation of electricity, solar thermal energy turbine drives, solar cells, and techniques for locating, producing, and collecting organic materials for conversion into fuel.
Back-Projection Imaging of extended, diffuse seismic sources in volcanic and hydrothermal systems
NASA Astrophysics Data System (ADS)
Kelly, C. L.; Lawrence, J. F.; Beroza, G. C.
2017-12-01
Volcanic and hydrothermal systems exhibit a wide range of seismicity that is directly linked to fluid and volatile activity in the subsurface and that can be indicative of imminent hazardous activity. Seismograms recorded near volcanic and hydrothermal systems typically contain "noisy" records, but in fact, these complex signals are generated by many overlapping low-magnitude displacements and pressure changes at depth. Unfortunately, excluding times of high-magnitude eruptive activity that typically occur infrequently relative to the length of a system's entire eruption cycle, these signals often have very low signal-to-noise ratios and are difficult to identify and study using established seismic analysis techniques (i.e. phase-picking, template matching). Arrays of short-period and broadband seismic sensors are proven tools for monitoring short- and long-term changes in volcanic and hydrothermal systems. Time-reversal techniques (i.e. back-projection) that are improved by additional seismic observations have been successfully applied to locating volcano-seismic sources recorded by dense sensor arrays. We present results from a new computationally efficient back-projection method that allows us to image the evolution of extended, diffuse sources of volcanic and hydrothermal seismicity. We correlate short time-window seismograms from receiver-pairs to find coherent signals and propagate them back in time to potential source locations in a 3D subsurface model. The strength of coherent seismic signal associated with any potential source-receiver-receiver geometry is equal to the correlation of the short time-windows of seismic records at appropriate time lags as determined by the velocity structure and ray paths. We stack (sum) all short time-window correlations from all receiver-pairs to determine the cumulative coherence of signals at each potential source location. 
Through stacking, coherent signals from extended and/or repeating sources of short-period energy radiation interfere constructively while background noise interferes destructively, such that the most likely source locations of the observed seismicity are illuminated. We compile results to analyze changes in the distribution and prevalence of these sources throughout a system's entire eruptive cycle.
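A stripped-down version of this pairwise correlation stacking, for an assumed homogeneous velocity and a 1-D station line, looks as follows: the coherence at each candidate grid point is the sum over receiver pairs of the cross-correlation evaluated at the lag predicted by the differential travel time. The numbers below are synthetic, not from the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
c, dt, nt = 2.0, 0.01, 600                       # km/s, s, samples
stations = np.array([0.0, 1.5, 3.0, 4.5])        # km along a line
true_x = 2.2                                     # hypothetical source position

sig = rng.standard_normal(nt)                    # noise-like source signal
records = np.array([np.roll(sig, int(round(abs(s - true_x) / c / dt)))
                    for s in stations])          # delayed copies at each station
records += 0.3 * rng.standard_normal(records.shape)

def coherence(x):
    """Sum pairwise cross-correlations at the lags implied by candidate x."""
    total = 0.0
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            lag = int(round((abs(stations[j] - x) - abs(stations[i] - x)) / c / dt))
            total += np.dot(np.roll(records[i], lag), records[j])
    return total

grid = np.linspace(0.0, 4.5, 46)
best_x = grid[np.argmax([coherence(x) for x in grid])]   # peaks near 2.2
```

Because the source signal is itself noise-like, only the lag structure carries location information; this is what lets the method image "noisy" volcanic tremor sources that defeat phase picking.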
Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring
NASA Astrophysics Data System (ADS)
Yin, J.; Ng, R.; Nakata, N.
2017-12-01
Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is common in southern Oklahoma, produces earthquakes greater than magnitude 2.0. Finding accurate locations and mechanisms of these events provides important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source location is also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which takes advantage of wavefield back-projection for accurate microseismic location. We apply GmRTM to microseismic data collected during hydraulic fracturing to image microseismic source locations and, potentially, fractures. Assuming an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared with HypoDD or P/S travel-time based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.
Simulation of the visual effects of power plant plumes
Evelyn F. Treiman; David B. Champion; Mona J. Wecksung; Glenn H. Moore; Andrew Ford; Michael D. Williams
1979-01-01
The Los Alamos Scientific Laboratory has developed a computer-assisted technique that can predict the visibility effects of potential energy sources in advance of their construction. This technique has been employed in an economic and environmental analysis comparing a single 3000 MW coal-fired power plant with six 500 MW coal-fired power plants located at hypothetical...
Remote sensing techniques aid in preattack planning for fire management
Lucy Anne Salazar
1982-01-01
Remote sensing techniques were investigated as an alternative for documenting selected preattack fire planning information. Locations of fuel models, road systems, and water sources were recorded by Landsat satellite imagery and aerial photography for a portion of the Six Rivers National Forest in northwestern California. The two fuel model groups used were from the...
A review of recent developments in the speciation and location of arsenic and selenium in rice grain
Carey, Anne-Marie; Lombi, Enzo; Donner, Erica; de Jonge, Martin D.; Punshon, Tracy; Jackson, Brian P.; Guerinot, Mary Lou; Price, Adam H.; Meharg, Andrew A.
2014-01-01
Rice is a staple food yet is a significant dietary source of inorganic arsenic, a class 1, nonthreshold carcinogen. Establishing the location and speciation of arsenic within the edible rice grain is essential for understanding the risk and for developing effective strategies to reduce grain arsenic concentrations. Conversely, selenium is an essential micronutrient and up to 1 billion people worldwide are selenium-deficient. Several studies have suggested that selenium supplementation can reduce the risk of some cancers, generating substantial interest in biofortifying rice. Knowledge of selenium location and speciation is important, because the anti-cancer effects of selenium depend on its speciation. Germanic acid is an arsenite/silicic acid analogue, and location of germanium may help elucidate the mechanisms of arsenite transport into grain. This review summarises recent discoveries in the location and speciation of arsenic, germanium, and selenium in rice grain using state-of-the-art mass spectrometry and synchrotron techniques, and illustrates both the importance of high-sensitivity and high-resolution techniques and the advantages of combining techniques in an integrated quantitative and spatial approach. PMID:22159463
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on source estimation. This is described in an adjoint modelling framework and proceeds in three steps. First, the point source parameters (location and intensity) are estimated using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is used to modify the adjoint functions. Source estimation is then repeated with the modified adjoint functions to analyse the effect of the modification. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated in a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement in source retrieval is observed after minimizing the representativity errors.
NASA Astrophysics Data System (ADS)
Yin, Jiuxun; Denolle, Marine A.; Yao, Huajian
2018-01-01
We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to earthquake dynamics of large events. We improve the CS-BP method by an autoadaptive source grid refinement as well as a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and on (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides a spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench while most of the high-frequency content comes from the downdip part of the fault. Interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, either from prestress, fault geometry, and/or frictional properties to explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.
NASA Astrophysics Data System (ADS)
Kuseler, Torben; Lami, Ihsan A.
2012-06-01
This paper proposes a new technique to obfuscate an authentication-challenge program (named LocProg) using randomly generated data together with a client's current location in real time. LocProg can be used to enable any handset application on mobile devices (e.g. mCommerce on smartphones) that requires authentication with a remote authenticator (e.g. a bank). The motivation for this technique is to (a) enhance security against replay attacks, which currently relies on real-time nonces, and (b) add a new security factor, location verified by two independent sources, to challenge/response methods for authentication. To assure a secure live transaction, thus reducing the possibility of replay and other remote attacks, the authors have devised a novel technique to obtain the client's location from two independent sources: GPS on the client's side and the cellular network on the authenticator's side. The LocProg algorithm is based on obfuscating "random elements plus a client's data" with a location-based key generated on the bank's side. LocProg is then sent to the client and is designed to integrate automatically into the target application on the client's handset. The client can then de-obfuscate LocProg if s/he is within a certain range around the location calculated by the bank and if the correct personal data is supplied. LocProg also has features to protect against trial-and-error attacks. Analysis of LocAuth's security (trust, threat and system models) and trials based on a prototype implementation (on the Android platform) prove the viability and novelty of LocAuth.
The electromagnetic radiation from simple sources in the presence of a homogeneous dielectric sphere
NASA Technical Reports Server (NTRS)
Mason, V. B.
1973-01-01
In this research, the effect of a homogeneous dielectric sphere on the electromagnetic radiation from simple sources is treated as a boundary value problem, and the solution is obtained by the technique of dyadic Green's functions. Exact representations of the electric fields in the various regions due to a source located inside, outside, or on the surface of a dielectric sphere are formulated. Particular attention is given to the effect of sphere size, source location, dielectric constant, and dielectric loss on the radiation patterns and directivity of small spheres (less than 5 wavelengths in diameter) using the Huygens' source excitation. The computed results are found to closely agree with those measured for waveguide-excited plexiglas spheres. Radiation patterns for an extended Huygens' source and for curved electric dipoles located on the sphere's surface are also presented. The resonance phenomenon associated with the dielectric sphere is studied in terms of the modal representation of the radiated fields. It is found that when the sphere is excited at certain frequencies, much of the energy is radiated into the sidelobes. The addition of a moderate amount of dielectric loss, however, quickly attenuates this resonance effect. A computer program which may be used to calculate the directivity and radiation pattern of a Huygens' source located inside or on the surface of a lossy dielectric sphere is listed.
Acoustic emission non-destructive testing of structures using source location techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beattie, Alan G.
2013-09-01
The technology of acoustic emission (AE) testing has been advanced and used at Sandia for the past 40 years. AE has been used on structures including pressure vessels, fire bottles, wind turbines, gas wells, nuclear weapons, and solar collectors. This monograph begins with background topics in acoustics and instrumentation and then focuses on current acoustic emission technology. It covers the overall design and system setup for a test, with a wind turbine blade as the object. Test analysis is discussed with an emphasis on source location. Three test examples are presented: two on experimental wind turbine blades and one on aircraft fire extinguisher bottles. Finally, the code for a FORTRAN source location program is given as an example of a working analysis program. Throughout the document, the stress is on actual testing of real structures, not on laboratory experiments.
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses the metaphor of a virtual shelf: a general-purpose server dedicated to a particular information subject class, whose identifier identifies that subject class. Location-independent call numbers, based on standard vocabulary codes, are assigned to information sources. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems, one based on the Open Software Foundation/Distributed Computing Environment and the other on the World Wide Web. CONCLUSIONS: This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with the two traditional styles of selecting information, searching and browsing, and these traditional methods may be combined with new paradigms of information searching that can take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for the continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
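The two-step resolution described above can be sketched as a pair of lookups; the call numbers, shelf identifiers, and hosts below are hypothetical.

```python
call_numbers = {                 # vocabulary-code call number -> virtual shelf id
    "MeSH:C14.280": "shelf:cardiology",
    "MeSH:C04.557": "shelf:oncology",
}
location_directory = {           # virtual shelf id -> current network location
    "shelf:cardiology": "http://host-a.example.org/cardio/",
    "shelf:oncology": "http://host-b.example.org/onco/",
}

def resolve(call_number: str) -> str:
    """Two-step, location-independent resolution of an information source."""
    shelf = call_numbers[call_number]        # mapping 1: call number -> shelf
    return location_directory[shelf]         # mapping 2: shelf -> network location

# Relocating a shelf to a new host means updating only location_directory;
# call numbers, and anything that cites them, stay stable.
```

The design point is the indirection itself: citations bind to subject-classified call numbers, never to network addresses.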
Wan, Shixiang; Duan, Yucong; Zou, Quan
2017-09-01
Predicting the subcellular localization of proteins is an important and challenging problem. Traditional experimental approaches are often expensive and time-consuming. Consequently, a growing number of research efforts employ a series of machine learning approaches to predict the subcellular location of proteins. There are two main challenges among the state-of-the-art prediction methods. First, most of the existing techniques are designed to deal with multi-class rather than multi-label classification, which ignores connections between multiple labels. In reality, multiple locations of particular proteins imply that there are vital and unique biological significances that deserve special focus and cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification problems are necessary, but never employed. For solving these two issues, we have developed an ensemble multi-label classifier called HPSLPred, which can be applied for multi-label classification with an imbalanced protein source. For convenience, a user-friendly webserver has been established at http://server.malab.cn/HPSLPred. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L
2015-01-01
A seven element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery
NASA Astrophysics Data System (ADS)
Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.
2009-05-01
In this paper, we propose spatio-spectral processing techniques for detecting dust storms and automatically finding their transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band math analysis have produced promising results but have drawbacks in producing consistent results when low signal-to-noise ratio (SNR) images are used. Moreover, in seeking to automate dust storm detection, the presence of clouds in the vicinity of the dust storm creates a challenge in distinguishing these two types of image texture. This paper not only addresses the detection of the dust storm in the imagery, it also attempts to find the transport direction and the locations of the sources of the dust storm. We propose a spatio-spectral processing approach with two components, visualization and automation, both based on digital image processing techniques including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources. The automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
Distribution functions of air-scattered gamma rays above isotropic plane sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael, J A; Lamonds, H A
1967-06-01
Using the moments method of Spencer and Fano and a reconstruction technique suggested by Berger, the authors have calculated energy and angular distribution functions for air-scattered gamma rays emitted from infinite-plane isotropic monoenergetic sources as functions of source energy, radiation incidence angle at the detector, and detector altitude. Incremental and total buildup factors have been calculated for both number and exposure. The results are presented in tabular form for a detector located at altitudes of 3, 50, 100, 200, 300, 400, 500, and 1000 feet above source planes of 15 discrete energies spanning the range of 0.1 to 3.0 MeV. Calculational techniques, including results of sensitivity studies, are discussed and plots of typical results are presented.
Modeling and analysis of CSAMT field source effect and its characteristics
NASA Astrophysics Data System (ADS)
Da, Lei; Xiaoping, Wu; Qingyun, Di; Gang, Wang; Xiangrong, Lv; Ruo, Wang; Jun, Yang; Mingxin, Yue
2016-02-01
Controlled-source audio-frequency magnetotellurics (CSAMT) has been a highly successful geophysical tool used in a variety of geological exploration studies for many years. However, because of the artificial source used in the CSAMT technique, two important factors must be considered during interpretation: non-plane-wave (geometric) effects and source overprint effects. In this paper we simulate source overprint effects and analyze how they influence CSAMT applications. Two-dimensional modeling of several typical models was carried out using an adaptive unstructured finite element method. We summarize the characteristics of the source overprint effect and analyze its influence on data acquired over several mining areas. The results show that the occurrence and strength of the source overprint effect depend on the location of the source dipole relative to the receiver and the subsurface geology. To avoid source overprint effects, three principles are suggested for determining the best location for the grounded dipole source in the field.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Rani, Raj
2015-10-01
The study addresses the identification of multiple point sources emitting the same tracer from a limited set of merged concentration measurements. Identification here refers to estimating the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in an adjoint modelling framework using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, to within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are estimated to within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials compared with the three- and four-release trials.
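Because the release strengths enter the measurement model linearly, the initialization-free least-squares search described above can be sketched as a grid scan over candidate location pairs with a linear least-squares solve for the strengths at each pair. The one-dimensional "dispersion" kernel below is an assumed toy, not the paper's Gaussian model.

```python
import numpy as np

def kernel(xs, xr):
    """Toy source-receptor coupling: response at receptor xr to a unit
    source at xs (decays with separation)."""
    return np.exp(-np.abs(xr - xs))

receptors = np.linspace(0.0, 10.0, 12)
true_loc = np.array([2.0, 7.0])                  # hypothetical release sites
true_q = np.array([3.0, 1.5])                    # hypothetical release rates
y = kernel(true_loc[None, :], receptors[:, None]) @ true_q  # noise-free data

grid = np.linspace(0.0, 10.0, 101)
best = None
for i, x1 in enumerate(grid):
    for x2 in grid[i + 1:]:
        A = kernel(np.array([x1, x2])[None, :], receptors[:, None])
        q, *_ = np.linalg.lstsq(A, y, rcond=None)   # strengths are linear
        r = np.linalg.norm(A @ q - y)               # data-misfit residual
        if q.min() >= 0 and (best is None or r < best[0]):
            best = (r, x1, x2, q)
# best attains near-zero residual at locations (2.0, 7.0), strengths (3.0, 1.5)
```

Consistent with the abstract, retrieval is exact (up to round-off) when the data are noise-free and the forward model is exact.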
An autonomous structural health monitoring solution
NASA Astrophysics Data System (ADS)
Featherston, Carol A.; Holford, Karen M.; Pullin, Rhys; Lees, Jonathan; Eaton, Mark; Pearson, Matthew
2013-05-01
Combining advanced sensor technologies with optimised data acquisition and diagnostic and prognostic capability, structural health monitoring (SHM) systems provide real-time assessment of the integrity of bridges, buildings, aircraft, wind turbines, oil pipelines and ships, leading to improved safety and reliability and reduced inspection and maintenance costs. The implementation of power harvesting, using energy scavenged from ambient sources such as thermal gradients and sources of vibration in conjunction with wireless transmission, enables truly autonomous systems, reducing the need for batteries and associated maintenance in often inaccessible locations, alongside bulky and expensive wiring looms. The design and implementation of such a system however presents numerous challenges. A suitable energy source, or multiple sources, capable of meeting the power requirements of the system over the entire monitoring period, in a location close to the sensor, must be identified. Efficient power management techniques must be used to condition the power and deliver it, as required, to enable appropriate measurements to be taken. Energy storage may be necessary, to match a continuously changing supply and demand for a range of different monitoring states including sleep, record and transmit. An appropriate monitoring technique, capable of detecting, locating and characterising damage and delivering reliable information, whilst minimising power consumption, must be selected. Finally a wireless protocol capable of transmitting the levels of information generated at the rate needed in the required operating environment must be chosen. This paper considers solutions to some of these challenges, and in particular examines SHM in the context of the aircraft environment.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
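A minimal 1-D sketch of the hybrid idea: a derivative-free random global search with an early stop produces a pseudo-optimum, which seeds a stencil-based local search in the spirit of implicit filtering. The objective, parameters, and stopping rules below are invented stand-ins for the paper's Poisson likelihood and production-grade optimizers.

```python
import random

def f(x):
    # Nonsmooth (kinked) 1-D test objective standing in for the
    # piecewise-differentiable negative log-likelihood; minimum at x = 3
    return abs(x - 3.0) + 0.2 * abs(x + 1.0) + 0.05 * (x - 3.0) ** 2

def global_stage(n_samples=200, lo=-10.0, hi=10.0, seed=1):
    # Cheap derivative-free global search with an early stop:
    # we only need a pseudo-optimum to seed the local stage.
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(n_samples)), key=f)

def implicit_filtering(x, h=1.0, shrink=0.5, tol=1e-6):
    # Stencil-based local search in the spirit of implicit filtering:
    # move to the best stencil point; shrink the stencil when stuck.
    while h > tol:
        best = min([x - h, x, x + h], key=f)
        if best == x:
            h *= shrink      # stuck: refine the stencil
        else:
            x = best         # progress at the current scale
    return x

x0 = global_stage()          # pseudo-optimum from the global stage
x_star = implicit_filtering(x0)
```

The division of labour mirrors the paper's: the global stage only has to land in the right basin, after which the stencil search handles the kinks that defeat gradient-based local methods.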
Solution of the three-dimensional Helmholtz equation with nonlocal boundary conditions
NASA Technical Reports Server (NTRS)
Hodge, Steve L.; Zorumski, William E.; Watson, Willie R.
1995-01-01
The Helmholtz equation is solved within a three-dimensional rectangular duct with a nonlocal radiation boundary condition at the duct exit plane. This condition accurately models the acoustic admittance at an arbitrarily located computational boundary plane. A linear system of equations is constructed with second-order central differences for the Helmholtz operator and second-order backward differences for both local admittance conditions and the gradient term in the nonlocal radiation boundary condition. The resulting matrix equation is large, sparse, and non-Hermitian. The size and structure of the matrix make direct solution techniques impractical; as a result, a nonstationary iterative technique is used for its solution. The theory behind the nonstationary technique is reviewed, and numerical results are presented for radiation from both a point source and a planar acoustic source. The solutions with the nonlocal boundary conditions are invariant to the location of the computational boundary, and the same nonlocal conditions are valid for all solutions. The nonlocal conditions thus provide a means of minimizing the size of three-dimensional computational domains.
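A 1-D analogue of the discretization step may make this concrete: second-order central differences for the Helmholtz operator, assembled and solved directly. (The paper's 3-D system is far too large for a dense direct solve, which is exactly why it needs the nonstationary iterative solver.) The wavenumber, grid size, and Dirichlet data below are illustrative choices.

```python
import numpy as np

# Second-order central differences for u'' + k^2 u = 0 on [0, 1],
# with Dirichlet data chosen so the exact solution is u(x) = sin(kx).
k = 3.0
n = 200                       # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Tridiagonal operator row i: (u[i-1] - 2 u[i] + u[i+1]) / h^2 + k^2 u[i]
main = (-2.0 / h**2 + k**2) * np.ones(n)
off = (1.0 / h**2) * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Known boundary values move to the right-hand side
b = np.zeros(n)
b[0] -= np.sin(0.0) / h**2    # u(0) = 0
b[-1] -= np.sin(k) / h**2     # u(1) = sin(k)

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(k * x)))   # O(h^2) truncation error
```

In 3-D the same stencil produces a sparse matrix with hundreds of thousands of unknowns, and the nonlocal exit-plane condition couples an entire boundary plane, which destroys the banded structure a direct solver would exploit.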
Absolute reactivity calibration of accelerator-driven systems after RACE-T experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jammes, C. C.; Imel, G. R.; Geslot, B.
2006-07-01
The RACE-T experiments held in November 2005 at the ENEA-Casaccia research center near Rome allowed us to improve our knowledge of experimental techniques for absolute reactivity calibration at either the startup or shutdown phases of accelerator-driven systems. Various experimental techniques for assessing a subcritical level were inter-compared across three different subcritical configurations, SC0, SC2 and SC3, of about -0.5, -3 and -6 dollars, respectively. The area-ratio method, based on the use of a pulsed neutron source, proved the best performing. When the reactivity estimate is expressed in dollar units, the uncertainties obtained with the area-ratio method were less than 1% for every subcritical configuration. The sensitivity to measurement location was slightly more than 1% and always less than 4%. Finally, it is noteworthy that the source jerk technique, using a transient caused by the pulsed-neutron-source shutdown, provides results in good agreement with those obtained from the area-ratio technique. (authors)
How Different EEG References Influence Sensor Level Functional Connectivity Graphs
Huang, Yunzhi; Zhang, Junpeng; Cui, Yuan; Yang, Gang; He, Ling; Liu, Qi; Yin, Guangfu
2017-01-01
Highlights: Hamming distance is applied to distinguish differences between functional connectivity networks; the orientations of sources are shown to significantly influence the scalp functional connectivity graph (FCG) obtained with different references; and REST, the reference electrode standardization technique, is shown to have an overall stable and excellent performance in variable situations. The choice of an electroencephalograph (EEG) reference is a practical issue for the study of brain functional connectivity. To study how the EEG reference influences functional connectivity estimation (FCE), this study compares the differences in FCE resulting from different references such as REST (the reference electrode standardization technique), average reference (AR), linked mastoids (LM), and left mastoid reference (LR). Simulations involve two parts. One is based on 300 dipolar pairs located on the superficial cortex with a radial source direction. The other is based on 20 dipolar pairs; in each pair, the dipoles have various orientation combinations. The relative error (RE) and Hamming distance (HD) between the functional connectivity matrices of ideal recordings and those of recordings obtained with different references are the metrics used to compare the differences in the scalp functional connectivity graph (FCG) derived from the two kinds of recordings. Lower RE and HD values imply more similarity between the two FCGs. Using the ideal recording (IR) as a standard, the results show that AR, LM and LR perform well only in specific conditions: AR performs stably when there is no upward component in the sources' orientations; LR achieves desirable results when the source locations are away from the left ear; and LM shows an indistinct difference from IR when the distribution of source locations is symmetric along the line linking the two ears.
However, REST not only achieves excellent performance for superficial and radial dipolar sources, but also achieves stable and robust performance with variable source locations and orientations. Benefiting from this stability and robustness relative to the other reference methods, REST might best recover the real FCG of EEG. Thus, a REST-based FCG may be a good candidate for comparing EEG FCGs based on different references from different labs. PMID:28725175
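The Hamming distance used as a metric above can be sketched simply: binarize two connectivity matrices at a threshold and count the edges on which the resulting graphs disagree. The threshold and the toy 4-channel matrices below are invented for illustration.

```python
import numpy as np

def fcg_hamming(C1, C2, thresh=0.5):
    """Hamming distance between two functional connectivity graphs:
    binarize each connectivity matrix at `thresh` and count the
    edges on which the two graphs disagree (upper triangle only,
    since connectivity matrices are symmetric)."""
    A1 = np.asarray(C1) > thresh
    A2 = np.asarray(C2) > thresh
    iu = np.triu_indices_from(A1, k=1)
    return int(np.sum(A1[iu] != A2[iu]))

# Toy 4-channel example: an "ideal" FCG vs. one with a single flipped edge
ideal = np.array([[1.0, 0.8, 0.2, 0.1],
                  [0.8, 1.0, 0.6, 0.3],
                  [0.2, 0.6, 1.0, 0.7],
                  [0.1, 0.3, 0.7, 1.0]])
other = ideal.copy()
other[0, 2] = other[2, 0] = 0.9   # spurious edge, e.g. a reference artifact
```

Here `fcg_hamming(ideal, other)` counts exactly the one disagreeing edge, whereas the relative error metric would instead weight how far every entry of the matrix has moved.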
NASA Astrophysics Data System (ADS)
Sato, Mitsuteru; Mihara, Masahiro; Ushio, Tomoo; Morimoto, Takeshi; Kikuchi, Hiroshi; Adachi, Toru; Suzuki, Makoto; Yamazaki, Atsushi; Takahashi, Yukihiro
2015-04-01
JEM-GLIMS has been carrying out comprehensive nadir observations of lightning and TLEs using optical instruments and electromagnetic wave receivers since November 2012. Between November 20, 2012 and November 30, 2014, JEM-GLIMS succeeded in detecting 5,048 lightning events. A total of 567 of these 5,048 lightning events were TLEs, mostly elves. To identify sprite occurrences in the transient optical flash data, it is necessary to perform the following data analysis: (1) a subtraction of the appropriately scaled wideband camera data from the narrowband camera data; (2) a calculation of the intensity ratio between different spectrophotometer channels; and (3) an estimation of the polarity and CMC of the parent CG discharges using ground-based ELF measurement data. From a synthetic comparison of these results, it is confirmed that JEM-GLIMS succeeded in detecting sprite events. The VHF receiver (VITF) onboard JEM-GLIMS uses two patch-type antennas separated by a 1.6-m interval and can detect VHF pulses emitted by lightning discharges in the 70-100 MHz frequency range. Using both an interferometric technique and a group delay technique, we can estimate the source locations of VHF pulses excited by lightning discharges. In the event detected at 06:41:15.68565 UT on June 12, 2014 over central North America, the sprite was displaced horizontally by 20 km from the peak location of the parent lightning emission. In this event, a total of 180 VHF pulses were simultaneously detected by VITF. From detailed analysis of these VHF pulse data, it is found that the majority of the source locations were near the area of the dim lightning emission, which may imply that the VHF pulses were associated with the in-cloud lightning current.
At the presentation, we will show a detailed comparison between the spatio-temporal characteristics of the sprite emission and the source locations of VHF pulses excited by the parent lightning discharges of the sprites.
Development of lightweight structural health monitoring systems for aerospace applications
NASA Astrophysics Data System (ADS)
Pearson, Matthew
This thesis investigates the development of structural health monitoring (SHM) systems for aerospace applications. The work focuses on each aspect of an SHM system, covering novel transducer technologies and damage detection techniques to detect and locate damage in metallic and composite structures. Secondly, the potential of energy harvesting and power management methodologies to provide a stable power source is assessed. Finally, the work culminates in the realisation of smart SHM structures. 1. Transducer Technology: A thorough experimental study of low-profile, low-weight novel transducers not normally used for acoustic emission (AE) and acousto-ultrasonic (AU) damage detection was conducted. This included assessment of their performance when exposed to aircraft environments and the feasibility of embedding these transducers in composite specimens in order to realise smart structures. 2. Damage Detection: An extensive experimental programme of damage detection utilising AE and AU was conducted in both composite and metallic structures. These techniques were used to assess different damage mechanisms within these materials. The same transducers were used for novel AE location techniques coupled with AU similarity assessment to successfully detect and locate damage in a variety of structures. 3. Energy Harvesting and Power Management: Experimental investigations and numerical simulations were undertaken to assess the power generation levels of piezoelectric and thermoelectric generators for the typical vibrations and temperature differentials which exist in the aerospace environment. Furthermore, a power management system was assessed to demonstrate its ability to take the varying input power and condition it into a stable power source for a system. 4. 
Smart Structures: The research conducted is brought together in a smart carbon fibre wing showcasing the novel embedded transducers for AE and AU damage detection and location, as well as vibration energy harvesting. A study of impact damage detection using these techniques showed the successful detection and location of damage. The feasibility of using the embedded transducers for power generation was also assessed.
A graphic technique for identifying superior seed sources for central hardwoods
Fan H. Kung; George Rink
1993-01-01
To maximize forest production, foresters need to plant the best genotypes provided by forest geneticists. Where should the forest geneticist search for the best seed sources? How far south, or north, can one go to find them? The answer may depend on the species and the location of the test plantation. For example, when black walnut trees were tested in Illinois, Indiana,...
Fire detection behind a wall by using microwave techniques
NASA Astrophysics Data System (ADS)
Alkurt, Fatih Özkan; Baǧmancı, Mehmet; Karaaslan, Muharrem; Bakır, Mehmet; Altıntaş, Olcay; Karadaǧ, Faruk; Akgöl, Oǧuzhan; Ünal, Emin
2018-02-01
In this work, the detection of a fire location behind a wall using microwave techniques is illustrated. According to Planck's law, a blackbody emits electromagnetic radiation in the microwave region of the electromagnetic spectrum. These emitted waves penetrate all materials except metals. The radiated waves can be detected using directional, high-gain antennas. The proposed design consists of a simple microstrip patch antenna and a 2×2 microstrip patch antenna array. FIT-based simulation results show that the 2×2 array antenna can absorb the power emitted from a fire source located behind a wall. This contribution can be inspirational for further work.
Study of optical techniques for the Ames unitary wind tunnels. Part 2: Light sheet and vapor screen
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
Light sheet and vapor screen methods have been studied, with particular emphasis on those systems that have been used in large transonic and supersonic wind tunnels. The various fluids and solids used as tracers or light scatterers and the methods for tracer generation have been studied. Light sources from high intensity lamps and various lasers have been surveyed. Light sheet generation and projection methods were considered. Detectors and the location of detectors were briefly studied. A vapor screen system and a technique for local injection of tracers for the NASA Ames 9- by 7-foot Supersonic Wind Tunnel were proposed.
Lepper, Paul A; D'Spain, Gerald L
2007-08-01
The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing DNAPL sources in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search, and determines optimal sampling locations selected according to the reduction in the overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, some specific modifications have been made as follows. Random hydraulic conductivity (K) fields are generated after fitting the measured K data to a variogram model. The potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and the wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is estimated, which will later be optimized by the simplex method or a genetic algorithm. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. 
Acknowledgement: Funding support from the National Natural Science Foundation of China (Nos. 41030746, 40872155) and the DuPont Company is gratefully acknowledged.
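The Kalman filter plume update at the heart of the search loop above can be sketched in a few lines: the prior plume estimate (e.g. a Monte Carlo ensemble mean with its covariance) is blended with each new sample. The state here is the concentration at a handful of grid nodes, and all numbers are invented for illustration.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman filter measurement update: blend the prior plume
    estimate x (covariance P) with a new sample z observed through H."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x + K @ (z - H @ x)         # corrected concentrations
    P_post = (np.eye(len(x)) - K @ H) @ P  # reduced uncertainty
    return x_post, P_post

# Prior mean concentration at 3 grid nodes (e.g. an ensemble mean)
x = np.array([2.0, 5.0, 1.0])
P = np.diag([1.0, 4.0, 1.0])             # node 1 is the most uncertain

H = np.array([[0.0, 1.0, 0.0]])          # take one sample, at node 1
R = np.array([[0.25]])                   # measurement noise variance
z = np.array([7.0])                      # measured concentration

x_post, P_post = kalman_update(x, P, z, H, R)
```

The posterior variance at the sampled node drops, which is exactly the "reduction in overall uncertainty" criterion the search strategy uses to pick the next sampling location.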
NASA Astrophysics Data System (ADS)
Khomenko, Anton; Cloud, Gary Lee; Haq, Mahmoodul
2015-12-01
Multilayered transparent composites having laminates with polymer interlayers and backing sheets are commonly used in a wide range of applications where visibility, transparency, impact resistance, and safety are essential. Manufacturing flaws or damage during operation can seriously compromise both safety and performance. Most fabrication defects are not discernible until after the entire multilayered transparent composite assembly has been completed, and in-the-field inspection for damage is a problem not yet solved. A robust and reliable nondestructive evaluation (NDE) technique is needed to evaluate structural integrity and identify defects that result from manufacturing issues as well as in-service damage arising from extreme environmental conditions in addition to normal mechanical and thermal loads. Current optical techniques have limited applicability for NDE of such structures. This work presents a technique that employs a modified interferometer utilizing a laser diode or femtosecond fiber laser source to acquire in situ defect depth location inside a thin or thick multilayered transparent composite, respectively. The technique successfully located various defects inside examined composites. The results show great potential of the technique for defect detection, location, and identification in multilayered transparent composites.
Orr, Christopher Henry; Luff, Craig Janson; Dockray, Thomas; Macarthur, Duncan Whittemore
2001-01-01
The apparatus and method provide a technique for improving the detection of alpha- and/or beta-emitting sources on items or in locations using indirect means. The emissions generate ions in a medium surrounding the item or location, and the medium is then moved to a detection location where the ions are discharged to give a measure of the emission levels. To increase the level of ions generated and render the system particularly applicable to narrow pipes and other forms of conduit, the medium pressure is increased above atmospheric pressure.
NASA Astrophysics Data System (ADS)
Kaufman, Lloyd; Williamson, Samuel J.; Costaribeiro, P.
1988-02-01
Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.
Waveform Based Acoustic Emission Detection and Location of Matrix Cracking in Composites
NASA Technical Reports Server (NTRS)
Prosser, W. H.
1995-01-01
The operation of damage mechanisms in a material or structure under load produces transient acoustic waves. These acoustic waves are known as acoustic emission (AE). In composites they can be caused by a variety of sources including matrix cracking, fiber breakage, and delamination. AE signals can be detected and analyzed to determine the location of the acoustic source by triangulation. Attempts are also made to analyze the signals to determine the type and severity of the damage mechanism. AE monitoring has been widely used for both laboratory studies of materials, and for testing the integrity of structures in the field. In this work, an advanced, waveform based AE system was used in a study of transverse matrix cracking in cross-ply graphite/epoxy laminates. This AE system featured broad band, high fidelity sensors, and high capture rate digital acquisition and storage of acoustic signals. In addition, analysis techniques based on plate wave propagation models were employed. These features provided superior source location and noise rejection capabilities.
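The triangulation step mentioned above can be sketched as a grid search over candidate source positions that matches the measured arrival-time differences under the constant-wave-speed, straight-path assumption (the very assumption that plate-wave analysis and techniques such as delta T mapping are designed to relax in complex structures). The plate geometry and wave speed below are invented.

```python
import math

def locate(sensors, arrivals, speed, grid_step=1.0, extent=100.0):
    """Grid search for an AE source: pick the grid point whose predicted
    arrival-time differences best match the measured ones, assuming a
    constant wave speed and straight propagation paths."""
    t0 = arrivals[0]
    dt_meas = [t - t0 for t in arrivals]
    best, best_err = None, float("inf")
    steps = int(extent / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * grid_step, j * grid_step
            d = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
            dt_pred = [(di - d[0]) / speed for di in d]
            err = sum((a - b) ** 2 for a, b in zip(dt_meas, dt_pred))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Synthetic event at (40, 60) on a 100 x 100 plate; speed in mm/us (assumed)
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
src, speed = (40.0, 60.0), 5.0
arrivals = [math.hypot(src[0] - sx, src[1] - sy) / speed for sx, sy in sensors]
```

Using arrival-time differences rather than absolute times removes the unknown emission time, which is never observed in practice.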
Autonomous robotic platforms for locating radio sources buried under rubble
NASA Astrophysics Data System (ADS)
Tasu, A. S.; Anchidin, L.; Tamas, R.; Paun, M.; Danisor, A.; Petrescu, T.
2016-12-01
This paper deals with the use of autonomous robotic platforms able to locate radio signal sources, such as mobile phones, buried under buildings collapsed as a result of earthquakes, natural disasters, terrorism, war, etc. The technique relies on combining position data from a propagation model implemented on the platform with data acquired by the robotic platforms at the disaster site, which allows us to calculate the approximate position of radio sources buried under the rubble. Based on these measurements, a radio map of the disaster site is made, which is very useful for locating victims and for guiding rubble-lifting machinery: by assuming that there is a victim next to a mobile device detected by the robotic platform, and by knowing its approximate position, the lifting machinery does not risk further injuring the victims. Moreover, knowing the positions of the victims decreases the reaction time and clearly increases the chances of survival for victims buried under the rubble.
Spatio-temporal distribution of energy radiation from low frequency tremor
NASA Astrophysics Data System (ADS)
Maeda, T.; Obara, K.
2007-12-01
Recent fine-scale hypocenter locations of low-frequency tremors (LFTs) estimated by cross-correlation techniques (Shelly et al. 2006; Maeda et al. 2006) and the new finding of very low frequency earthquakes (Ito et al. 2007) suggest that these slow events occur at the plate boundary in association with slow slip events (Obara and Hirose, 2006). However, the number of tremors detected by the above techniques is limited, since continuous tremor waveforms are too complicated. An envelope correlation method (ECM) (Obara, 2002) enables us to locate epicenters of LFTs without arrival-time picks; however, ECM fails to locate LFTs precisely, especially during the most active stage of tremor activity, because of the low correlation of envelope amplitudes. To reveal the total energy release of LFTs, we propose here a new method for estimating the location of LFTs together with the energy radiated from the tremor source by using envelope amplitudes. The tremor amplitude observed at NIED Hi-net stations in western Shikoku simply decays in proportion to the reciprocal of the source-receiver distance after correction for site-amplification factors, even though the phases of the tremor are very complicated. We therefore model the observed mean-square envelope amplitude by time-dependent energy radiation with a geometrical spreading factor. The model has no origin time for the tremor, since we assume that the source radiates energy continuously. Travel-time differences between stations estimated by the ECM technique are also incorporated in our location algorithm together with the amplitude information. Three-component, 1-hour Hi-net continuous velocity waveforms with a pass band of 2-10 Hz are used for the inversion, after correction for the site amplification factors at each station estimated by the coda normalization method (Takahashi et al. 2005) applied to normal earthquakes in the region. 
The source location and energy are estimated by applying a least-squares inversion to 1-min windows iteratively. As a first application of our method, we estimated the spatio-temporal distribution of energy radiation for the May 2006 episodic tremor and slip event that occurred in the western Shikoku, Japan, region. Tremor locations and their radiated energy are estimated every minute. We counted the number of located LFTs and summed their total energy in each 0.05-degree grid cell for each day to map the spatio-temporal distribution of the energy release of tremors. The resulting spatial distribution of radiated energy is concentrated in a specific region. Additionally, we see daily changes in the released energy, in both location and amount, which correspond to the migration of tremor activity. The spatio-temporal distribution of the energy radiation of tremors is in good agreement with the spatio-temporal slip distribution of the slow slip event estimated from Hi-net tiltmeter records (Hirose et al. 2007). This suggests that small continuous tremors occur in association with the rupture process of slow slip.
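The core of the amplitude-based location idea, geometrical spreading of the mean-square envelope amplitude, can be sketched as a grid search: at each candidate node, each station's observed power implies an energy estimate, and the node where those estimates agree best wins. The 1/r² power decay, station geometry, and units below are simplified stand-ins for the paper's model (no attenuation, site terms assumed already removed, no travel-time term).

```python
def locate_tremor(stations, power, grid_step=0.5, extent=20.0):
    """Grid search for a tremor source from mean-square envelope
    amplitudes, assuming geometrical spreading only: power_i = E / r_i^2.
    At each node the best-fitting energy is the mean of power_i * r_i^2;
    the node minimizing the spread of those estimates wins."""
    best, best_E, best_spread = None, None, float("inf")
    n = int(extent / grid_step) + 1
    for i in range(n):
        for j in range(n):
            x, y = i * grid_step, j * grid_step
            est, ok = [], True
            for (sx, sy), p in zip(stations, power):
                r2 = (x - sx) ** 2 + (y - sy) ** 2
                if r2 == 0:            # skip nodes sitting on a station
                    ok = False
                    break
                est.append(p * r2)     # energy implied by this station
            if not ok:
                continue
            E = sum(est) / len(est)
            spread = sum((e - E) ** 2 for e in est)
            if spread < best_spread:
                best, best_E, best_spread = (x, y), E, spread
    return best, best_E

# Synthetic tremor at (7.0, 12.5) km radiating E = 4.0 (arbitrary units)
stations = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0), (10.0, 5.0)]
src, E_true = (7.0, 12.5), 4.0
power = [E_true / ((src[0] - sx) ** 2 + (src[1] - sy) ** 2)
         for sx, sy in stations]

loc, E_est = locate_tremor(stations, power)
```

Because only relative amplitudes constrain the location while their absolute level sets the energy, location and radiated energy come out of the same inversion, as in the paper.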
NASA Technical Reports Server (NTRS)
Turso, James; Lawrence, Charles; Litt, Jonathan
2004-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
NASA Technical Reports Server (NTRS)
Turso, James A.; Lawrence, Charles; Litt, Jonathan S.
2007-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite-element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
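A minimal sketch of the wavelet-based feature idea: one level of Haar detail coefficients (scaled differences of adjacent samples) flags an abrupt change in an accelerometer-like signal, because the transient produces an outlier coefficient while the smooth baseline produces only small ones. The signal, the transient, and the single-scale Haar choice are invented; the paper's technique is considerably more elaborate.

```python
import math

def haar_detail(signal):
    """One level of the Haar wavelet transform: detail coefficients
    are scaled differences of adjacent sample pairs. A sharp transient
    shows up as one large-magnitude detail coefficient."""
    return [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
            for i in range(len(signal) // 2)]

# Smooth "accelerometer" baseline with an abrupt event at sample 63
n = 128
sig = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
for t in range(63, n):
    sig[t] += 2.0                 # step change from the simulated event

d = haar_detail(sig)
peak = max(range(len(d)), key=lambda i: abs(d[i]))
# Pair 31 covers samples 62 and 63, i.e. it straddles the step,
# so the event's time location is recovered from the peak index.
```

The slowly varying sine contributes detail magnitudes of roughly 0.1 here, against more than 1.3 at the straddling pair, which is what makes the feature usable under noise.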
Cooperative Opportunistic Pressure Based Routing for Underwater Wireless Sensor Networks.
Javaid, Nadeem; Muhammad; Sher, Arshad; Abdul, Wadood; Niaz, Iftikhar Azim; Almogren, Ahmad; Alamri, Atif
2017-03-19
In this paper, three opportunistic pressure-based routing techniques for underwater wireless sensor networks (UWSNs) are proposed: the cooperative opportunistic pressure-based routing protocol (Co-Hydrocast), the improved Hydrocast (improved-Hydrocast), and the cooperative improved Hydrocast (Co-improved Hydrocast). In order to minimize lengthy routing paths between the source and the destination and to avoid void holes in sparse networks, sensor nodes are deployed at different strategic locations. The deployment of sensor nodes at strategic locations assures maximum monitoring of the network field. To conserve energy and minimize the number of hops, a greedy algorithm is used to transmit data packets from the source to the destination. Moreover, opportunistic routing is also exploited to avoid void regions by making backward transmissions to find a reliable path towards the destination. A relay cooperation mechanism is used for reliable data packet delivery: when the signal-to-noise ratio (SNR) of the received signal is not within the predefined threshold, maximal ratio combining (MRC) is used as a diversity technique to improve the SNR of the received signals at the destination. Extensive simulations validate that our schemes perform better in terms of packet delivery ratio and energy consumption than the existing technique, Hydrocast.
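The MRC step can be sketched directly: with optimal (conjugate, noise-normalized) branch weighting of independent branches, the combined output SNR is the sum of the per-branch SNRs, which is why relay cooperation can lift a weak direct link over the reception threshold. The threshold and branch SNR values below are invented.

```python
def mrc_snr(branch_snrs):
    """Maximal ratio combining: with optimal branch weights, the output
    SNR of independent diversity branches is the sum of branch SNRs."""
    return sum(branch_snrs)

def selection_snr(branch_snrs):
    """Selection-combining baseline: use only the best single branch."""
    return max(branch_snrs)

# Relay cooperation example (linear SNRs, not dB): the direct link alone
# misses the threshold, but combining it with the relayed copy clears it.
threshold = 10.0                 # predefined reception threshold (assumed)
direct, relayed = 6.0, 7.5
combined = mrc_snr([direct, relayed])   # 13.5
```

Selection combining would still fail here (best branch 7.5 < 10), so the gain over picking a single path comes specifically from coherent combining.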
Rupture Propagation Imaging of Fluid Induced Events at the Basel EGS Project
NASA Astrophysics Data System (ADS)
Folesky, Jonas; Kummerow, Jörn; Shapiro, Serge A.
2014-05-01
The analysis of rupture properties using rupture propagation imaging techniques is a fast-developing field of research in global seismology. Usually, rupture fronts of large to megathrust earthquakes are the subject of such studies, e.g. the 2004 Sumatra-Andaman earthquake or the 2011 Tohoku, Japan earthquake. The back projection technique is the most prominent technique in this field: the seismograms recorded at an array or at a seismic network are back-shifted to a grid of possible source locations via a special stacking procedure. This can provide information on the energy release and energy distribution of the rupture, which can then be used to estimate event properties such as location, rupture direction, rupture speed or length. The procedure is fast and direct, and it relies only on a reasonable velocity model. It is thus a good way to rapidly estimate rupture properties, and it can be used to confirm independently obtained event information. We adopted the back projection technique and put it in a microseismic context, and demonstrated its use for multiple synthetic ruptures within a reservoir model of microseismic scale in earlier work. Our motivation is the occurrence of relatively large induced seismic events at a number of stimulated geothermal reservoirs and waste disposal sites, with magnitudes ML ≥ 3.4 and rupture lengths of several hundred meters. We use the configuration of the seismic network and the reservoir properties of the Basel geothermal site to build a synthetic model of a rupture by modeling the wave field of multiple spatio-temporally separated single sources using finite-difference modeling. The focus of this work is the application of the back projection technique and the demonstration of its feasibility for retrieving the rupture properties of real fluid-induced events. We take four microseismic events with magnitudes from ML 3.1 to 3.4 and reconstruct source parameters such as location, orientation and length. 
By comparison with our synthetic results, as well as with independent localization and source-mechanism studies in this area, we show that the obtained results are reasonable and that back projection imaging is not only possible for microseismic datasets of comparable quality but also provides important additional insight into the rupture process.
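The core of the back projection stacking procedure described above can be sketched as follows. This is a deliberately minimal illustration assuming a constant wave speed, a 1-D grid, and impulse-like arrivals; all function and variable names are illustrative, not taken from the study:

```python
import numpy as np

def back_project(traces, dt, dists, v):
    """Back-project array seismograms onto a grid of candidate source points.

    traces : (n_sta, n_samp) recorded waveforms
    dt     : sampling interval in seconds
    dists  : (n_grid, n_sta) grid-point-to-station distances
    v      : assumed constant wave speed
    Returns the stacked energy for each grid point; the maximum marks the
    most likely source location.
    """
    n_grid, n_sta = dists.shape
    n_samp = traces.shape[1]
    energy = np.zeros(n_grid)
    for g in range(n_grid):
        stack = np.zeros(n_samp)
        for s in range(n_sta):
            # travel-time shift in samples for this grid point/station pair
            shift = int(round(dists[g, s] / v / dt))
            # back-shift the trace so that energy radiated from grid point g
            # adds coherently in the stack
            stack[:n_samp - shift] += traces[s, shift:]
        energy[g] = np.sum(stack ** 2)
    return energy
```

In practice the grid is three-dimensional, the traces are filtered and often stacked with more robust schemes (e.g. N-th root stacking), and the travel times come from the reservoir velocity model rather than a single constant speed.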
Stephen, Julia M; Ranken, Doug M; Aine, Cheryl J; Weisend, Michael P; Shih, Jerry J
2005-12-01
Previous studies have shown that magnetoencephalography (MEG) can measure hippocampal activity, despite the hippocampus's cylindrical shape and deep location in the brain. The current study extended this work by examining the ability to differentiate the hippocampal subfields, parahippocampal cortex, and neocortical temporal sources using simulated interictal epileptic activity. A model of the hippocampus was generated on the MRIs of five subjects. CA1, CA3, and the dentate gyrus of the hippocampus were activated, as were the entorhinal cortex, presubiculum, and neocortical temporal cortex. In addition, pairs of sources were activated sequentially to emulate various hypotheses of mesial temporal lobe seizure generation. The simulated MEG activity was added to real background brain activity from the five subjects and modeled using a multidipole spatiotemporal modeling technique. The waveforms and source locations/orientations for hippocampal and parahippocampal sources were differentiable from neocortical temporal sources. In addition, hippocampal and parahippocampal sources were differentiated to varying degrees depending on the source. The sequential activation of hippocampal and parahippocampal sources was adequately modeled by a single source; however, these sources were not resolvable when they overlapped in time. These results suggest that MEG has the sensitivity to distinguish parahippocampal and hippocampal spike generators in mesial temporal lobe epilepsy.
Wave-equation migration velocity inversion using passive seismic sources
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which depends greatly on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that are often not available for the small-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial source-location or origin-time estimate nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems of all scales, from geologic core samples to global seismology.
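The correlation-based extended-imaging condition and its penalized residual can be sketched under strong simplifying assumptions: two 1-D time series stand in for the back-propagated P and S wavefields at a single subsurface point, and the annihilator penalty is taken as the lag itself, so that only energy away from zero lag contributes. Names and the penalty choice are illustrative, not the paper's implementation:

```python
import numpy as np

def extended_image_residual(wp, ws, max_lag):
    """Correlate back-propagated P (wp) and S (ws) wavefields over a range
    of time lags and penalize energy away from zero lag. If the velocity
    models are correct, the wavefields focus at zero lag and the residual
    vanishes; a nonzero residual drives the velocity update."""
    lags = np.arange(-max_lag, max_lag + 1)
    image = np.array([np.sum(wp * np.roll(ws, lag)) for lag in lags])
    penalty = lags.astype(float)  # annihilator: vanishes at zero lag
    return float(np.sum((penalty * image) ** 2))
```

With correct velocities the P and S energy aligns in time and the residual is zero; a velocity error shifts the correlation peak off zero lag and the residual grows, which is the quantity the adjoint-state scheme minimizes.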
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validated using a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
Evaluation of substitution monopole models for tire noise sound synthesis
NASA Astrophysics Data System (ADS)
Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.
2010-01-01
Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
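The airborne source quantification step, recovering the monopole volume velocities from measured transfer functions and operating pressures, is in essence a regularized linear inversion of p ≈ Hq. A minimal sketch using Tikhonov regularization, one common choice among the regularization techniques such models evaluate (the function name and the regularization parameter are illustrative):

```python
import numpy as np

def tikhonov_source_quantification(H, p, beta):
    """Estimate monopole volume velocities q from operating pressure
    measurements p and measured transfer functions H (p ~ H q), using
    Tikhonov regularization to stabilize an ill-conditioned inversion.

    Solves (H^H H + beta*I) q = H^H p.
    """
    n = H.shape[1]
    normal = H.conj().T @ H + beta * np.eye(n)
    return np.linalg.solve(normal, H.conj().T @ p)
```

The regularization parameter beta trades off fit against solution energy; in practice it is tuned (e.g. via the L-curve) because the transfer matrix between closely spaced monopoles and nearby indicator microphones is typically poorly conditioned.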
Abbott, M.; Einerson, J.; Schuster, P.; Susong, D.; Taylor, Howard E.; ,
2004-01-01
Snow sampling and analysis methods that produce accurate, ultra-low-level measurements of trace element and common ion concentrations in southeastern Idaho snow were developed. Snow samples were collected over two winters to assess trace element and common ion concentrations in air-pollutant fallout across southeastern Idaho. The source-area apportionment of fallout concentrations measured at downwind locations was investigated using pattern recognition and multivariate statistical techniques. Results show a high level of contribution from phosphate-processing facilities located outside Pocatello in the southern portion of the Eastern Snake River Plain, and no obvious source-area profiles other than at Pocatello.
NASA Technical Reports Server (NTRS)
Stuart, J. R.
1984-01-01
The evolution of NASA's planetary navigation techniques is traced, and radiometric and optical data types are described. Doppler navigation; the Deep Space Network; differenced two-way range techniques; differential very long baseline interferometry; and optical navigation are treated. The Doppler system enables a spacecraft in cruise at high absolute declination to be located within a total angular uncertainty of 1/4 microrad. The two-station range measurement provides a 1 microrad backup at low declinations. Optical data locate the spacecraft relative to the target to an angular accuracy of 5 microrad. Earth-based radio navigation and its less accurate but target-relative counterpart, optical navigation, thus form complementary measurement sources, which provide a powerful sensory system to produce high-precision orbit estimates.
NASA Astrophysics Data System (ADS)
Mabit, Lionel; Gibbs, Max; Chen, Xu; Meusburger, Katrin; Toloza, Arsenio; Resch, Christian; Klik, Andreas; Eder, Alexander; Strauss, Peter; Alewell, Christine
2015-04-01
The overall impacts of climate change on agriculture are expected to be negative, threatening global food security. In the agricultural areas of the European Union, water erosion risk is expected to increase by about 80% by the year 2050. Reducing soil erosion and sedimentation-related environmental problems is therefore a key requirement for mitigating the impact of climate change. A new forensic stable isotope technique, using the compound-specific stable isotope (CSSI) signatures of inherent soil organic biomarkers, can discriminate and apportion the source-soil contributions from different land uses. Plant communities label the soil where they grow by exuding organic biomarkers. Although all plants produce the same biomarkers, the stable isotopic signature of those biomarkers differs between plant species. For agri-environmental investigation, the CSSI technique is based on measuring the carbon-13 (13C) natural abundance signatures of specific organic compounds, such as natural fatty acids (FAs), in the soil. By linking fingerprints of land use to the sediment in deposition zones, this approach has been shown to be a useful technique for determining the source of eroded soil and thereby identifying areas prone to soil degradation. The authors have tested this innovative stable isotopic approach in a small Austrian agricultural catchment located 60 km north of Vienna. A previous fallout-radionuclide (137Cs) investigation established a sedimentation rate of 4 mm/yr in the lowest part of the study site. To determine the origin of these sediments, the CSSI technique was then tested using representative samples from the different land uses of the catchment as source material. The 13C signatures of specific FAs (C22:0, behenic acid; C24:0, lignoceric acid) and the bulk 13C of the sediment mixture and potential landscape sources were analyzed with the mixing models IsoSource and CSSIAR v1.00.
Using both mixing models, preliminary results highlighted that about 50-55% of the sediment located in the deposition area originated from the main grassed waterway of the catchment.
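Mixing models such as IsoSource and CSSIAR apportion several sources simultaneously; the underlying idea can be illustrated with the closed-form two-source case for a single δ13C tracer. This is a deliberately minimal sketch, not the models actually used in the study:

```python
def two_source_mixing(delta_mix, delta_a, delta_b):
    """Proportion of source A in a sediment mixture from one delta-13C
    tracer value. Solves the linear mixing equation
        delta_mix = fA*delta_a + (1 - fA)*delta_b
    for fA, clipped to the physically meaningful range [0, 1]."""
    fa = (delta_mix - delta_b) / (delta_a - delta_b)
    return min(max(fa, 0.0), 1.0)
```

With more than two sources (or several FA tracers), the system is under- or over-determined, which is why iterative feasible-solution models like IsoSource are used instead of a direct solve.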
Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I
2009-06-01
A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer function approach the model is applicable to both isotropic and anisotropic materials. The model requires several inputs including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available, thus efficient measurement techniques are presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.
Study of acoustic emission during mechanical tests of large flight weight tank structure
NASA Technical Reports Server (NTRS)
Mccauley, B. O.; Nakamura, Y.; Veach, C. L.
1973-01-01
A PPO-insulated, flight-weight, subscale aluminum tank was monitored for acoustic emissions during a proof test and during 100 cycles of environmental testing simulating space flights. A combination of frequency filtering and appropriate spatial filtering to reduce background noise was found sufficient to detect the relatively small acoustic emission signals expected from subcritical crack growth in the structure. Several emission source locations were identified, including one where a flaw was detected by post-test X-ray inspection. For most source locations, however, post-test inspections did not detect flaws; this was partially attributed to the acoustic emission technique being more sensitive than any other currently available NDT method for detecting flaws. For these non-verifiable emission sources, a problem remains in correctly interpreting the observed emission signals.
Mascarenhas, Nicholas; Marleau, Peter; Brennan, James S.; Krenz, Kevin D.
2010-06-22
An instrument is described that directly images the fast fission neutrons from a special nuclear material source. Compared with non-imaging neutron detection techniques, it can improve the signal-to-background ratio by a factor given by the ratio of the angular resolution window to 4π. In addition to being a neutron imager, the instrument is also an excellent neutron spectrometer and can differentiate between different types of neutron sources (e.g., fission, alpha-n, cosmic ray, and D-D or D-T fusion). Moreover, the instrument is able to pinpoint the source location.
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
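A chi-squared TDOA source retrieval with a Monte Carlo error estimate can be sketched as below. It is simplified to 2-D, uses a grid search, and assumes the 50 ns rms timing error stated above; it is illustrative only, not the MSFC or New Mexico Tech algorithm itself:

```python
import numpy as np

C = 2.998e8  # VHF propagation speed (speed of light), m/s

def locate(times, stations, grid, sigma=50e-9):
    """Grid-search retrieval: for each candidate point the best-fitting
    origin time has a closed form (the mean arrival-time residual), and the
    candidate with the smallest chi-squared misfit is returned."""
    best, best_chi2 = None, np.inf
    for src in grid:
        tt = np.linalg.norm(stations - src, axis=1) / C
        t0 = np.mean(times - tt)  # optimal origin time for this candidate
        chi2 = np.sum(((times - t0 - tt) / sigma) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = src, chi2
    return best

def monte_carlo_spread(times, stations, grid, n=50, sigma=50e-9, seed=1):
    """Monte Carlo error estimate: perturb the arrival times with the
    assumed rms timing error and measure the scatter of retrievals."""
    rng = np.random.default_rng(seed)
    locs = np.array([locate(times + rng.normal(0.0, sigma, times.shape),
                            stations, grid) for _ in range(n)])
    return locs.std(axis=0)
```

The Curvature Matrix approach estimates the same uncertainty analytically from the second derivatives of chi-squared at the minimum, which is why the two methods can be compared directly.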
Automatic forest-fire measuring using ground stations and Unmanned Aerial Systems.
Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal
2011-01-01
This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006.
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
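For reference, the classic tracer release technique that the combined approach is benchmarked against reduces to a plume-integrated concentration ratio. A minimal sketch, assuming background-subtracted enhancements and a uniform transect spacing (which cancels in the ratio):

```python
import numpy as np

def tracer_ratio_estimate(c_target, c_tracer, q_tracer):
    """Classic tracer-release estimate: the unknown emission rate equals
    the known tracer release rate scaled by the ratio of plume-integrated,
    background-subtracted concentration enhancements measured along a
    transect across the plume."""
    return q_tracer * np.sum(c_target) / np.sum(c_tracer)
```

This ratio is unbiased only when the tracer and target sources are well collocated so both plumes disperse identically, which is exactly the assumption the statistical inversion approach is designed to relax.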
Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems
Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao
2016-01-01
In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational-invariance relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational-invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rotational-invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
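The final angulation positioning step, intersecting bearings measured at different base stations, can be sketched in 2-D as the intersection of two rays. This is a simplified illustration (the paper works with azimuth and elevation from three cooperating BSs); names are illustrative:

```python
import numpy as np

def angulate(p1, az1, p2, az2):
    """Locate a sensor at the intersection of two bearing rays.

    p1, p2   : 2-D base-station positions
    az1, az2 : ray angles measured from the x-axis, in radians
    Solves p1 + t1*d1 = p2 + t2*d2 for the ray parameters (t1, t2).
    """
    d1 = np.array([np.cos(az1), np.sin(az1)])
    d2 = np.array([np.cos(az2), np.sin(az2)])
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

With noisy bearings from three or more stations the rays no longer meet in a point, and the location is instead taken as a least-squares compromise among all pairwise intersections.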
Technique to determine location of radio sources from measurements taken on spinning spacecraft
NASA Technical Reports Server (NTRS)
Fainberg, J.
1979-01-01
The procedure developed to extract average source direction and average source size from spin-modulated radio astronomy data measured on the IMP-6 spacecraft is described. Because all measurements are used, rather than just finding maxima or minima in the data, the method is very sensitive, even in the presence of large amounts of noise. The technique is applicable to all experiments with directivity characteristics. It is suitable for onboard processing on satellites to reduce the data flow to Earth. The application to spin-modulated nonpolarized radio astronomy data is made and includes the effects of noise, background, and second source interference. The analysis was tested with computer simulated data and the results agree with analytic predictions. Applications of this method with IMP-6 radio data have led to: (1) determination of source positions of traveling solar radio bursts at large distances from the Sun; (2) mapping of magnetospheric radio emissions by radio triangulation; and (3) detection of low frequency radio emissions from Jupiter and Saturn.
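The idea of using all measurements, rather than only the modulation maxima or minima, can be sketched as a least-squares fit of a first-harmonic model to the spin-modulated amplitudes. This is a simplified, noise-free illustration with hypothetical names, not the IMP-6 processing itself:

```python
import numpy as np

def spin_fit(phases, amplitudes):
    """Least-squares fit of s(phi) = a0 + a1*cos(phi) + b1*sin(phi) to
    spin-modulated amplitudes. Every sample contributes to the fit, which
    keeps the estimate stable even in heavy noise. Returns the source
    azimuth atan2(b1, a1) and the relative modulation depth (a broad
    source shallows the modulation, so depth relates to source size)."""
    M = np.column_stack([np.ones_like(phases),
                         np.cos(phases), np.sin(phases)])
    a0, a1, b1 = np.linalg.lstsq(M, amplitudes, rcond=None)[0]
    return np.arctan2(b1, a1), np.hypot(a1, b1) / a0
```

A constant background adds only to a0 and leaves the recovered azimuth unchanged, which is one reason a full-waveform fit outperforms simple peak picking.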
Seismic envelope-based detection and location of ground-coupled airwaves from volcanoes in Alaska
Fee, David; Haney, Matt; Matoza, Robin S.; Szuberla, Curt A.L.; Lyons, John; Waythomas, Christopher F.
2016-01-01
Volcanic explosions and other infrasonic sources frequently produce acoustic waves that are recorded by seismometers. Here we explore multiple techniques to detect, locate, and characterize ground‐coupled airwaves (GCA) on volcano seismic networks in Alaska. GCA waveforms are typically incoherent between stations, thus we use envelope‐based techniques in our analyses. For distant sources and planar waves, we use f‐k beamforming to estimate back azimuth and trace velocity parameters. For spherical waves originating within the network, we use two related time difference of arrival (TDOA) methods to detect and localize the source. We investigate a modified envelope function to enhance the signal‐to‐noise ratio and emphasize both high energies and energy contrasts within a spectrogram. We apply these methods to recent eruptions from Cleveland, Veniaminof, and Pavlof Volcanoes, Alaska. Array processing of GCA from Cleveland Volcano on 4 May 2013 produces robust detection and wave characterization. Our modified envelopes substantially improve the short‐term average/long‐term average ratios, enhancing explosion detection. We detect GCA within both the Veniaminof and Pavlof networks from the 2007 and 2013–2014 activity, indicating repeated volcanic explosions. Event clustering and forward modeling suggests that high‐resolution localization is possible for GCA on typical volcano seismic networks. These results indicate that GCA can be used to help detect, locate, characterize, and monitor volcanic eruptions, particularly in difficult‐to‐monitor regions. We have implemented these GCA detection algorithms into our operational volcano‐monitoring algorithms at the Alaska Volcano Observatory.
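The envelope-based detection underlying these analyses can be sketched with a plain Hilbert envelope and a rectangular-window STA/LTA ratio. This is a minimal illustration; the paper's modified envelope function additionally emphasizes high energies and energy contrasts within a spectrogram:

```python
import numpy as np

def envelope(x):
    """Hilbert envelope via the analytic signal (FFT implementation)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def sta_lta(env, nsta, nlta):
    """Short-term / long-term average ratio of an envelope, with the LTA
    window immediately preceding the STA window; a detection is declared
    where the ratio exceeds a chosen threshold."""
    csum = np.concatenate(([0.0], np.cumsum(env)))
    ratio = np.zeros(len(env))
    for i in range(nsta + nlta, len(env) + 1):
        sta = (csum[i] - csum[i - nsta]) / nsta
        lta = (csum[i - nsta] - csum[i - nsta - nlta]) / nlta
        ratio[i - 1] = sta / lta
    return ratio
```

Because GCA waveforms are incoherent between stations, detectors like this operate on envelopes rather than raw waveforms, and the envelope shaping directly controls the achievable STA/LTA contrast.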
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. 
The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
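The bootstrap magnitude estimate described above can be sketched as follows. This is a minimal illustration in which each bootstrap magnitude is simply the median of a resampled set of per-observation magnitude estimates; the actual procedure resamples the intensity data and re-runs the three techniques through the decision tree:

```python
import numpy as np

def bootstrap_magnitude(obs_mags, n_boot=2000, seed=0):
    """Resample the per-observation magnitude estimates with replacement,
    compute a magnitude for each resampled set, and return the median as
    the preferred value together with the 16th-84th percentile range,
    which encloses 68% of the bootstrap magnitudes."""
    rng = np.random.default_rng(seed)
    obs_mags = np.asarray(obs_mags, float)
    boot = np.array([np.median(rng.choice(obs_mags, size=obs_mags.size))
                     for _ in range(n_boot)])
    lo, mid, hi = np.percentile(boot, [16.0, 50.0, 84.0])
    return mid, (lo, hi)
```

The same logic applies to the location estimate, except that the spatial density of bootstrap locations is contoured instead of taking percentiles of a scalar.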
Leisman, Gerald; Ashkenazi, Maureen
1979-01-01
Objective psychophysical techniques for investigating visual fields are described. The paper concerns methods for the collection and analysis of evoked potentials using a small laboratory computer and provides efficient methods for obtaining information about the conduction pathways of the visual system.
NASA Technical Reports Server (NTRS)
King, R. B.; Fordyce, J. S.; Antoine, A. C.; Leibecki, H. F.; Neustadter, H. E.; Sidik, S. M.; Burr, J. C.; Craig, G. T.; Cornett, C. L.
1974-01-01
Preliminary review of a study of trace elements and compound concentrations in the ambient suspended particulate matter in Cleveland, Ohio, measured from August 1971 through June 1973, as a function of source, monitoring location, and meteorological conditions. The study is aimed at the development of techniques for identifying specific pollution sources which could be integrated into a practical system readily usable by an enforcement agency.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
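The AIC-based model selection can be sketched directly from the least-squares misfits of the two inversions. The parameter counts below are illustrative placeholders, not the actual numbers of free parameters in the W-phase inversions:

```python
import numpy as np

def aic(rss, n_obs, k_params):
    """Akaike information criterion for a least-squares misfit:
    AIC = 2k + n*ln(RSS/n). Lower is better."""
    return 2.0 * k_params + n_obs * np.log(rss / n_obs)

def select_model(rss_single, rss_double, n_obs, k_single=10, k_double=20):
    """Prefer the double point source only when its misfit reduction
    justifies the extra free parameters (the lower AIC wins)."""
    if aic(rss_double, n_obs, k_double) < aic(rss_single, n_obs, k_single):
        return "double"
    return "single"
```

The penalty term 2k is what prevents the double-source model, which can always fit the waveforms at least as well, from being selected when the improvement is marginal.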
Wind profiler signal detection improvements
NASA Technical Reports Server (NTRS)
Hart, G. F.; Divis, Dale H.
1992-01-01
Research is described on potential improvements to the software used with the NASA 49.25 MHz wind profiler located at Kennedy Space Center. In particular, the analysis and results of a study are provided to (1) identify preferred mathematical techniques for detecting atmospheric signals that yield wind velocities but are obscured by natural and man-made sources, and (2) analyze one or more preferred techniques to demonstrate their capability to improve the detection of wind velocities.
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Jackson, K. E.; Kellas, S.; Smith, B. T.; McKeon, J.; Friedman, A.
1995-01-01
Transverse matrix cracking in cross-ply graphite/epoxy laminates was studied with advanced acoustic emission (AE) techniques. The primary goal of this research was to measure the load required to initiate the first transverse matrix crack in cross-ply laminates of different thicknesses. Other methods had previously been used for these measurements, including penetrant-enhanced radiography, optical microscopy, and audible acoustic microphone measurements. The former methods required that the mechanical test be paused for measurements at load intervals; this slowed the test procedure and did not provide the required resolution in load. With acoustic microphones, acoustic signals from cracks could not be clearly differentiated from other noise sources such as grip damage, specimen slippage, or test machine noise. A second goal of this work was to use the high-resolution source location accuracy of the advanced acoustic emission techniques to determine whether the crack initiation site was at the specimen edge or in the interior of the specimen. In this research, advanced AE techniques using broadband sensors, high-capture-rate digital waveform acquisition, and plate-wave-propagation-based analysis were applied to cross-ply composite coupons with different numbers of 0 and 90 degree plies. Noise signals, believed to be caused by grip damage or specimen slipping, were eliminated based on their plate wave characteristics. Such signals were always located outside the sensor gage length, in the gripped region of the specimen. Cracks were confirmed post-test by microscopic analysis of a polished specimen edge, backscatter ultrasonic scans, and in limited cases, by penetrant-enhanced radiography. For specimens with three or more 90 degree plies together, there was an exact one-to-one correlation between AE crack signals and observed cracks. The ultrasonic scans and some destructive sectioning analysis showed that the cracks extended across the full width of the specimen.
Furthermore, the locations of the cracks from the AE data were in excellent agreement with the locations measured with the microscope. The high resolution source location capability of this technique, combined with an array of sensors, was able to determine that the cracks initiated at the specimen edges, rather than in the interior. For specimens with only one or two 90 degree plies, the crack-like signals were significantly smaller in amplitude and there was not a 1-1 correlation to observed cracks. This was similar to previous results. In this case, however, ultrasonic and destructive sectioning analysis revealed that the cracks did not extend across the specimen. They initiated at the edge, but did not propagate any appreciable distance into the specimen. This explains the much smaller AE signal amplitudes and the difficulty in correlating these signals to actual cracks in this, as well as in the previous study.
Self characterization of a coded aperture array for neutron source imaging
NASA Astrophysics Data System (ADS)
Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.
2014-12-01
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.
New technique for the direct measurement of core noise from aircraft engines
NASA Technical Reports Server (NTRS)
Krejsa, E. A.
1981-01-01
A new technique is presented for directly measuring the core noise levels from gas turbine aircraft engines. The technique requires that fluctuating pressures be measured in the far-field and at two locations within the engine core. The cross-spectra of these measurements are used to determine the levels of the far-field noise that propagated from the engine core. The technique makes it possible to measure core noise levels even when other noise sources dominate. The technique was applied to signals measured from an AVCO Lycoming YF102 turbofan engine. Core noise levels as a function of frequency and radiation angle were measured and are presented over a range of power settings.
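The underlying cross-spectral idea can be illustrated with synthetic signals. Note that this is a simplified, single-internal-sensor "coherent output power" sketch; the actual technique uses fluctuating pressures at two in-core locations and their cross-spectra with the far field, which this hypothetical example does not reproduce:

```python
import numpy as np
from scipy import signal

# Hypothetical signals: a common "core" fluctuation measured internally,
# buried under stronger uncorrelated sources at the far-field microphone.
rng = np.random.default_rng(0)
fs, n = 4096, 1 << 16
core = rng.standard_normal(n)
internal = core + 0.5 * rng.standard_normal(n)        # in-duct sensor
farfield = 0.3 * core + 2.0 * rng.standard_normal(n)  # other sources dominate

f, coh = signal.coherence(internal, farfield, fs=fs, nperseg=1024)
_, p_far = signal.welch(farfield, fs=fs, nperseg=1024)

# Coherent output power: the part of the far-field spectrum that is
# correlated with the internal measurement, i.e. the core contribution.
p_core = coh * p_far
print(f"core-related fraction of far-field power: {p_core.sum() / p_far.sum():.3f}")
```

Even though uncorrelated sources dominate the far-field signal, the coherence-weighted spectrum isolates the small fraction that propagated from the "core", which is the essence of the measurement.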
Elevated Arsenic and Uranium Concentrations in Unregulated Water Sources on the Navajo Nation, USA.
Hoover, Joseph; Gonzales, Melissa; Shuey, Chris; Barney, Yolanda; Lewis, Johnnye
2017-01-01
Regional water pollution and use of unregulated water sources can be an important mixed metals exposure pathway for rural populations located in areas with limited water infrastructure and an extensive mining history. Using censored data analysis and mapping techniques we analyzed the joint geospatial distribution of arsenic and uranium in unregulated water sources throughout the Navajo Nation, where over 500 abandoned uranium mine sites are located in the rural southwestern United States. Results indicated that arsenic and uranium concentrations exceeded national drinking water standards in 15.1 % (arsenic) and 12.8 % (uranium) of tested water sources. Unregulated sources in close proximity (i.e., within 6 km) to abandoned uranium mines yielded significantly higher concentrations of arsenic or uranium than more distant sources. The demonstrated regional trends for potential co-exposure to these chemicals have implications for public policy and future research. Specifically, to generate solutions that reduce human exposure to water pollution from unregulated sources in rural areas, the potential for co-exposure to arsenic and uranium requires expanded documentation and examination. Recommendations for prioritizing policy and research decisions related to the documentation of existing health exposures and risk reduction strategies are also provided.
Study of acoustic emission during mechanical tests of large flight weight tank structure
NASA Technical Reports Server (NTRS)
Nakamura, Y.; Mccauley, B. O.; Veach, C. L.
1972-01-01
A polyphenylene oxide insulated, flight-weight, subscale aluminum tank was monitored for acoustic emissions during a proof test and during 100 cycles of environmental testing simulating space flights. The use of a combination of frequency filtering and appropriate spatial filtering to reduce background noise was found to be sufficient to detect the relatively low-intensity acoustic emission signals expected from subcritical crack growth in the structure. Several emission source locations were identified, including one where a flaw was detected by post-test X-ray inspection. For most source locations, however, post-test inspections did not detect flaws; this was partially attributed to the acoustic emission technique being more sensitive than any other currently available NDT method for detecting flaws.
NASA Astrophysics Data System (ADS)
Alden, Caroline B.; Ghosh, Subhomoy; Coburn, Sean; Sweeney, Colm; Karion, Anna; Wright, Robert; Coddington, Ian; Rieker, Gregory B.; Prasad, Kuldeep
2018-03-01
Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (>500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model-data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10⁻⁵ kg s⁻¹ of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km² in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min.
The results of the synthetic and field data testing show that the new observing system and statistical approach greatly decreases the incidence of false alarms (that is, wrongly identifying a well site to be leaking) compared with the same tests that do not use the NZMB approach and therefore offers increased leak detection and sizing capabilities.
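The hypothesis-rejection step can be sketched as a bootstrap test on the empirical distribution of inferred source strengths. This is only a hedged illustration of the idea: the sample values, sizes, and the exact resampling statistic are invented here, and the paper's NZMB differs in detail:

```python
import numpy as np

def nzmb_excludes_zero(strength_samples, n_boot=2000, alpha=0.05, seed=0):
    """Sketch of a non-zero-minimum-bootstrap-style test: resample the
    empirical distribution of inferred source strengths and reject the
    'not leaking' null only if the lower alpha-quantile of the bootstrap
    means stays above zero. (The paper's NZMB differs in detail.)"""
    rng = np.random.default_rng(seed)
    x = np.asarray(strength_samples, dtype=float)
    means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_boot)])
    return bool(np.quantile(means, alpha) > 0.0)

rng = np.random.default_rng(1)
# Hypothetical inversion draws of a leak rate (kg/s) at one well site...
leaking = 3.1e-5 + 1.0e-5 * rng.standard_normal(200)
# ...and a symmetrized, exactly zero-mean null case for a quiet site.
half = 1.0e-5 * rng.standard_normal(100)
quiet = np.concatenate([half, -half])

print(nzmb_excludes_zero(leaking), nzmb_excludes_zero(quiet))
```

Only the site whose strength distribution excludes zero is flagged as leaking; the quiet site's distribution straddles zero, so the null is retained, which is how the approach suppresses false alarms.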
Apparatus and method for high dose rate brachytherapy radiation treatment
Macey, Daniel J.; Majewski, Stanislaw; Weisenberger, Andrew G.; Smith, Mark Frederick; Kross, Brian James
2005-01-25
A method and apparatus for the in vivo location and tracking of a radioactive seed source during and after brachytherapy treatment. The method comprises obtaining multiple views of the seed source in a living organism using: 1) a single PSPMT detector that is exposed through a multiplicity of pinholes thereby obtaining a plurality of images from a single angle; 2) a single PSPMT detector that may obtain an image through a single pinhole or a plurality of pinholes from a plurality of angles through movement of the detector; or 3) a plurality of PSPMT detectors that obtain a plurality of views from different angles simultaneously or virtually simultaneously. The plurality of images obtained from these various techniques, through angular displacement of the various acquired images, provide the information required to generate the three dimensional images needed to define the location of the radioactive seed source within the body of the living organism.
Passive coherent location direct signal suppression using hardware mixing techniques
NASA Astrophysics Data System (ADS)
Kaiser, Sean A.; Christianson, Andrew J.; Narayanan, Ram M.
2017-05-01
Passive coherent location (PCL) is a radar technique in which the system uses reflections from opportunistic illumination sources in the environment for detection and tracking. Typically, PCL uses civilian communication transmitters not ideally suited for radar. The physical geometry of PCL is developed on the basis of bistatic radar, without control of the transmitter antenna or waveform design. As a consequence, the receiver is typically designed with two antennas and channels, one for reference and one for surveillance. The surveillance channel is also contaminated with the direct signal, so direct signal suppression (DSS) techniques must be used. This paper proposes an analytical, hardware-based solution for DSS, which is compared to other methods available in the literature. The methods are tested in varying bistatic geometries and with varying target radar cross section (RCS) and signal-to-noise ratio (SNR).
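For context, a common software DSS baseline of the kind such hardware approaches are compared against is an adaptive canceller. The sketch below is a minimal least-mean-squares (LMS) filter with invented parameters; it is not the paper's hardware mixing technique:

```python
import numpy as np

def lms_cancel(reference, surveillance, n_taps=8, mu=0.01):
    """LMS adaptive canceller: estimate the direct-path signal in the
    surveillance channel from the reference channel and subtract it,
    leaving target echoes (and noise) in the residual."""
    w = np.zeros(n_taps)
    out = np.zeros_like(surveillance)
    for i in range(n_taps, len(surveillance)):
        x = reference[i - n_taps + 1:i + 1][::-1]  # current + past reference
        y = w @ x                                  # direct-signal estimate
        e = surveillance[i] - y                    # residual
        w += mu * e * x                            # LMS weight update
        out[i] = e
    return out

rng = np.random.default_rng(0)
direct = rng.standard_normal(5000)                  # direct signal (reference)
surveillance = direct + 0.05 * np.roll(direct, 40)  # plus a weak delayed echo
cleaned = lms_cancel(direct, surveillance)
print(f"residual power fraction: {cleaned.var() / surveillance.var():.3f}")
```

After convergence the strong direct signal is removed while the delayed target echo, which falls outside the filter's taps, survives in the residual.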
Sensorimotor System Measurement Techniques
Riemann, Bryan L.; Myers, Joseph B.; Lephart, Scott M.
2002-01-01
Objective: To provide an overview of currently available sensorimotor assessment techniques. Data Sources: We drew information from an extensive review of the scientific literature conducted in the areas of proprioception, neuromuscular control, and motor control measurement. Literature searches were conducted using MEDLINE for the years 1965 to 1999 with the key words proprioception, somatosensory evoked potentials, nerve conduction testing, electromyography, muscle dynamometry, isometric, isokinetic, kinetic, kinematic, posture, equilibrium, balance, stiffness, neuromuscular, sensorimotor, and measurement. Additional sources were collected using the reference lists of identified articles. Data Synthesis: Sensorimotor measurement techniques are discussed with reference to the underlying physiologic mechanisms, influential factors and locations of the variable within the system, clinical research questions, limitations of the measurement technique, and directions for future research. Conclusions/Recommendations: The complex interactions and relationships among the individual components of the sensorimotor system make measuring and analyzing specific characteristics and functions difficult. Additionally, the specific assessment techniques used to measure a variable can influence attained results. Optimizing the application of sensorimotor research to clinical settings can, therefore, be best accomplished through the use of common nomenclature to describe underlying physiologic mechanisms and specific measurement techniques. PMID:16558672
The analysis of complex mixed-radiation fields using near real-time imaging.
Beaumont, Jonathan; Mellor, Matthew P; Joyce, Malcolm J
2014-10-01
A new mixed-field imaging system has been constructed at Lancaster University using the principles of collimation and back projection to passively locate and assess sources of neutron and gamma-ray radiation. The system was set up at the University of Manchester, where three radiation sources were imaged: ²⁵²Cf, a lead-shielded ²⁴¹Am/Be, and a ²²Na source. Real-time discrimination was used to find the respective components of the neutron and gamma-ray fields detected by a single EJ-301 liquid scintillator, allowing separate images of neutron and gamma-ray emitters to be formed. ²⁵²Cf and ²²Na were successfully observed and located in the gamma-ray image; however, the ²⁴¹Am/Be was not seen owing to its surrounding lead shielding. The ²⁵²Cf and ²⁴¹Am/Be neutron sources were seen clearly in the neutron image, demonstrating the advantage of this mixed-field technique over a gamma-ray-only image, in which the ²⁴¹Am/Be source would have gone undetected.
Jones, Ryan M; O'Reilly, Meaghan A; Hynynen, Kullervo
2015-07-01
This study experimentally verifies a previously described technique for performing passive acoustic imaging through an intact human skull using noninvasive, computed tomography (CT)-based aberration corrections [Jones et al., Phys. Med. Biol. 58, 4981-5005 (2013)]. A sparse hemispherical receiver array (30 cm diameter) consisting of 128 piezoceramic discs (2.5 mm diameter, 612 kHz center frequency) was used to passively listen through ex vivo human skullcaps (n = 4) to acoustic emissions from a narrow-band fixed source (1 mm diameter, 516 kHz center frequency) and from ultrasound-stimulated (5 cycle bursts, 1 Hz pulse repetition frequency, estimated in situ peak negative pressure 0.11-0.33 MPa, 306 kHz driving frequency) Definity™ microbubbles flowing through a thin-walled tube phantom. Initial in vivo feasibility testing of the method was performed. The performance of the method was assessed through comparisons to images generated without skull corrections, with invasive source-based corrections, and with water-path control images. For source locations at least 25 mm from the inner skull surface, the modified reconstruction algorithm successfully restored a single focus within the skull cavity at a location within 1.25 mm of the true position of the narrow-band source. The results obtained from imaging single bubbles are in good agreement with numerical simulations of point source emitters and the authors' previous experimental measurements using source-based skull corrections [O'Reilly et al., IEEE Trans. Biomed. Eng. 61, 1285-1294 (2014)]. In a rat model, microbubble activity was mapped through an intact human skull at pressure levels below and above the threshold for focused ultrasound-induced blood-brain barrier opening. During bursts that led to coherent bubble activity, the location of maximum intensity in images generated with CT-based skull corrections was found to deviate by less than 1 mm, on average, from the position obtained using source-based corrections.
Taken together, these results demonstrate the feasibility of using the method to guide bubble-mediated ultrasound therapies in the brain. The technique may also have application in ultrasound-based cerebral angiography.
NASA Technical Reports Server (NTRS)
Estulin, I. V.
1977-01-01
The progress made and techniques used by the Soviet-French group in the study of gamma and X ray pulses are described in abstracts of 16 reports. Experiments included calibration and operation of various recording instruments designed for measurements involving these pulses, specifically the location of sources of such pulses in outer space. Space vehicles are utilized in conjunction with ground equipment to accomplish these tests.
An inquiry into application of Gokyo (Aikido's Fifth Teaching) on human anatomy.
Olson, G D; Seitz, F C; Guldbrandsen, F
1996-06-01
In this anatomical analysis, the authors examined Gokyo, Aikido's Fifth Teaching. Using their cadaver/anatomist-observer model, the authors observed that the tissues manipulated by the technique were primarily on the dorsal side of the wrist, proximal to the second metacarpal. The source of the pain was thought to involve the manipulation of the wrist joints and associated carpometacarpal ligaments. The locations of the manipulated tissue, the sources of pain associated with that tissue, and the technique's limited practical application are discussed.
40 CFR Appendix W to Part 51 - Guideline on Air Quality Models
Code of Federal Regulations, 2011 CFR
2011-07-01
... sufficient spatial and temporal coverage are available. c. It would be advantageous to categorize the various... control strategies. These are referred to as refined models. c. The use of screening techniques followed... location of the source in question and its expected impacts. c. In all regulatory analyses, especially if...
2007-09-30
adaptively using real-time data collected with the international constellation of ocean color satellites, a nested grid of HF radars, and an...scattering source was identified during the experiment as dense, monotypic aggregations of a pelagic gastropod were located during a 2-day period. These
Due to the computational cost of running regional-scale numerical air quality models, reduced form models (RFM) have been proposed as computationally efficient simulation tools for characterizing the pollutant response to many different types of emission reductions. The U.S. Envi...
Water quality and shellfish sanitation. [Patuxent and Choptank River watersheds
NASA Technical Reports Server (NTRS)
Eisenberg, M.
1978-01-01
The use of remote sensing techniques for collecting bacteriological, physical, and chemical water quality data, locating point and nonpoint sources of pollution, and developing hydrological data was found to be valuable to the Maryland program if it could be produced effectively and rapidly with a minimum amount of ground corroboration.
Looking inside the microseismic cloud using seismic interferometry
NASA Astrophysics Data System (ADS)
Matzel, E.; Rhode, A.; Morency, C.; Templeton, D. C.; Pyle, M. L.
2015-12-01
Microseismicity provides a direct means of measuring the physical characteristics of active tectonic features such as fault zones. Thousands of microquakes are often associated with an active site, and this cloud of microseismicity helps define the tectonically active region. When processed using novel geophysical techniques, these data let us isolate the energy sensitive to the faulting region itself. The virtual seismometer method (VSM) is a technique of seismic interferometry that provides precise estimates of the Green's function (GF) between earthquakes. In many ways the converse of ambient noise correlation, it is very sensitive to the source parameters (location, mechanism, and magnitude) and to the Earth structure in the source region. In a region with 1000 microseisms, we can calculate roughly 500,000 waveforms sampling the active zone. At the same time, VSM collapses the computation domain down to the size of the cloud of microseismicity, often by 2-3 orders of magnitude. In simple terms, VSM involves correlating the waveforms from a pair of events recorded at an individual station and then stacking the results over all stations to obtain the final result. In the far field, when most of the stations in a network fall along a line between the two events, the result is an estimate of the GF between the two, modified by the source terms. In this geometry each earthquake is effectively a virtual seismometer recording all the others. When applied to microquakes, this alignment is often not met, and we also need to address the effects of the geometry between the two microquakes relative to each seismometer. Nonetheless, the technique is quite robust and highly sensitive to the microseismic cloud. Using data from the Salton Sea geothermal region, we demonstrate the power of the technique, illustrating our ability to scale it from the far field, where sources are well separated, to the near field, where their locations fall within each other's uncertainty ellipse.
VSM provides better illumination of the complex subsurface by generating precise, high-frequency estimates of the GF and resolution of seismic properties between every pair of events. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
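The correlate-and-stack core of VSM can be sketched on toy data. The data layout (station-keyed waveform dictionaries) and the stacking details are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def virtual_seismometer(records_a, records_b):
    """Sketch of the virtual seismometer method: cross-correlate the
    waveforms of two events recorded at each common station, then stack
    the correlations over stations to estimate the (source-modified)
    Green's function between the event pair. Hypothetical layout:
    records_* maps station name -> 1-D waveform (same sampling)."""
    common = records_a.keys() & records_b.keys()
    stack = None
    for sta in common:
        xcorr = np.correlate(records_a[sta], records_b[sta], mode="full")
        stack = xcorr if stack is None else stack + xcorr
    return stack / len(common)

# Toy example: identical pulses delayed by 5 samples at three stations.
n = 64
pulse = np.exp(-0.5 * ((np.arange(n) - 20) / 2.0) ** 2)
rec_a = {s: pulse for s in ("STA1", "STA2", "STA3")}
rec_b = {s: np.roll(pulse, 5) for s in ("STA1", "STA2", "STA3")}
gf = virtual_seismometer(rec_a, rec_b)
# Peak lag recovers the 5-sample delay between the two events
# (sign follows NumPy's correlation convention).
lag = np.argmax(gf) - (n - 1)
print(lag)
```

In real use, each correlation would be weighted by the station geometry relative to the inter-event path before stacking, which this sketch omits.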
Ismail, Azimah; Toriman, Mohd Ekhwan; Juahir, Hafizan; Zain, Sharifuddin Md; Habir, Nur Liyana Abdul; Retnam, Ananthy; Kamaruddin, Mohd Khairul Amri; Umar, Roslan; Azid, Azman
2016-05-15
This study presents the determination of the spatial variation and source identification of heavy metal pollution in surface water along the Straits of Malacca using several chemometric techniques. Clustering and discrimination of heavy metal compounds in surface water into two groups (northern and southern regions) are observed according to concentration levels via the application of chemometric techniques. Principal component analysis (PCA) demonstrates that Cu and Cr dominate the source apportionment in the northern region, with a total variance of 57.62%, and are identified with mining and shipping activities; these are the major contamination contributors in the Straits. Land-based pollution originating from vehicular emission, with a total variance of 59.43%, is attributed to the high level of Pb concentration in the southern region. The results revealed that one state representing each cluster (northern and southern regions) is sufficient as the main location for investigating heavy metal concentrations in the Straits of Malacca, which would save monitoring cost and time.
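The PCA step behind this kind of source apportionment can be sketched on synthetic data. The station count, metals, and factor loadings below are invented for illustration; the study's actual data and software are not reproduced:

```python
import numpy as np

# Hypothetical station x metal concentration matrix (columns: Cu, Cr, Pb)
# driven by two latent sources, mirroring the mining/traffic distinction.
rng = np.random.default_rng(0)
n = 50
mining = rng.lognormal(0.0, 0.5, n)    # latent "mining/shipping" factor
traffic = rng.lognormal(0.0, 0.5, n)   # latent "vehicular emission" factor
X = np.column_stack([
    2.0 * mining + 0.1 * rng.standard_normal(n),   # Cu tracks mining
    1.5 * mining + 0.1 * rng.standard_normal(n),   # Cr tracks mining
    2.0 * traffic + 0.1 * rng.standard_normal(n),  # Pb tracks traffic
])

# PCA via eigendecomposition of the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = Z.T @ Z / n
evals, evecs = np.linalg.eigh(corr)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
explained = evals / evals.sum()

# Two components account for nearly all the variance: one loading on
# Cu and Cr together (mining/shipping), one on Pb (traffic).
print(np.round(explained, 2))
```

The loadings in `evecs` are what an analyst would inspect to associate each component with a physical source type, as the study does for Cu/Cr versus Pb.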
A review of second law techniques applicable to basic thermal science research
NASA Astrophysics Data System (ADS)
Drost, M. Kevin; Zamorski, Joseph R.
1988-11-01
This paper reports the results of a review of second law analysis techniques which can contribute to basic research in the thermal sciences. The review demonstrated that second law analysis has a role in basic thermal science research. Unlike traditional techniques, second law analysis accurately identifies the sources and location of thermodynamic losses. This allows the development of innovative solutions to thermal science problems by directing research to the key technical issues. Two classes of second law techniques were identified as being particularly useful. First, system and component investigations can provide information on the source and nature of irreversibilities on a macroscopic scale. This information will help to identify new research topics and will support the evaluation of current research efforts. Second, the differential approach can provide information on the causes and the spatial and temporal distribution of local irreversibilities. This information enhances the understanding of fluid mechanics, thermodynamics, and heat and mass transfer, and may suggest innovative methods for reducing irreversibilities.
NASA Astrophysics Data System (ADS)
Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.
2004-04-01
A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP, and the contributions of different elements to the production of a specified gamma line have been determined. The resulting response matrix has been used, together with the measured peak areas, to determine the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for the elimination of interference reactions in neutron activation analysis.
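Once the response matrix is known, recovering the masses is a linear unfolding. A hedged sketch with invented numbers (the response values here stand in for MCNP-derived sensitivities; they are not from the paper):

```python
import numpy as np

# R[i, j] is a hypothetical count rate in gamma line i per unit mass of
# element j; off-diagonal terms represent interference reactions that
# feed the same gamma line from a different element.
R = np.array([
    [120.0,  15.0],   # line 1: mostly element A, some interference from B
    [  8.0, 200.0],   # line 2: mostly element B
])
peak_areas = np.array([1350.0, 2080.0])   # measured peak areas (counts)

# Solve R @ masses = peak_areas for the interference-corrected masses.
masses = np.linalg.solve(R, peak_areas)
print(np.round(masses, 3))
```

With more gamma lines than elements the same idea becomes a least-squares fit (e.g., `np.linalg.lstsq`), which is the usual way measurement redundancy is exploited.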
Electron Energy Distribution function in a weakly magnetized expanding helicon plasma discharge
NASA Astrophysics Data System (ADS)
Sirse, Nishant; Harvey, Cleo; Gaman, Cezar; Ellingboe, Bert
2016-09-01
Helicon wave heating is well known to produce high-density plasma sources for applications in plasma thrusters, plasma processing, and more. Our previous study (B. Ellingboe et al., APS Gaseous Electronics Conference 2015, abstract #KW2.005) reported the observation of a helicon wave in a weakly magnetized inductively coupled plasma source excited by an m = 0 antenna at 13.56 MHz. In this paper, we investigate the electron energy distribution function (EEDF) in the same setup using an RF-compensated Langmuir probe. The ac signal superimposition technique (second-harmonic technique) is used to determine the EEDF. The EEDF is measured for 5-100 mTorr gas pressure, 100 W to 1.5 kW rf power, and at different locations in the source chamber, at the boundary, and in the diffusion chamber. This paper discusses the change in the shape of the EEDF across the various heating-mode transitions.
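The second-harmonic technique rests on the Druyvesteyn relation: the EEDF is proportional to the second derivative of the probe's electron current with respect to bias voltage, which the lock-in measures directly. A hedged sketch on an idealized Maxwellian characteristic (assumed temperature and plasma potential; no sheath, noise, or rf-distortion effects):

```python
import numpy as np

Te = 3.0                    # eV, assumed electron temperature
Vp = 10.0                   # V, assumed plasma potential
V = np.linspace(-10.0, Vp, 2001)
I = np.exp(-(Vp - V) / Te)  # electron retardation current (arb. units)

d2I = np.gradient(np.gradient(I, V), V)  # ~ what the 2nd-harmonic lock-in gives
eps = Vp - V                              # electron energy scale, eV

# For a Maxwellian, ln(d2I) falls linearly with energy at slope -1/Te,
# so the temperature is recovered from a semilog fit.
mask = (eps > 1.0) & (eps < 15.0)
slope = np.polyfit(eps[mask], np.log(d2I[mask]), 1)[0]
print(f"recovered Te ~ {-1.0 / slope:.2f} eV")
```

In measured data, departures of ln(d²I/dV²) from a straight line are exactly the EEDF shape changes (e.g., bi-Maxwellian tails across heating-mode transitions) that such a study tracks.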
Noise Sources in Photometry and Radial Velocities
NASA Astrophysics Data System (ADS)
Oshagh, Mahmoudreza
The quest for Earth-like extrasolar planets (exoplanets), especially those located inside the habitable zone of their host stars, requires techniques sensitive enough to detect the faint signals produced by those planets. The radial velocity (RV) and photometric transit methods are the most widely used, and also the most efficient, methods for detecting and characterizing exoplanets. However, the presence of astrophysical "noise" makes it difficult to detect and accurately characterize exoplanets. Importantly, the amplitude of such astrophysical noise is larger than both the signal of Earth-like exoplanets and the precision limit of state-of-the-art instrumentation, making this a pressing topic that needs to be addressed. In this chapter, I present a general review of the main sources of noise in photometric and RV observations, namely stellar oscillations, granulation, and magnetic activity. Moreover, for each noise source I discuss the techniques and observational strategies that allow us to mitigate its impact.
Testing the seismology-based landquake monitoring system
NASA Astrophysics Data System (ADS)
Chao, Wei-An
2016-04-01
I have developed a real-time landquake monitoring (RLM) system, which monitors large-scale landquake activity in Taiwan using the real-time seismic network of the Broadband Array in Taiwan for Seismology (BATS). The RLM system applies a grid-based general source inversion (GSI) technique to obtain the preliminary source location and force mechanism. A 2-D virtual source grid on the Taiwan Island is created with an interval of 0.2° in both latitude and longitude. The depth of each grid point is fixed on the free-surface topography. A database of synthetics is stored on disk; these are obtained using Green's functions computed by the propagator matrix approach for a 1-D average velocity model, at all stations from each virtual source-grid point, for nine elementary source components: six elementary moment tensors and three orthogonal (north, east, and vertical) single forces. The offline RLM system was run for events detected in previous studies. An important aspect of the RLM system is the implementation of the GSI approach for different source types (e.g., full moment tensor, double-couple faulting, and explosion source) by grid search through the 2-D virtual source grid, to automatically identify landquake events based on the improvement in waveform fitness and to evaluate the best-fit solution in the monitoring area. With this approach, not only the force mechanisms but also the event occurrence time and location can be obtained simultaneously, about 6-8 min after an event occurs. To improve the accuracy of the GSI-determined location, I further apply a landquake epicenter determination (LED) method that maximizes the coherency of the high-frequency (1-3 Hz) horizontal envelope functions to determine the final source location. With good knowledge of the source location, I perform landquake force history (LFH) inversion to investigate the source dynamics (e.g., trajectory) of relatively large landquake events.
By providing the aforementioned source information in real time, the system gives government and emergency response agencies sufficient reaction time for rapid assessment of and response to landquake hazards. The RLM system has operated online since 2016.
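The grid-search core of the GSI step can be illustrated with a minimal sketch (an illustrative toy, not the author's code: the function name and the variance-reduction fitness measure are assumptions). For each virtual source grid point, precomputed synthetics are compared against the observed waveforms, and the grid point with the best waveform fit is taken as the preliminary solution:

```python
import numpy as np

def grid_search_source(observed, synthetics):
    """Pick the virtual source grid point whose synthetic waveform best
    fits the observed data, scoring by variance reduction.

    observed   : (n_samples,) observed waveform (stations concatenated)
    synthetics : (n_grid, n_samples) precomputed synthetics per grid point
    Returns (best_grid_index, fitness_per_grid_point).
    """
    # Variance reduction: 1 - ||obs - syn||^2 / ||obs||^2 (1 = perfect fit)
    resid = observed[None, :] - synthetics
    fitness = 1.0 - np.sum(resid**2, axis=1) / np.sum(observed**2)
    return int(np.argmax(fitness)), fitness
```

In the full system this search would run over all grid points and all elementary source combinations; here a single scan over precomputed synthetics conveys the idea.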
NASA Astrophysics Data System (ADS)
Lachance, R. L.; Gordley, L. L.; Marshall, B. T.; Fisher, J.; Paxton, G.; Gubeli, J. F.
2015-12-01
Currently there is no efficient and affordable way to monitor gas releases over small to large areas. We have demonstrated the ability to accurately measure key greenhouse and pollutant gases with low-cost solar observations using the breakthrough sensor technology called "Pupil Imaging Gas Correlation", PIGC™, which provides size and complexity reduction while delivering exceptional resolution and coverage for various gas sensing applications. It is a practical implementation of the well-known Gas Filter Correlation Radiometry (GFCR) technique used for the HALOE and MOPITT satellite instruments flown on successful NASA missions in the early 2000s. This strong space heritage brings performance and reliability to the ground instrument design. A methane (CH4) abundance sensitivity of 0.5% or better of the ambient column has been demonstrated with uncooled microbolometers and 1-second direct solar observations. These sub-$10k sensors can be deployed in precisely balanced autonomous grids to monitor the flow of chosen gases and infer their source locations. Measurable gases include CH4, 13CO2, N2O, NO, NH3, CO, H2S, HCN, HCl, HF, HDO, and others. A single instrument operates in a dual mode, at no additional cost: continuous (real-time 24/7) local-area perimeter monitoring for leak detection for safety and security needs, viewing an artificial light source (for example, a simple 60 W light bulb placed 100 m away), while simultaneously allowing solar observation for quasi-continuous wide-area total atmospheric column scanning (3-D) for environmental monitoring (fixed and mobile configurations). The second mode of operation continuously quantifies the concentration and flux of specific gases over different ground locations, determining the amount of targeted gas being released from the area or entering the area from outside locations, allowing better tracking of plumes and identification of sources.
This paper reviews the measurement technique, performance demonstration and grid deployment strategy.
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
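As an illustration of the circular (TOA) least-squares category, a minimal Gauss-Newton sketch (a hedged toy under simplifying assumptions, not any specific estimator from the survey) solves the nonlinear range equations iteratively:

```python
import numpy as np

def toa_least_squares(sensors, ranges, x0, iters=20):
    """Gauss-Newton least squares for a source position from
    time-of-arrival range measurements (circular location system).

    sensors : (m, 2) sensor coordinates
    ranges  : (m,) measured source-sensor distances (wave speed * TOA)
    x0      : (2,) initial position guess
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)   # predicted ranges
        J = (x - sensors) / d[:, None]            # Jacobian d(range)/dx
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

The TDOA (hyperbolic) case follows the same pattern with range differences to a reference sensor; in both cases measurement noise makes the choice of initial guess and weighting matter, which is where the least-squares and maximum-likelihood families discussed above diverge.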
Consistent modelling of wind turbine noise propagation from source to receiver.
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick
2017-11-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
NASA Astrophysics Data System (ADS)
Zhao, Zhan
2009-12-01
My dissertation consists of three parts. Parts I and II focus on climate change impacts on meteorology and air quality in California (CA), while Part III focuses on the source-receptor relationship. The WRF model is applied to dynamically downscale PCM data, with a horizontal resolution of approximately 2.8° x 2.8°, to 4 km resolution under the Business as Usual (BAU) scenario. The dynamical downscaling method retains the large-scale features of the global simulations while adding meso-scale detail. A seven-year simulation is conducted for both the present (2000-2006) and the future (2047-2053) in order to avoid El Niño-related inter-annual variation. To assess the PCM data quality and estimate the simulation error inherited from the PCM data bias, the present-day seven-year simulations are also driven by NCEP's Global Forecast System (GFS) data with the same model configuration. Part I focuses on comparisons of the present-day climatology from the two sets of simulations and the driving global datasets (i.e., PCM vs. GFS), which illustrate that the biases of the downscaling results are mostly inherited from the driving GCM. The imprecise prediction of the location and strength of the Pacific Subtropical High (PSH) is a main source of the PCM data bias. The analysis also implies that using simulation results driven by PCM data as input to the air quality model will underrate the air pollution problems in CA. Regionally averaged statistics of the downscaling results compared to observational data show that both surface temperature and wind speed were overestimated for most of the year, and WRF performed better during summer than winter. The low summer PBLH in the San Joaquin Valley (SJV) is addressed; its two causes are the dominance of a high pressure system over the valley and, to a lesser extent, the daytime valley wind during summer.
Part II focuses on future changes in meteorology and air quality in CA; comparisons are made between future and present simulations driven by the PCM data. Both the duration and strength of stagnation events, during which most air pollution problems occur in the SJV, increase during summer and winter. The seven-year averaged spatial distributions of air-pollution-related meteorological variables, such as surface wind, temperature, and PBLH, indicate that the future summer ozone problem would be mitigated in the coastal region of Los Angeles County (LAC), while both the summer ozone and winter particulate matter (PM) problems in the SJV and other parts of the Southern California Air Basin (SoCAB) will be exacerbated in the future. The impact on the land-sea breeze, which plays a large role in California's climate, is also explored in this part. Part III investigates the potential of applying a signal technique to the source-receptor relationship. This approach is more economical in terms of computational time and memory than the conventional tracer method. The signal technique was implemented in the WRF model, and an idealized supercell case and a real case in Turkey were used to investigate its potential. Emissions from different source locations were tagged with different frequencies, which were added onto the emitted pollutants, with a specific frequency for each location. The time series of pollutant concentration collected at receptors were then projected onto frequency space using the Fourier transform and short-time Fourier transform methods to identify the source locations. During the model integration, a distinct constant tracer was also emitted from each pollutant source location to validate and evaluate the signal technique.
Results show that the frequencies could be slightly shifted after signals were transported over a long distance and evident secondary frequencies (i.e., beats) could be generated due to nonlinear effects. Although these could potentially confuse the identification of signals released from source points, signals were still distinguishable in this study.
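The frequency-tagging idea described above can be sketched in a few lines (a hedged toy: the function, threshold, and tag frequencies are illustrative assumptions, not the dissertation's WRF implementation). Each source modulates its emissions at a distinct frequency; the spectrum of the receptor time series then reveals which sources contribute:

```python
import numpy as np

def identify_sources(signal, dt, tag_freqs, threshold=0.1):
    """Identify which frequency-tagged sources contribute to a receptor
    concentration time series, via its one-sided amplitude spectrum.

    signal    : (n,) concentration time series at the receptor
    dt        : sampling interval (s)
    tag_freqs : dict mapping source name -> tag frequency (Hz)
    """
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / n   # one-sided amplitudes
    freqs = np.fft.rfftfreq(n, dt)
    found = {}
    for name, f in tag_freqs.items():
        amp = spec[np.argmin(np.abs(freqs - f))]   # amplitude at the tag bin
        if amp > threshold:
            found[name] = amp
    return found
```

The frequency shifts and nonlinear "beats" reported above would appear in this sketch as energy leaking into neighboring or secondary bins, which is why a tolerance around each tag frequency is needed in practice.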
Multi-channel photon counting DOT system based on digital lock-in detection technique
NASA Astrophysics Data System (ADS)
Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng
2011-02-01
Relying on the deep penetration of light into tissue, diffuse optical tomography (DOT) achieves organ-level tomographic diagnosis and can provide information on anatomical and physiological features. DOT has been widely used in imaging of the breast, neonatal cerebral oxygen status, and blood oxygen kinetics, owing to its non-invasive nature, safety, and other advantages. Continuous-wave DOT image reconstruction algorithms need measurements of the surface distribution of the output photon flow excited by more than one driving source, which means that source coding is necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which uses an optical switch to direct light into optical fibers at different locations. However, when there are many source locations, or when multiple wavelengths are used, the TDM measurement time and the measurement interval between different locations within the same measurement period become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technology is developed, in which light sources modulated by sine waves of different frequencies illuminate the imaging chamber simultaneously. The signal corresponding to an individual source is extracted from the mixed output light using digital phase-locked (lock-in) detection at the detection end. A digital lock-in detection circuit for a photon counting measurement system is implemented on an FPGA development platform. A dual-channel DOT photon counting experimental system is preliminarily established, including two continuous lasers, photon counting detectors, the digital lock-in detection control circuit, and code to control the hardware and display the results. A series of experimental measurements is taken to validate the feasibility of the system.
The method developed in this paper greatly accelerates DOT system measurement and can also obtain multiple measurements at different source-detector locations.
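The digital lock-in principle behind the demultiplexing can be sketched in software (an illustrative I/Q demodulation toy, not the paper's FPGA circuit; the function name is an assumption). The mixed detector signal is multiplied by reference sine and cosine waves at one source's modulation frequency and averaged, which rejects the other sources' frequencies:

```python
import numpy as np

def lockin_amplitude(signal, dt, f_ref):
    """Recover the amplitude of the component of `signal` modulated at
    f_ref, by digital lock-in (in-phase/quadrature) demodulation.

    signal : (n,) sampled mixed detector signal
    dt     : sampling interval (s)
    f_ref  : reference (modulation) frequency (Hz)
    """
    t = np.arange(len(signal)) * dt
    i_comp = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
    q_comp = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    # Each mean keeps half the target amplitude; other frequencies average out.
    return 2.0 * np.hypot(i_comp, q_comp)
```

Averaging over an integer number of modulation periods makes the rejection of the other channels exact, which is the reason frequency-division multiplexing lets all sources illuminate the chamber simultaneously.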
NASA Astrophysics Data System (ADS)
Dai, Qianwei; Lin, Fangpeng; Wang, Xiaoping; Feng, Deshan; Bayless, Richard C.
2017-05-01
An integrated geophysical investigation was performed at the S dam, located in the Dadu basin in China, to assess the condition of the dam curtain. The key methodology of the integrated technique was the flow-field fitting method, which allowed identification of the hydraulic connections between the dam foundation and surface water sources (upstream and downstream), and location of the anomalous leakage outlets in the dam foundation. Limitations of the flow-field fitting method were complemented with resistivity logging to identify internal erosion that had not yet developed into seepage pathways. The results of the flow-field fitting method and resistivity logging were consistent when compared with data provided by seismic tomography, borehole television, a water injection test, and rock quality designation.
Yatsushiro, Satoshi; Hirayama, Akihiro; Matsumae, Mitsunori; Kajiwara, Nao; Abdullah, Afnizanfaizal; Kuroda, Kagayaki
2014-01-01
Correlation time mapping based on magnetic resonance (MR) velocimetry has been applied to pulsatile cerebrospinal fluid (CSF) motion to visualize the pressure transmission between CSF at different locations and/or between CSF and arterial blood flow. Healthy volunteer experiments demonstrated that the technique revealed pulsatile CSF motion transmitted from the CSF space in the vicinity of blood vessels, with short delays and relatively high correlation coefficients. Experiments on patients indicated that the properties of their CSF motion differed from those of the healthy volunteers. The resulting images in healthy volunteers implied slight individual differences in the locations of the CSF driving sources. Clinical interpretation of these preliminary results is required before applying the present technique to classify the status of hydrocephalus.
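The delay-and-correlation idea underlying such mapping can be sketched generically (a hedged toy, not the authors' MR-specific method: the function and test waveforms are assumptions). For each pair of locations, the delay maximizing the normalized cross-correlation gives the transmission lag, and the peak value gives the correlation coefficient:

```python
import numpy as np

def correlation_delay(ref, sig, dt):
    """Delay (seconds) of `sig` relative to `ref` that maximizes the
    normalized cross-correlation, plus the peak correlation value.

    ref, sig : equal-length 1-D waveforms (e.g., velocity time series)
    dt       : sampling interval (s)
    """
    ref = (ref - ref.mean()) / ref.std()           # z-score both series
    sig = (sig - sig.mean()) / sig.std()
    corr = np.correlate(sig, ref, mode="full") / len(ref)
    lags = np.arange(-len(ref) + 1, len(sig))      # lag axis in samples
    k = np.argmax(corr)
    return lags[k] * dt, corr[k]
```

Mapping this delay and coefficient over all voxels relative to a reference waveform (e.g., arterial flow) would produce the kind of correlation time map the abstract describes.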
Plain Language to Communicate Physical Activity Information: A Website Content Analysis.
Paige, Samantha R; Black, David R; Mattson, Marifran; Coster, Daniel C; Stellefson, Michael
2018-04-01
Plain language techniques are health literacy universal precautions intended to enhance health care system navigation and health outcomes. Physical activity (PA) is a popular topic on the Internet, yet it is unknown whether this information is communicated in plain language. This study examined how plain language techniques are included in PA websites, and whether the use of plain language techniques varies according to search procedures (keyword, search engine) and website host source (government, commercial, educational/organizational). Three keywords ("physical activity," "fitness," and "exercise") were independently entered into three search engines (Google, Bing, and Yahoo) to locate a nonprobability sample of websites (N = 61). Fourteen plain language techniques were coded within each website to examine content formatting, clarity and conciseness, and multimedia use. Approximately half (M = 6.59; SD = 1.68) of the plain language techniques were included in each website. The keyword "physical activity" returned websites with fewer clear and concise plain language techniques (p < .05), whereas "fitness" returned websites with more clear and concise techniques (p < .01). Plain language techniques did not vary by search engine or website host source. Accessing PA information that is easy to understand and behaviorally oriented may remain a challenge for users. Transdisciplinary collaborations are needed to optimize plain language techniques when communicating PA information online.
NASA Astrophysics Data System (ADS)
Muñoz-Martín, Alfonso; Antón, Loreto; Granja, Jose Luis; Villarroya, Fermín; Montero, Esperanza; Rodríguez, Vanesa
2016-04-01
Soil contamination can come from diffuse sources (air deposition, agriculture, etc.) or local sources, the latter related to anthropogenic activities with soil-contaminating potential. According to data from the EU, in Spain, and particularly in the Autonomous Community of Madrid, heavy metals, toxic organic compounds (including non-aqueous phase liquids, NAPLs), and combinations of both can be considered the main problem posed by point sources of soil contamination in our community. The five aspects that the CARESOIL Program (S2013/MAE-2739) will apply in the analysis and remediation of local soil contamination are: 1) location of the contamination source and characterization of the affected soil and aquifer; 2) evaluation of the dispersion of the plume; 3) application of effective remediation techniques; 4) monitoring of the evolution of the contaminated soil; and 5) risk analysis throughout this process. These aspects involve advanced technologies (hydrogeology, geophysics, geochemistry, etc.) that require new knowledge to be developed, making necessary the contribution of several research groups specialized in the fields cited above; these groups make up the CARESOIL Program. Currently, two cases concerning hydrocarbon spills, as representative examples of local soil contamination in the Madrid area, are being studied. The first is being remediated, and we are monitoring this process to evaluate its effectiveness. At the second location, we are delineating the extent of contamination in the soil and aquifer in order to select the most effective remediation technique.
Assessing and optimizing infrasound network performance: application to remote volcano monitoring
NASA Astrophysics Data System (ADS)
Tailpied, D.; LE Pichon, A.; Marchetti, E.; Kallel, M.; Ceranna, L.
2014-12-01
Infrasound is an efficient monitoring technique to remotely detect and characterize explosive sources such as volcanoes. Simulation methods incorporating realistic source and propagation effects have been developed to quantify the detection capability of any network. These methods can also be used to optimize the network configuration (number of stations, geographical location) in order to reduce detection thresholds, taking into account seasonal effects on infrasound propagation. Recent studies have shown that remote infrasound observations can provide useful information about eruption chronology and the released acoustic energy. Comparisons with near-field recordings allow evaluation of the potential of these observations to better constrain source parameters when other monitoring techniques (satellite, seismic, gas) are not available or cannot be used. Because of its regular activity, the well-instrumented Mount Etna is a unique natural repetitive source in Europe for testing and optimizing detection and simulation methods. The closest infrasound station belonging to the International Monitoring System, IS48, is located in Tunisia. In summer, during the downwind season, it allows unambiguous identification of signals associated with Etna eruptions. Under the European ARISE project (Atmospheric dynamics InfraStructure in Europe, FP7/2007-2013), experimental arrays have been installed to characterize infrasound propagation over different ranges of distance and direction. In addition, a small-aperture array, set up on the flank of the volcano by the University of Firenze, has been operating since 2007. Such an experimental setting offers an opportunity to address the societal benefits that can be achieved through routine infrasound monitoring.
Source Complexity of an Injection Induced Event: The 2016 Mw 5.1 Fairview, Oklahoma Earthquake
NASA Astrophysics Data System (ADS)
López-Comino, J. A.; Cesca, S.
2018-05-01
Complex rupture processes are occasionally resolved for weak earthquakes and can reveal a dominant direction of the rupture propagation and the presence and geometry of main slip patches. Finding and characterizing such properties could be important for understanding the nucleation and growth of induced earthquakes. One of the largest earthquakes linked to wastewater injection, the 2016 Mw 5.1 Fairview, Oklahoma earthquake, is analyzed using empirical Green's function techniques to reveal its source complexity. Two subevents are clearly identified and located using a new approach based on relative hypocenter-centroid location. The first subevent has a magnitude of Mw 5.0 and shows the main rupture propagated toward the NE, in the direction of higher pore pressure perturbations due to wastewater injection. The second subevent appears as an early aftershock with lower magnitude, Mw 4.7. It is located SW of the mainshock in a region of increased Coulomb stress, where most aftershocks relocated.
Self characterization of a coded aperture array for neutron source imaging
Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; ...
2014-12-15
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning DT plasma during the stagnation stage of ICF implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.
Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum
NASA Astrophysics Data System (ADS)
Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-11-01
The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and for three-dimensional Earth structure. Although similar methods have been proposed previously, they have not yet been applied to data to the best of our knowledge. We apply this method to image the seasonally varying sources of Earth's hum during North and South Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during North Hemisphere winter, as well as South Pacific coasts and several distinct locations in the Southern Ocean in South Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.
NASA Astrophysics Data System (ADS)
Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.
2016-12-01
Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights to minimize the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane due to shale gas extraction. Methane sensors were installed on four towers located in northeastern Pennsylvania and have measured atmospheric concentrations since May 2015. Inverse Lagrangian dispersion model runs are performed from each of these tower locations for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques, and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming, and wetlands. Initial estimates for each source are calculated based on emission factors from the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
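The optimization loop can be sketched as a toy genetic algorithm (a hedged illustration, not the authors' implementation: the footprint matrix, bounds, and GA operators are all assumptions). Simulated concentrations are the footprint matrix times the per-class rates, and the GA searches for rates that minimize the misfit to observations:

```python
import numpy as np

def ga_optimize_rates(footprints, observed, n_rates, n_pop=60, n_gen=200,
                      bounds=(0.0, 10.0), seed=0):
    """Toy genetic algorithm: find per-class emission rates so that
    footprints @ rates matches observed concentrations (minimum RMSE).

    footprints : (n_obs, n_rates) sensitivity of each observation to each class
    observed   : (n_obs,) measured concentrations
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_pop, n_rates))

    def rmse(p):  # misfit of every candidate in a population
        return np.sqrt(np.mean((footprints @ p.T - observed[:, None])**2, axis=0))

    for _ in range(n_gen):
        order = np.argsort(rmse(pop))
        elite = pop[order[: n_pop // 2]]             # selection: keep best half
        n_child = n_pop - len(elite)
        parents = elite[rng.integers(0, len(elite), size=(n_child, 2))]
        alpha = rng.random((n_child, n_rates))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        children += rng.normal(0, 0.05 * (hi - lo), children.shape)     # mutation
        pop = np.vstack([elite, np.clip(children, lo, hi)])
    return pop[np.argmin(rmse(pop))]
```

Keeping the elite unmutated makes the best misfit monotonically non-increasing across generations, which is a common design choice when the forward model (here a simple matrix product) is cheap to evaluate.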
Infrasound Observations from Lightning
NASA Astrophysics Data System (ADS)
Arechiga, R. O.; Johnson, J. B.; Edens, H. E.; Thomas, R. J.; Jones, K. R.
2008-12-01
To provide additional insight into the nature of lightning, we have investigated its infrasound manifestations. An array of three stations in a triangular configuration, with three sensors each, was deployed during the summer of 2008 (July 24 to July 28) in the Magdalena mountains of New Mexico to monitor infrasound (below 20 Hz) sources due to lightning. Hyperbolic formulations of time-of-arrival (TOA) measurements and interferometric techniques were used to locate lightning sources occurring over and outside the network. A comparative analysis was made of infrasound measurements and simultaneous Lightning Mapping Array (LMA) data collected in the same area. The LMA locates the sources of impulsive RF radiation produced by lightning flashes in three spatial dimensions and time, operating in the 60-66 MHz television band. The comparison showed strong evidence that lightning does produce infrasound. This work is a continuation of the study of the frequency spectrum of thunder conducted by Holmes et al., who reported measurements of infrasound frequencies. The integration of infrasound measurements with RF source localization by the LMA shows great potential for improved understanding of lightning processes.
Gelfusa, M; Gaudio, P; Malizia, A; Murari, A; Vega, J; Richetta, M; Gonzalez, S
2014-06-01
Recently, surveying large areas automatically for early detection of both harmful chemical agents and forest fires has become a strategic objective of defence and public health organisations. The lidar and DIAL techniques are widely recognized as a cost-effective alternative for monitoring large portions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi Event Locator, is applied to the problem of automatically identifying the time location of peaks in lidar and DIAL measurements for environmental physics applications. This analysis technique improves various aspects of the measurements, ranging from resilience to drift in the laser sources to increased system sensitivity. The method is also fully general and purely software-based, and can therefore be applied to a large variety of problems without additional cost. The potential of the proposed technique is exemplified with data from various instruments acquired during several experimental campaigns in the field.
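The Universal Multi Event Locator itself is a specific published method; as a generic stand-in, the peak-time-location task it addresses can be sketched with a robust-threshold local-maximum detector (all names and the MAD-based threshold are illustrative assumptions, not the UMEL algorithm):

```python
import numpy as np

def locate_peaks(signal, times, n_sigma=4.0):
    """Locate the times of peaks in a lidar-like time series: samples
    that exceed a noise-based threshold and are local maxima.

    Threshold = median + n_sigma * robust noise estimate (MAD-based).
    """
    noise = 1.4826 * np.median(np.abs(signal - np.median(signal)))  # robust sigma
    thresh = np.median(signal) + n_sigma * noise
    is_peak = ((signal[1:-1] > signal[:-2]) &       # rises from the left
               (signal[1:-1] >= signal[2:]) &       # falls (or is flat) to the right
               (signal[1:-1] > thresh))             # clears the noise floor
    return times[1:-1][is_peak]
```

A robust (median/MAD) threshold rather than a mean/standard-deviation one is what gives such detectors resilience to baseline drift, the property the abstract highlights.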
NASA Astrophysics Data System (ADS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.
2014-04-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self-consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
Reconstruction of source location in a network of gravitational wave interferometric detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavalier, Fabien; Barsuglia, Matteo; Bizouard, Marie-Anne
2006-10-15
This paper deals with the reconstruction of the direction of a gravitational wave source using the detection made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied on the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed using a χ² minimization technique. For reasonably large signals (SNR > 4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1°. We also investigate the effect of the network geometry assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions, and in the case of 6 detectors a precision better than 1° on the source direction can be reached for 99% of the sky.
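The timing-based reconstruction can be sketched as a χ² fit of inter-detector arrival-time delays against a plane-wave model, here done by a simple grid search over sky directions. This is a minimal stand-in for the paper's minimization; the detector coordinates, timing error and grid resolution are illustrative assumptions, not the actual network geometry.

```python
import math

C = 299792458.0  # gravitational waves propagate at the speed of light (m/s)

def unit_vector(theta, phi):
    """Cartesian unit vector for polar angle theta and azimuth phi."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def predicted_delays(direction, detectors):
    """Plane-wave arrival delay at each detector: projection of the
    detector position onto the propagation direction, divided by c."""
    return [sum(d * x for d, x in zip(direction, det)) / C for det in detectors]

def chi2_locate(observed_delays, detectors, sigma, n_grid=90):
    """Grid-search the sky for the direction minimizing the timing chi^2."""
    best = (float("inf"), None)
    for i in range(n_grid):
        theta = math.pi * (i + 0.5) / n_grid
        for j in range(2 * n_grid):
            phi = 2.0 * math.pi * j / (2 * n_grid)
            model = predicted_delays(unit_vector(theta, phi), detectors)
            # subtract the mean offset: the absolute arrival time is unknown,
            # only relative delays between detectors carry information
            off = sum(o - m for o, m in zip(observed_delays, model)) / len(model)
            chi2 = sum(((o - m - off) / sigma) ** 2
                       for o, m in zip(observed_delays, model))
            if chi2 < best[0]:
                best = (chi2, (theta, phi))
    return best[1]
```

With only three detectors the fit has a mirror ambiguity about the detector plane, which matches the degraded accuracy the abstract reports near that plane; a fourth, non-coplanar detector removes it, as in the 6-detector case discussed above.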
NASA Astrophysics Data System (ADS)
Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio
2005-10-01
This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted to a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance the accuracy of focal source localization, which is essential in many clinical and neurological applications of MEG and EEG.
Towards 3D Noise Source Localization using Matched Field Processing
NASA Astrophysics Data System (ADS)
Umlauft, J.; Walter, F.; Lindner, F.; Flores Estrella, H.; Korn, M.
2017-12-01
Matched Field Processing (MFP) is an array-processing and beamforming method, initially developed in ocean acoustics, that locates noise sources in range, depth and azimuth. In this study, we discuss the applicability of MFP to geophysical problems on the exploration scale and its suitability as a monitoring tool for near-surface processes. First, we used synthetic seismograms to analyze the resolution and sensitivity of MFP in a 3D environment. The inversion shows how the localization accuracy is affected by the array design, pre-processing techniques, the velocity model and the considered wave field characteristics. Hence, we can formulate guidelines for improved MFP handling. Additionally, we present field datasets acquired from two different environmental settings and in the presence of different source types. Small-scale, dense-aperture arrays (Ø < 1 km) were installed on a natural CO2 degassing field (Czech Republic) and on a glacier site (Switzerland). The located noise sources form distinct three-dimensional zones and channel-like structures (several 100 m depth range), which could be linked to the expected environmental processes taking place at each test site. Furthermore, fast spatio-temporal variations (hours to days) of the source distribution could be successfully monitored.
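The core of MFP can be conveyed with a minimal single-frequency Bartlett processor: build phase-delay "replica" vectors from a propagation model for every candidate source point on a 3D grid, and pick the point whose replica correlates best with the measured array data. The homogeneous velocity, frequency and geometry below are invented for illustration; a field implementation would use cross-spectral density matrices and a realistic velocity model.

```python
import cmath
import math

def replica(src, sensors, c, f):
    """Unit-norm phase-delay vector ('replica') for a candidate source,
    assuming a homogeneous medium with wave speed c at frequency f."""
    phases = [cmath.exp(-2j * math.pi * f * math.dist(src, s) / c)
              for s in sensors]
    norm = math.sqrt(len(phases))
    return [p / norm for p in phases]

def bartlett_locate(data, sensors, grid, c, f):
    """Bartlett MFP: correlate the measured phases with replicas over a
    3D search grid and return the best-matching grid point."""
    out = []
    for g in grid:
        w = replica(g, sensors, c, f)
        power = abs(sum(wi.conjugate() * di for wi, di in zip(w, data)))
        out.append((power, g))
    return max(out)[1]
```

Because the replicas carry depth-dependent phase structure, a surface array can resolve depth as well as horizontal position, which is what distinguishes MFP from plane-wave beamforming.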
Studies of solar flares: Homology and X-ray line broadening
NASA Astrophysics Data System (ADS)
Ranns, Neale David Raymond
This thesis starts with an introduction to the solar atmosphere and the physics that governs its behaviour. The formation processes of spectral lines are presented, followed by an explanation of employed plasma diagnostic techniques and line broadening mechanisms. The current understanding of some principal concepts of flare physics is reviewed and the topics of flare homology and non-thermal line broadening are introduced. The many solar satellites and instrumentation that were utilised during this thesis are described. Analysis techniques for some instruments are also presented. A series of solar flares that conform to the literature definition for homologous flares are examined. The apparent homology is shown to be caused by emerging flux rather than continual stressing of a single magnetic structure, or group of structures. The implications for flare homology are discussed. The analysis of a solar flare with a rise and peak in the observed non-thermal X-ray line broadening (Vnt) is then performed. The location of the hot plasma within the flare area is determined and consequently the source of Vnt is located to be within and above the flare loops. The flare footpoints are therefore discarded as a possible source location. Viable source locations are discussed with a view to determining the dominant mechanism for the generation of line broadening. The timing relationships between the hard X-ray (HXR) flux and Vnt in many solar flares are then examined. I show that there is a causal relationship between these two parameters and that the HXR rise time is related to the time delay between the maxima of HXR flux and Vnt. The temporal evolution of Vnt is shown to be dependent upon the shape of the HXR burst. The implications of these results are discussed in terms of determining the line broadening mechanism and the limitations of the data. A summary of the results in this thesis is then presented together with suggestions for future research.
On The Source Of The 25 November 1941 - Atlantic Tsunami
NASA Astrophysics Data System (ADS)
Baptista, M. A.; Lisboa, F. B.; Miranda, J. M. A.
2015-12-01
In this study we analyze the tsunami recorded in the North Atlantic following the 25 November 1941 earthquake. The earthquake, with a magnitude of 8.3, located on the Gloria Fault, was one of the largest strike-slip events ever recorded. The Gloria Fault is a 500 km long scarp in the North Atlantic Ocean between 19W and 24W, known to be a segment of the Eurasia-Nubia plate boundary between Iberia and the Azores. Ten tide stations recorded the tsunami: six in Portugal (mainland, Azores and Madeira Islands), two in Morocco, one in the United Kingdom and one in Spain (Tenerife, Canary Islands). The tsunami waves reached the Azores and Madeira Islands less than one hour after the main shock. The tide station of Casablanca (in Morocco) recorded the maximum amplitude of 0.54 m. All other amplitudes recorded are lower than 0.5 m, but the tsunami reached mainland Portugal in high-tide conditions, where the sea flooded some streets. We analyze the 25 November 1941 tsunami data using the tide records on the coasts of Portugal, Spain, Morocco and the UK to infer its source. The use of wavelet analysis to characterize the frequency content of the tide records shows predominant periods of 9-13 min and 18-22 min. A preliminary tsunami source location was obtained using Backward Ray Tracing (BRT). The results of the BRT technique are compatible with the epicenter location of the earthquake. We compute empirical Green functions for the earthquake generation area, and use a linear shallow water inversion technique to compute the initial water displacement. The comparison between forward modeling and observations shows a fair agreement with available data. This work received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).
Reconstruction of reflectance data using an interpolation technique.
Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh
2009-03-01
A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of applied color datasets as well as employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of available samples in the dataset. The resultant spectra that have been reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra as well as CIELAB color differences under the other light sources in comparison with those obtained from the standard PCA technique.
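The LUT lookup can be sketched as follows. For brevity this uses inverse-distance weighting over the nearest LUT entries in the colorimetric space, a simplified stand-in for the paper's linear interpolation; the colorimetric values and spectra below are invented. As the abstract notes, the query point should lie inside the gamut spanned by the dataset, since the scheme only interpolates, never extrapolates.

```python
def reconstruct_spectrum(target_xyz, lut, k=4, eps=1e-9):
    """Estimate a reflectance spectrum from tristimulus values.

    `lut` is a list of (xyz, spectrum) pairs linking the source
    (colorimetric) and destination (spectral) spaces. The estimate is an
    inverse-distance-weighted blend of the k nearest neighbours; an exact
    match in the LUT dominates the weights and is returned almost verbatim.
    """
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(lut, key=lambda entry: d2(entry[0], target_xyz))[:k]
    weights = [1.0 / (d2(xyz, target_xyz) + eps) for xyz, _ in neighbours]
    total = sum(weights)
    n_bands = len(neighbours[0][1])
    return [sum(w * spec[i] for w, (_, spec) in zip(weights, neighbours)) / total
            for i in range(n_bands)]
```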
Sherlin, Leslie; Congedo, Marco
2005-10-21
Electroencephalographic mapping techniques have been used to show differences between normal subjects and those diagnosed with various mental disorders. To date, there is no other research using the techniques of low-resolution brain electromagnetic tomography (LORETA) with the obsessive-compulsive disorder (OCD) population. The current investigation compares current source density measures of persons with OCD symptoms to an age-matched control group. The main finding is excess current source density in the Beta frequencies in the cingulate gyrus. This Beta activity is primarily located in the middle cingulate gyrus as well as adjacent frontal parieto-occipital regions. Lower frequency Beta is prominent more anteriorly in the cingulate gyrus whereas higher frequency Beta is seen more posteriorly. These preliminary findings indicate the utility of LORETA as a clinical and diagnostic tool.
Utilizing the N beam position monitor method for turn-by-turn optics measurements
NASA Astrophysics Data System (ADS)
Langner, A.; Benedetti, G.; Carlà, M.; Iriso, U.; Martí, Z.; de Portugal, J. Coello; Tomás, R.
2016-09-01
The N beam position monitor method (N-BPM), which was recently developed for the LHC, has significantly improved the precision of optics measurements that are based on BPM turn-by-turn data. The main improvement is due to the consideration of correlations for statistical and systematic error sources, as well as increasing the number of BPM combinations which are used to derive the β-function at one location. We present how this technique can be applied at light sources like ALBA, and compare the results with other methods.
Instrumentation for localized superconducting cavity diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conway, Z. A.; Ge, M.; Iwashita, Y.
2017-01-12
Superconducting accelerator cavities are now routinely operated at levels approaching the theoretical limit of niobium. To achieve these operating levels, more information than is available from the RF excitation signal is required to characterize and determine fixes for the sources of performance limitations. This information is obtained using diagnostic techniques which complement the analysis of the RF signal. In this paper we describe the operation of, and selected results from, three of these diagnostic techniques: the use of large-scale thermometer arrays, second sound wave defect location, and high-precision cavity imaging with the Kyoto camera.
NASA Technical Reports Server (NTRS)
Smathers, J. B.; Kuykendall, W. E., Jr.; Wright, R. E., Jr.; Marshall, J. R.
1973-01-01
Radioisotope measurement techniques and neutron activation analysis are evaluated for use in identifying and locating contamination sources in space environment simulation chambers. The alpha range method allows the determination of total contaminant concentration in vapor state and condensate state. A Cf-252 neutron activation analysis system for detecting oils and greases tagged with stable elements is described. While neutron activation analysis of tagged contaminants offers specificity, an on-site system is extremely costly to implement and provides only marginal detection sensitivity under even the most favorable conditions.
Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique
NASA Technical Reports Server (NTRS)
Tiampo, Kristy F.
1999-01-01
In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
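A minimal real-coded GA for this kind of inversion might look as follows (the original was written in C; this Python sketch uses an assumed Poisson ratio, invented bounds and GA hyper-parameters, and the classic Mogi half-space vertical-displacement formula as the forward model inside the fitness function):

```python
import math
import random

NU = 0.25  # Poisson ratio of the elastic half-space (assumed)

def mogi_uz(xs, ys, src):
    """Vertical surface displacement of a Mogi point source.

    src = (x0, y0, depth, dV); uz = (1 - nu) * dV / pi * d / (d^2 + r^2)^1.5.
    """
    x0, y0, d, dv = src
    return [(1.0 - NU) * dv / math.pi * d /
            (d * d + (x - x0) ** 2 + (y - y0) ** 2) ** 1.5
            for x, y in zip(xs, ys)]

def misfit(src, xs, ys, obs):
    """Sum of squared residuals between modelled and observed uplift."""
    return sum((m - o) ** 2 for m, o in zip(mogi_uz(xs, ys, src), obs))

def ga_invert(xs, ys, obs, bounds, pop_size=40, gens=60, seed=1):
    """Real-coded GA: elitism, tournament selection, blend crossover,
    clamped Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = lambda ind: misfit(ind, xs, ys, obs)
    for _ in range(gens):
        pop.sort(key=fit)
        nxt = pop[:2]                                # keep the two best
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fit)     # tournament selection
            b = min(rng.sample(pop, 3), key=fit)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):    # Gaussian mutation
                if rng.random() < 0.2:
                    child[i] += rng.gauss(0.0, 0.1 * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fit)
```

Because the GA only ever evaluates the forward model, the same machinery extends directly to multiple sources by enlarging the parameter vector, as the original program did.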
Tanter, M; Thomas, J L; Fink, M
1998-05-01
The time-reversal process is applied to focus pulsed ultrasonic waves through the human skull bone. The aim here is to treat brain tumors, which are difficult to reach with classical surgery means. Such a surgical application requires precise control of the size and location of the therapeutic focal beam. The severe ultrasonic attenuation in the skull reduces the efficiency of the time reversal process. Nevertheless, an improvement of the time reversal process in absorbing media has been investigated and applied to the focusing through the skull [J.-L. Thomas and M. Fink, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 43, 1122-1129 (1996)]. Here an extension of this technique is presented in order to focus on a set of points surrounding an initial artificial source implanted in the tissue volume to treat. From the knowledge of the Green's function matched to this initial source location a new Green's function matched to various points of interest is deduced in order to treat the whole volume. In a homogeneous medium, conventional steering consists of tilting the wave front focused on the acoustical source. In a heterogeneous medium, this process is only valid for small angles or when aberrations are located in a layer close to the array. It is shown here how to extend this method to aberrating and absorbing layers, like the skull bone, located at any distance from the array of transducers.
Further Progress in Noise Source Identification in High Speed Jets via Causality Principle
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.; Elam, K. A.
2004-01-01
To locate noise sources in high-speed jets, the sound pressure fluctuations p′, measured at far-field locations, were correlated with each of the density ρ, axial velocity u, radial velocity v, ρuu and ρvv fluctuations measured at various points in fully expanded, unheated plumes of Mach number 0.95, 1.4 and 1.8. The velocity and density fluctuations were measured simultaneously using a recently developed, non-intrusive, point measurement technique based on molecular Rayleigh scattering (Seasholtz, Panda, and Elam, AIAA Paper 2002-0827). The technique uses a continuous wave, narrow line-width laser, Fabry-Perot interferometer and photon counting electronics. The far-field sound pressure fluctuations at 30° to the jet axis provided the highest correlation coefficients with all flow variables. The correlation coefficients decreased sharply with increased microphone polar angle, and beyond about 60° all correlations mostly fell below the experimental noise floor. Among all correlations, ⟨ρuu, p′⟩ showed the highest values.
Rostad, C.E.
2006-01-01
Polar components in fuels may enable differentiation between fuel types or commercial fuel sources. A range of commercial fuels from numerous sources were analyzed by flow injection analysis/electrospray ionization/mass spectrometry without extensive sample preparation, separation, or chromatography. This technique enabled screening for unique polar components at parts per million levels in commercial hydrocarbon products, including a range of products from a variety of commercial sources and locations. Because these polar compounds are unique in different fuels, their presence may provide source information on hydrocarbons released into the environment. This analysis was then applied to mixtures of various products, as might be found in accidental releases into the environment. Copyright © Taylor & Francis Group, LLC.
NASA Astrophysics Data System (ADS)
Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien
2011-06-01
A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. 
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).
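The bacterial-chemotaxis idea behind the localization algorithm can be sketched as a run-and-tumble search on a sensed intensity field. The field model, step size, iteration count and the greedy acceptance rule below are invented for illustration; the actual system fuses readings from multiple unattended ground sensors rather than sampling a continuous field.

```python
import math
import random

def run_and_tumble(sense, start, steps=500, step_len=0.5, seed=7):
    """Run-and-tumble search: keep the current heading while the sensed
    intensity improves ('run'), pick a new random heading when a trial
    step would make it drop ('tumble'), mimicking bacteria climbing a
    nutrient gradient toward its source."""
    rng = random.Random(seed)
    x, y = start
    heading = rng.uniform(0.0, 2.0 * math.pi)
    level = sense(x, y)
    for _ in range(steps):
        nx = x + step_len * math.cos(heading)
        ny = y + step_len * math.sin(heading)
        new_level = sense(nx, ny)
        if new_level >= level:          # run: intensity did not drop
            x, y, level = nx, ny, new_level
        else:                           # tumble: try a new direction
            heading = rng.uniform(0.0, 2.0 * math.pi)
    return x, y
```

With a smooth intensity field peaking at the source, the walk homes in on the peak without ever needing gradient measurements, only scalar readings, which is what makes the scheme attractive for sparse sensor data.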
NASA Astrophysics Data System (ADS)
Karabelchtchikova, Olga; Rivero, Iris V.
2005-02-01
The distribution of residual stresses (RS) and surface integrity generated in heat treatment and subsequent multipass grinding was investigated in this experimental study to examine the source of variability and the nature of the interactions of the experimental factors. A nested experimental design was implemented to (a) compare the sources of the RS variability, (b) examine the RS distribution and tensile peak location due to the experimental factors, and (c) analyze the superposition relationship in the RS distribution due to the multipass grinding technique. To characterize the material responses, several techniques were used, including microstructural analysis, hardness-toughness and roughness examinations, and retained austenite and RS measurements using x-ray diffraction. The causality of the RS was explained through the strong correlation of the surface integrity characteristics and RS patterns. The main sources of variation were the depth of the RS distribution and the multipass grinding technique. The grinding effect on the RS was statistically significant; however, it was mostly predetermined by the preexisting RS induced in heat treatment. Regardless of the preceding treatments, the effect of the multipass grinding technique exhibited similar RS patterns, which suggests the existence of the superposition relationship and orthogonal memory between the passes of the grinding operation.
NASA Astrophysics Data System (ADS)
Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten
2017-02-01
Global earthquake locations are often associated with very large systematic travel-time residuals even for clear arrivals, especially for regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adopted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ≈20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs.
Comparison with a high-resolution local catalogue reveals that our relocation process significantly improves the hypocentre locations compared to standard locations.
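The essence of source-specific station terms can be sketched in a few lines: for every event-station pair, the timing correction is the median residual of neighbouring events observed at the same station. A single fixed neighbourhood radius stands in here for the shrinking-box iteration of the full method, and all data in the example are synthetic.

```python
from statistics import median

def source_specific_terms(events, stations, residual, radius):
    """Source-position-dependent timing correction per station.

    `events` maps event id -> (x, y, z) hypocentre; `residual(ev, st)` is
    the observed-minus-predicted travel time. The correction for (ev, st)
    is the median residual, at that station, of all events within `radius`
    of ev, so it absorbs path anomalies shared by nearby sources without
    solving for a velocity model.
    """
    def near(a, b):
        return sum((p - q) ** 2 for p, q in zip(events[a], events[b])) <= radius ** 2
    terms = {}
    for ev in events:
        neighbours = [e for e in events if near(ev, e)]
        for st in stations:
            terms[(ev, st)] = median(residual(e, st) for e in neighbours)
    return terms
```

In the toy test below the residual field has a position-dependent bias plus small per-event scatter; subtracting the terms collapses the bias, mirroring the 40-60 per cent reduction in residual spread reported above.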
Sources of variability in collection and preparation of paint and lead-coating samples.
Harper, S L; Gutknecht, W F
2001-06-01
Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that data generated will be adequate for its intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces including metal, plaster, concrete, and wood. A variety of grinding techniques were compared. Manual mortar and pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate similar homogenous particle size distributions required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar and pestle grinding, 4% had 20% or greater loss with a high of 41%. Homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Analyses of samples from different locations on apparently identical surfaces were found to vary by more than a factor of two both in Pb concentration (mg cm⁻² or %) and areal coating density (g cm⁻²). Analyses of substrates were performed to determine the Pb remaining after coating removal. Levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg cm⁻² Pb. In conclusion, these sources of variability must be considered in development and/or application of any sampling and analysis methodologies.
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have been widely applied to the field of image forensics recently. One of these forensic applications is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that, once the tampering location is known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the present proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by check bits help the channel erasure decoder to retrieve the original source encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered image. The watermarked image quality gain is achieved through spending less bit-budget on watermark, while image recovery quality is considerably improved as a consequence of consistent performance of designed source and channel codes.
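The erasure-error idea can be illustrated with a toy XOR parity scheme, a deliberately simplified stand-in for the paper's channel code: once the check bits reveal which block of reference bits was tampered, that known-location erasure is repaired from the surviving blocks.

```python
from functools import reduce

def xor_blocks(a, b):
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(blocks):
    """Append one XOR parity block; any single block whose position is
    known can later be rebuilt from the others (a rate (n-1)/n toy code)."""
    return blocks + [reduce(xor_blocks, blocks)]

def recover(blocks, erased):
    """Rebuild the block at index `erased` (the tampered zone, as flagged
    by the check bits) by XOR-ing all surviving blocks together."""
    survivors = [b for i, b in enumerate(blocks) if i != erased]
    return reduce(xor_blocks, survivors)
```

A single parity block only repairs one erasure; the paper's point is precisely that a properly designed channel code can be matched to the expected tampering rate, trading parity budget against recoverable area.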
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical-systems (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field. PMID:24463431
Testing contamination source identification methods for water distribution networks
Seth, Arpan; Klise, Katherine A.; Siirola, John D.; ...
2016-04-01
In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. The various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. This manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors including the size of the WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods and a testing framework that includes the test cases and analysis tools presented in this paper have been integrated into EPA’s Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.
NASA Astrophysics Data System (ADS)
Bowman, A.; Cardace, D.; August, P.
2012-12-01
Springs sourced in the mantle units of ophiolites serve as windows to the deep biosphere, and thus hold promise in elucidating survival strategies of extremophiles, and may also inform discourse on the origin of life on Earth. Understanding how organisms can survive in extreme environments provides clues to how microbial life responds to gradients in pH, temperature, and oxidation-reduction potential. Spring locations associated with serpentinites have traditionally been located using a variety of field techniques. The aqueous alteration of ultramafic rocks to serpentinites is accompanied by the production of very unusual formation fluids, accessed by drilling into subsurface flow regimes or by sampling at related surface springs. The chemical properties of these springs are unique to water associated with actively serpentinizing rocks; they reflect a reducing subsurface environment reacting at low temperatures producing high pH, Ca-rich formation fluids with high dissolved hydrogen and methane. This study applies GIS site suitability analysis to locate high pH springs upwelling from Coast Range Ophiolite serpentinites in Northern California. We used available geospatial data (e.g., geologic maps, topography, fault locations, known spring locations, etc.) and ArcGIS software to predict new spring localities. Important variables in the suitability model were: (a) bedrock geology (i.e., unit boundaries and contacts for peridotite, serpentinite, possibly pyroxenite, or chromite), (b) fault locations, (c) regional data for groundwater characteristics such as pH, Ca2+, and Mg2+, and (d) slope-aspect ratio. The GIS model derived from these geological and environmental data sets predicts the latitude/longitude points for novel and known high pH springs sourced in serpentinite outcrops in California. 
Field work confirms the success of the model, and map output can be merged with published environmental microbiology data (e.g., occurrence of hydrogen-oxidizers) to showcase patterns in microbial community structure. Discrepancies between predicted and actual spring locations are then used to tune GIS suitability analysis, re-running the model with corrected geo-referenced data. This presentation highlights a powerful GIS-based technique for accelerating field exploration in this area of ongoing research.
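As a hedged sketch of the overlay logic described above, a GIS suitability model can be expressed as a weighted sum of raster layers; the layer values and weights below are invented for illustration and are not the study's actual data or weighting.

```python
# Hedged sketch of a raster suitability overlay in the spirit of the model:
# each layer is a binary raster (1 = favourable), combined with assumed
# weights; high-scoring cells are candidate spring localities.
import numpy as np

serpentinite = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])  # mapped bedrock
near_fault   = np.array([[0, 1, 1], [1, 1, 0], [0, 1, 0]])  # fault buffer
high_ph_gw   = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 1]])  # groundwater pH

weights = {"rock": 0.5, "fault": 0.3, "gw": 0.2}            # assumed weights
score = (weights["rock"] * serpentinite
         + weights["fault"] * near_fault
         + weights["gw"] * high_ph_gw)

# candidate spring cells: suitability above a chosen threshold
candidates = np.argwhere(score >= 0.9)
print(candidates)  # only the cell satisfying all three layers remains
```

In a real workflow each layer would be derived in ArcGIS (buffering fault traces, rasterizing geologic unit polygons, interpolating groundwater chemistry) before the overlay step.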
D.M. Olson; T.J. Griffis; A. Noormets; R. Kolka; J. Chen
2013-01-01
Three years (2009-2011) of near-continuous methane (CH4) and carbon dioxide (CO2) fluxes were measured with the eddy covariance (EC) technique at a temperate peatland located within the Marcell Experimental Forest, in northern Minnesota, USA. The peatland was a net source of CH4 and a net sink of CO...
Open-Source Data Collection Techniques for Weapons Transfer Information
2012-03-01
[Front-matter fragments extracted from the report: an acronym glossary (IR, ISO, ITAR, NER, NLP, UAE, URI, URL, USSR, UTF, ...) and a country-name normalization table (e.g., "KOREA, DEMOCRATIC PEOPLE'S REPUBLIC OF" -> North Korea; "KOREA, REPUBLIC OF" -> South Korea; "LIBYAN ARAB JAMAHIRIYA" -> Libya; "RUSSIAN FEDERATION" -> Russia; from Table 3).]
1977-02-22
[Extracted report fragments: acoustic results from the Learjet and NASA-Lewis F-106 aircraft flyovers and the French Aerotrain tests, taken with baseline, 8-lobe, and 104-... nozzle configurations, are included; comparisons between Aerotrain data and transformed free-jet data are presented for three primary jet velocities and two flight velocities for the three nozzle types.]
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo
2016-01-01
Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source imaging algorithms to both find the network nodes (regions of interest) and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ~20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combined source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity).
Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin
2016-12-01
Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source-imaging algorithms to both find the network nodes [regions of interest (ROI)] and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or Magnetoencephalography (MEG). Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ∼20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combined source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
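The coupling of extracted time-courses with Granger analysis can be sketched in miniature; this is a pairwise illustration on synthetic series, not the authors' estimation pipeline, and the coupling strength is invented.

```python
# Hedged sketch of pairwise Granger causality on two extracted ROI
# time-courses: x "Granger-causes" y if adding lagged x reduces the
# prediction-error variance of an autoregressive model of y.
import numpy as np

rng = np.random.default_rng(0)
n, lag = 2000, 1
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                 # y is driven by the past of x
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, predictors):
    """Least-squares AR fit; returns variance of the residuals."""
    coef, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ coef)

Y, Yl, Xl = y[lag:], y[:-lag, None], x[:-lag, None]
v_self = residual_var(Y, Yl)                     # y's own past only
v_full = residual_var(Y, np.hstack([Yl, Xl]))    # plus x's past
gc_x_to_y = np.log(v_self / v_full)              # > 0: x helps predict y
assert gc_x_to_y > 0.5
```

In practice the fits use higher model orders and significance testing, and are run over all pairs of ROI time-courses to build the directed network graph.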
NASA Astrophysics Data System (ADS)
Munafo, I.; Malagnini, L.; Tinti, E.; Chiaraluce, L.; Di Stefano, R.; Valoroso, L.
2014-12-01
The Alto Tiberina Fault (ATF) is a 60 km long east-dipping low-angle normal fault, located in a sector of the Northern Apennines (Italy) undergoing active extension since the Quaternary. The ATF has been imaged by analyzing active-source seismic reflection profiles and the instrumentally recorded persistent background seismicity. The present study is an attempt to separate the contributions of source, site, and crustal attenuation, in order to focus on the mechanics of the seismic sources on the ATF, as well as on the synthetic and antithetic structures within the ATF hanging-wall (i.e. the Colfiorito, Gubbio, and Umbria Valley faults). In order to compute source spectra, we perform a set of regressions over the seismograms of 2000 small earthquakes (-0.8 < ML < 4) recorded between 2010 and 2014 at 50 permanent seismic stations deployed in the framework of the Alto Tiberina Near Fault Observatory project (TABOO) and equipped with three-component seismometers, three of which are located in shallow boreholes. Because we deal with some very small earthquakes, we maximize the signal-to-noise ratio (SNR) with a technique based on the analysis of peak values of bandpass-filtered time histories, in addition to the same processing performed on Fourier amplitudes. We rely on a tool called Random Vibration Theory (RVT) to convert peak values in the time domain into Fourier spectral amplitudes. The low-frequency spectral plateaus of the source terms are used to compute moment magnitudes (Mw) of all the events, whereas a source spectral ratio technique is used to estimate the corner frequencies (Brune spectral model) of a subset of events chosen based on an analysis of the noise affecting the spectral ratios. So far, the described approach provides high accuracy for the spectral parameters of this localized seismicity, and may be used to gain insights into the underlying mechanics of faulting and the earthquake processes.
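The corner-frequency step can be illustrated with a minimal Brune-model fit; the spectrum below is synthetic and the grid search is a sketch, not the regression scheme used in the study.

```python
# Hedged sketch of fitting a Brune source model to a displacement source
# spectrum: Omega(f) = Omega0 / (1 + (f/fc)^2). Grid search over corner
# frequency fc; the plateau Omega0 follows by linear least squares.
import numpy as np

f = np.logspace(-1, 2, 200)                      # 0.1-100 Hz
fc_true, omega0_true = 5.0, 2.0e-3               # invented "observed" values
spec = omega0_true / (1.0 + (f / fc_true) ** 2)

best = None
for fc in np.logspace(-1, 2, 400):
    shape = 1.0 / (1.0 + (f / fc) ** 2)
    omega0 = np.dot(spec, shape) / np.dot(shape, shape)  # LSQ plateau
    err = np.sum((spec - omega0 * shape) ** 2)
    if best is None or err < best[0]:
        best = (err, fc, omega0)

_, fc_hat, omega0_hat = best
assert abs(fc_hat - fc_true) / fc_true < 0.05
```

With the plateau converted to seismic moment M0 (after geometric-spreading and radiation-pattern corrections), the moment magnitude follows from the standard relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m.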
Tracing nitrates and sulphates in river basins using isotope techniques.
Rock, L; Mayer, B
2006-01-01
The objective of this paper is to outline how stable isotope techniques can contribute to the elucidation of the sources and the fate of riverine nitrate and sulphate in watershed studies. The example used is the Oldman River Basin (OMRB), located in southern Alberta (Canada). Increasing sulphate concentrations and decreasing δ34S values along the flowpath of the Oldman River indicate that oxidation of pyrite in tills is a major source of riverine sulphate in the agriculturally used portion of the OMRB. Chemical and isotopic data showed that manure-derived nitrogen contributes significantly to the increase in nitrate concentrations in the Oldman River and its tributaries draining agricultural land. It is suggested that hydrological conditions control agricultural return flows to the surface water bodies in southern Alberta and impart significant seasonal variations on concentrations and isotopic compositions of riverine nitrate. Combining isotopic, chemical, and hydrometric data permitted us to estimate the relative contribution of major sources to the total solute fluxes. Hence, we submit that isotopic measurements can make an important contribution to the identification of nutrient and pollutant sources and to river basin management.
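The source-attribution arithmetic behind such studies is often a two-end-member isotope mixing model; the delta values below are illustrative, not OMRB measurements.

```python
# Hedged sketch of the standard two-end-member isotope mixing calculation
# used in source-apportionment studies of riverine solutes.

def mixing_fraction(delta_mix, delta_a, delta_b):
    """Fraction of source A in a mixture of A and B (isotope mass balance)."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# e.g. riverine sulphate between a pyrite-derived end member (-10 permil)
# and an evaporite-derived end member (+15 permil); values are invented
f_pyrite = mixing_fraction(delta_mix=-5.0, delta_a=-10.0, delta_b=15.0)
print(round(f_pyrite, 2))  # 0.8
```

With more than two candidate sources, the same mass balance is written for several isotope/chemistry tracers simultaneously and solved as a (often over-determined) linear system.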
Locating hydrothermal acoustic sources at Old Faithful Geyser using Matched Field Processing
NASA Astrophysics Data System (ADS)
Cros, E.; Roux, P.; Vandemeulebrouck, J.; Kedar, S.
2011-10-01
In 1992, a large and dense array of geophones was placed around the geyser vent of Old Faithful, in Yellowstone National Park, to determine the origin of the seismic hydrothermal noise recorded at the surface of the geyser and to understand its dynamics. Old Faithful Geyser (OFG) is a small-scale hydrothermal system where a two-phase flow mixture erupts every 40 to 100 min in a high, continuous vertical jet. Using Matched Field Processing (MFP) techniques on 10-min-long signals, we localize the source of the seismic pulses recorded at the surface of the geyser. Several MFP approaches are compared in this study: the frequency-incoherent and frequency-coherent approaches, as well as linear Bartlett processing and non-linear Minimum Variance Distortionless Response (MVDR) processing. The different MFP techniques used give the same source position, with sharper focusing in the case of MVDR processing. The retrieved source position corresponds to the geyser conduit at a depth of 12 m, and the localization is in good agreement with in situ measurements made at Old Faithful in past studies.
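A Bartlett matched-field processor of the kind compared here can be sketched in a few lines; the geometry, frequency, and wave speed below are toy assumptions on a 1-D line of sensors, not the Old Faithful velocity model.

```python
# Hedged sketch of a Bartlett matched-field processor: phase-match the
# sensor data against modelled travel-time delays for each candidate
# source position, and take the position that maximizes the output power.
import numpy as np

c, freq = 1500.0, 40.0                      # assumed speed (m/s) and Hz
sensors = np.array([0.0, 5.0, 10.0, 15.0])  # sensor positions (m)
src_true = 12.0                             # hidden source position

omega = 2 * np.pi * freq
d = np.exp(-1j * omega * np.abs(sensors - src_true) / c)  # "measured" phases

grid = np.linspace(0.0, 20.0, 201)
power = []
for s in grid:
    w = np.exp(-1j * omega * np.abs(sensors - s) / c) / np.sqrt(len(sensors))
    power.append(np.abs(np.conj(w) @ d) ** 2)             # Bartlett output

s_hat = grid[int(np.argmax(power))]
assert abs(s_hat - src_true) < 0.5
```

The MVDR variant replaces the Bartlett quadratic form with one built on the inverse of the data covariance matrix, which sharpens the focal spot at the cost of robustness to mismatch.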
Roy, Debananda; Singh, Gurdeep; Yadav, Pankaj
2016-10-01
A source apportionment study of PM10 (particulate matter) in a critically polluted area of the Jharia coalfield, India has been carried out using dispersion modelling, Principal Component Analysis (PCA), and Chemical Mass Balance (CMB) techniques. The dispersion model AERMOD was used to simplify the complexity of sources in the Jharia coalfield. PCA and CMB analysis indicates that monitoring stations near the mining area were mainly affected by emission from open coal mining and its associated activities, such as coal transportation and the loading and unloading of coal. Mine fire emission also contributed a considerable amount of particulate matter at the monitoring stations. Locations in the city area were mostly affected by vehicular, Liquefied Petroleum Gas (LPG) and Diesel Generator (DG) set emissions, and residential and commercial activities. The experimental data sampling and analysis could aid in understanding how a dispersion-model technique, along with a receptor-model-based concept, can be strategically used for quantitative analysis of natural and anthropogenic sources of PM10. Copyright © 2016. Published by Elsevier B.V.
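The CMB step can be illustrated as a nonnegative least-squares inversion of assumed source profiles; the profiles and contributions below are invented for illustration, not the Jharia measurements.

```python
# Hedged sketch of the chemical mass balance idea: ambient species
# concentrations at a receptor are a nonnegative combination of
# source chemical profiles; solve for the source contributions.
import numpy as np
from scipy.optimize import nnls

# rows: chemical species; columns: sources (e.g. coal mining, vehicles,
# mine fire) -- all values are made up for the sketch
profiles = np.array([[0.60, 0.05, 0.30],
                     [0.10, 0.70, 0.10],
                     [0.30, 0.25, 0.60]])
true_contrib = np.array([40.0, 25.0, 10.0])     # ug/m3 per source
ambient = profiles @ true_contrib               # synthetic receptor data

contrib, residual = nnls(profiles, ambient)
assert np.allclose(contrib, true_contrib, atol=1e-6)
```

Real CMB fits weight each species by its measurement uncertainty and check collinearity between profiles before trusting the apportionment.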
Measurement of magnetic field gradients using Raman spectroscopy in a fountain
NASA Astrophysics Data System (ADS)
Srinivasan, Arvind; Zimmermann, Matthias; Efremov, Maxim A.; Davis, Jon P.; Narducci, Frank A.
2017-02-01
In many experiments involving cold atoms, it is crucial to know the strength of the magnetic field and/or the magnetic field gradient at the precise location of a measurement. While auxiliary sensors can provide some of this information, the sensors are usually not perfectly co-located with the atoms and so can only provide an approximation to the magnetic field strength. In this article, we describe a technique to measure the magnetic field, based on Raman spectroscopy, using the same atomic fountain source that will be used in future magnetically sensitive measurements.
Source counting in MEG neuroimaging
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.
2009-02-01
Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, which include inverse-problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and Independent Component Analysis (ICA) methods. A key problem with inverse-problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources is ultimately made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented by utilizing information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis and hence is an entirely data-led operation. It possesses a clear physical interpretation and an efficient computational procedure. The theoretical derivation of this method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
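The eigenstructure idea can be sketched with an MDL-style information criterion on sorted covariance eigenvalues (a Wax-Kailath-type formulation), applied here to synthetic array data rather than real MEG recordings.

```python
# Hedged sketch of eigenstructure source counting: the MDL criterion
# balances how "equal" the trailing (noise) eigenvalues are against a
# penalty on the number of assumed signal components.
import numpy as np

rng = np.random.default_rng(1)
p, n_src, snapshots = 8, 2, 5000
A = rng.standard_normal((p, n_src))              # mixing of 2 sources
S = rng.standard_normal((n_src, snapshots))
X = A @ S + 0.1 * rng.standard_normal((p, snapshots))
eig = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]   # descending

def mdl(k, eig, n):
    tail = eig[k:]                               # assumed noise eigenvalues
    m = len(tail)
    ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geo/arith mean
    return -n * m * np.log(ratio) + 0.5 * k * (2 * len(eig) - k) * np.log(n)

k_hat = min(range(p), key=lambda k: mdl(k, eig, snapshots))
assert k_hat == n_src
```

Unlike a subjective threshold on beamformer peaks, the minimum of the criterion gives the source count directly from the data.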
NASA Astrophysics Data System (ADS)
Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian
2011-11-01
Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for proper design that will minimize this undesired phenomenon. Cavitation onset is more accurately determined acoustically than visually. However, if other parts of the model begin to cavitate prior to the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims at developing an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located using acoustic data collected with hydrophones and analyzed with signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with a high-speed camera. Once refined, the technique will be tested in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).
Traffic-Sensitive Live Migration of Virtual Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deshpande, Umesh; Keahey, Kate
2015-01-01
In this paper we address the problem of network contention between the migration traffic and the VM application traffic for the live migration of co-located Virtual Machines (VMs). When VMs are migrated with pre-copy, they run at the source host during the migration. Therefore the VM applications with predominantly outbound traffic contend with the outgoing migration traffic at the source host. Similarly, during post-copy migration, the VMs run at the destination host. Therefore the VM applications with predominantly inbound traffic contend with the incoming migration traffic at the destination host. Such contention increases the total migration time of the VMs and degrades the performance of the VM applications. Here, we propose a traffic-sensitive live VM migration technique to reduce the contention of migration traffic with the VM application traffic. It uses a combination of pre-copy and post-copy techniques for the migration of the co-located VMs, instead of relying upon any single pre-determined technique for the migration of all the VMs. We base the selection of migration techniques on the VMs' network traffic profiles so that the direction of migration traffic complements the direction of most VM application traffic. We have implemented a prototype of traffic-sensitive migration on the KVM/QEMU platform. In the evaluation, we compare traffic-sensitive migration against approaches that use only pre-copy or only post-copy for VM migration. We show that our approach minimizes the network contention for migration, thus reducing the total migration time and the application degradation.
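The selection rule can be sketched as follows; the VM names, traffic figures, and decision function are illustrative, not taken from the prototype.

```python
# Hedged sketch of the traffic-sensitive selection rule: pick the
# migration technique whose traffic direction complements each VM's
# dominant application traffic direction.

def choose_technique(outbound_bps, inbound_bps):
    """Pre-copy sends migration traffic out of the source host, so it
    suits VMs whose application traffic is mostly inbound; post-copy
    pulls traffic into the destination host, so it suits VMs whose
    application traffic is mostly outbound."""
    return "post-copy" if outbound_bps > inbound_bps else "pre-copy"

vms = {"web-frontend": (900e6, 100e6),    # mostly outbound responses
       "log-collector": (50e6, 700e6)}    # mostly inbound ingest
plan = {name: choose_technique(*traffic) for name, traffic in vms.items()}
print(plan)
```

A production policy would also weigh post-copy's lower fault tolerance (the VM state is split across hosts during migration) before committing to it.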
Time-Frequency Analysis of the Dispersion of Lamb Modes
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Seale, Michael D.; Smith, Barry T.
1999-01-01
Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo-Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied for the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along and perpendicular to the fiber direction. In this case, the signals contained only the lowest-order symmetric and antisymmetric modes. A least squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with experimental results.
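The core measurement, reading a per-frequency arrival time off a time-frequency ridge and converting it to group velocity, can be sketched with a spectrogram standing in for the pseudo-Wigner-Ville distribution; the signal below is a toy two-burst waveform, not a Lamb-wave computation.

```python
# Hedged sketch: arrival time of each frequency component is read off a
# time-frequency ridge; group velocity follows as distance / arrival time.
import numpy as np
from scipy.signal import spectrogram

fs, dist = 1.0e6, 0.5                        # 1 MHz sampling, 0.5 m path
t = np.arange(0, 400e-6, 1 / fs)

def burst(f0, t0, width=20e-6):
    """Gaussian-windowed tone burst at frequency f0 arriving at time t0."""
    return np.exp(-((t - t0) / width) ** 2) * np.sin(2 * np.pi * f0 * t)

# pretend dispersion: the 100 kHz component arrives at 100 us,
# the 300 kHz component at 200 us (invented arrival times)
sig = burst(100e3, 100e-6) + burst(300e3, 200e-6)

f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=64, noverlap=48)
for f0, t0 in [(100e3, 100e-6), (300e3, 200e-6)]:
    row = np.argmin(np.abs(f - f0))          # nearest frequency bin
    t_arr = tt[np.argmax(Sxx[row])]          # ridge time at this frequency
    v_group = dist / t_arr                   # group velocity estimate (m/s)
    assert abs(t_arr - t0) < 15e-6
```

Repeating this readout across all frequency rows traces out the group-velocity dispersion curve from a single broadband waveform.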
Comparative study of shear wave-based elastography techniques in optical coherence tomography
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Rolland, Jannick P.; Yao, Jianing; Meemon, Panomsak; Parker, Kevin J.
2017-03-01
We compare five optical coherence elastography techniques able to estimate the shear speed of waves generated by one and two sources of excitation. The first two techniques make use of one piezoelectric actuator in order to produce a continuous shear wave propagation or a tone-burst propagation (TBP) of 400 Hz over a gelatin tissue-mimicking phantom. The remaining techniques utilize a second actuator located on the opposite side of the region of interest in order to create three types of interference patterns: crawling waves, swept crawling waves, and standing waves, depending on the selection of the frequency difference between the two actuators. We evaluated accuracy, contrast to noise ratio, resolution, and acquisition time for each technique during experiments. Numerical simulations were also performed in order to support the experimental findings. Results suggest that in the presence of strong internal reflections, single source methods are more accurate and less variable when compared to the two-actuator methods. In particular, TBP reports the best performance with an accuracy error <4.1%. Finally, the TBP was tested in a fresh chicken tibialis anterior muscle with a localized thermally ablated lesion in order to evaluate its performance in biological tissue.
Methods of localization of Lamb wave sources on thin plates
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut
2015-04-01
Signal localization techniques are ubiquitous in both industry and academic communities. We propose a new localization method for plates, based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a Lamb wave propagation direct model, and on an experimental dataset recorded with four Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers (1-26 kHz frequency range). We compare the performance of the technique with classical source localization algorithms: arrival-time localization, time-reversal localization, and localization based on energy amplitude. Furthermore, we measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, geometry, and signal-to-noise ratio, and we show that this very versatile technique works better than the classical ones over sampling rates of 100 kHz-1 MHz. The experimental setup consists of a glass plate of 80 cm x 40 cm with a thickness of 1 cm. Signals generated by a wooden hammer hit or a steel ball hit are captured by the accelerometers placed at different locations on the plate. Numerical simulations are performed using a dispersive far-field approximation of plate waves; signals are generated using a Hertzian loading over the plate, and the effect of reflections is included using image sources outside the plate boundaries. The proposed method can be adapted to 3D environments to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
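The amplitude-attenuation inversion can be sketched as a grid search in which the unknown source strength cancels out; the geometry and decay exponent below are illustrative, not the glass-plate experiment.

```python
# Hedged sketch of localization by amplitude attenuation: with a decay
# model A_i = S / r_i^n, the unknown source strength S must be the same
# constant at every sensor, so the best candidate position is the one
# where back-projected strengths A_i * r_i^n agree most closely.
import numpy as np

sensors = np.array([[0.0, 0.0], [0.8, 0.0], [0.8, 0.4], [0.0, 0.4]])  # m
src, n_exp = np.array([0.55, 0.25]), 1.0        # true source, decay exponent

r_true = np.linalg.norm(sensors - src, axis=1)
amps = 1.0 / r_true ** n_exp                    # noise-free amplitudes

xs, ys = np.meshgrid(np.linspace(0, 0.8, 81), np.linspace(0, 0.4, 41))
best, best_err = None, np.inf
for x, y in zip(xs.ravel(), ys.ravel()):
    r = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    if np.any(r < 1e-6):
        continue                                # skip sensor positions
    pred = amps * r ** n_exp                    # should all equal S
    err = np.std(pred) / np.mean(pred)          # spread of back-projections
    if err < best_err:
        best, best_err = (x, y), err

assert np.hypot(best[0] - src[0], best[1] - src[1]) < 0.02
```

With real data the decay exponent is itself uncertain, so it can be added as a second search dimension or calibrated with known test hits.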
NASA Technical Reports Server (NTRS)
Clapp, J. L.
1973-01-01
Research objectives during 1972-73 were to: (1) Ascertain the extent to which special aerial photography can be operationally used in monitoring water pollution parameters. (2) Ascertain the effectiveness of remote sensing in the investigation of nearshore mixing and coastal entrapment in large water bodies. (3) Develop an explicit relationship of the extent of the mixing zone in terms of the outfall, effluent and water body characteristics. (4) Develop and demonstrate the use of the remote sensing method as an effective legal implement through which administrative agencies and courts can not only investigate possible pollution sources but also legally prove the source of water pollution. (5) Evaluate the field potential of remote sensing techniques in monitoring algal blooms and aquatic macrophytes, and the use of these as indicators of lake eutrophication level. (6) Develop a remote sensing technique for the determination of the location and extent of hydrologically active source areas in a watershed.
Fidan, Barış; Umay, Ilknur
2015-01-01
Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which are the permittivity and path loss coefficients for the electromagnetic signal case. Thus, accurate estimation of these coefficients has significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated into a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law, to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The efficiency of the proposed adaptive localization and tracking techniques is both mathematically analysed and verified via simulation experiments. PMID:26690441
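The coefficient-estimation step can be illustrated with a recursive least-squares fit of a log-distance RSS model; the environment parameters and noise level below are assumptions for the sketch, not values from the paper.

```python
# Hedged sketch of estimating a path-loss coefficient with RLS:
# RSS(d) = P0 - 10 * alpha * log10(d), linear in the unknowns (P0, alpha),
# updated one (distance, RSS) sample at a time.
import numpy as np

rng = np.random.default_rng(2)
p0_true, alpha_true = -40.0, 2.7                # assumed environment

theta = np.zeros(2)                             # estimates of (P0, alpha)
P = np.eye(2) * 1000.0                          # RLS covariance
for _ in range(500):
    d = rng.uniform(1.0, 50.0)                  # range sample (m)
    rss = p0_true - 10 * alpha_true * np.log10(d) \
        + 0.5 * rng.standard_normal()           # noisy RSS reading (dBm)
    h = np.array([1.0, -10 * np.log10(d)])      # regressor
    k = P @ h / (1.0 + h @ P @ h)               # RLS gain
    theta = theta + k * (rss - h @ theta)       # update estimates
    P = P - np.outer(k, h) @ P                  # update covariance

assert abs(theta[1] - alpha_true) < 0.1
```

A forgetting factor slightly below 1 in the gain and covariance updates lets the same recursion track a slowly changing propagation environment.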
The oasis of Tiout in the southwest of Algeria: Water resources and sustainable development
NASA Astrophysics Data System (ADS)
Hadidi, Abdelkader; Remini, Boualem; Habi, Mohamed; Saba, Djamel; Benmedjaed, Milloud
2016-07-01
The Tiout oasis, located in the municipality of Naama in southwestern Algeria, is known for its ksour, its palm plantations, and the quality of its fruit and vegetables, in particular its dates and their varieties. The area holds substantial groundwater and surface-water resources. For several centuries, domestic consumption and irrigation have relied on traditional water-collecting techniques such as pendulum wells and foggaras. Currently, this hydraulic heritage faces technical and social problems, in particular with the arrival of boreholes and motor-pumps. The main issues are: • drawdown and drying up of the water sources; • degradation and abandonment of the traditional techniques. The objective of this study is to make an inventory of all the water sources in the study area, to study the impact of modern technologies on the ancestral techniques, and to propose recommendations for safeguarding this hydraulic heritage.
Persistent Structures in the Turbulent Boundary Layer
NASA Technical Reports Server (NTRS)
Palumbo, Dan; Chabalko, Chris
2005-01-01
Persistent structures in the turbulent boundary layer (TBL) are located and analyzed. The data are taken from flight experiments on large commercial aircraft. An interval correlation technique is introduced which is able to locate the structures. The Morlet continuous wavelet transform is shown not only to locate persistent structures but also to decompose the pressure data in time and frequency. To better understand how power is apportioned among these structures, a discrete Coiflet wavelet is used to decompose the pressure data into orthogonal frequency bands. Results indicate that some structures persist a great deal longer in the TBL than would be expected. These structures contain significant power and may be a primary source of vibration energy in the airframe.
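An interval-correlation search of the kind introduced here can be sketched as a sliding normalized correlation against a reference interval; the signal and the buried structure below are synthetic, not flight data.

```python
# Hedged sketch of an interval-correlation search: slide a reference
# interval along the pressure record and flag offsets where the
# normalized correlation is high, indicating a persistent structure.
import numpy as np

rng = np.random.default_rng(4)
n = 4000
sig = rng.standard_normal(n)                  # stand-in for pressure data
pattern = np.sin(np.linspace(0, 6 * np.pi, 200))
sig[1500:1700] += 3 * pattern                 # buried coherent structure

def norm_corr(a, b):
    """Zero-mean normalized correlation of two equal-length intervals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = [norm_corr(sig[i:i + 200], pattern) for i in range(n - 200)]
loc = int(np.argmax(scores))
assert abs(loc - 1500) < 10
```

The wavelet decomposition then localizes the same events jointly in time and frequency, which a single correlation score cannot do.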
Beamforming array techniques for acoustic emission monitoring of large concrete structures
NASA Astrophysics Data System (ADS)
McLaskey, Gregory C.; Glaser, Steven D.; Grosse, Christian U.
2010-06-01
This paper introduces a novel method of acoustic emission (AE) analysis which is particularly suited for field applications on large plate-like reinforced concrete structures, such as walls and bridge decks. Similar to phased-array signal processing techniques developed for other non-destructive evaluation methods, this technique adapts beamforming tools developed for passive sonar and seismological applications for use in AE source localization and signal discrimination analyses. Instead of relying on the relatively weak P-wave, this method uses the energy-rich Rayleigh wave and requires only a small array of 4-8 sensors. Tests on an in-service reinforced concrete structure demonstrate that the azimuth of an artificial AE source can be determined via this method for sources located up to 3.8 m from the sensor array, even when the P-wave is undetectable. The beamforming array geometry also allows additional signal processing tools to be implemented, such as the VESPA process (VElocity SPectral Analysis), whereby the arrivals of different wave phases are identified by their apparent velocity of propagation. Beamforming AE can reduce sampling rate and time synchronization requirements between spatially distant sensors which in turn facilitates the use of wireless sensor networks for this application.
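The delay-and-sum idea behind beamforming with a small sensor array can be sketched as follows. The array geometry, wave speed, and impulsive burst are toy values, and a real implementation would beamform the energy-rich Rayleigh-wave arrivals the abstract describes rather than a clean synthetic pulse:

```python
import numpy as np

def beamform_azimuth(signals, coords, fs, c, n_az=360):
    """Delay-and-sum beamforming over a small array for a plane wave.
    signals: (n_sensors, n_samples); coords: (n_sensors, 2) positions in metres;
    c: assumed surface-wave speed (m/s). Scans candidate propagation azimuths
    and returns the one (degrees) whose plane-wave delays maximize stacked energy."""
    best_az, best_e = 0.0, -np.inf
    for az in np.linspace(0.0, 360.0, n_az, endpoint=False):
        u = np.array([np.cos(np.radians(az)), np.sin(np.radians(az))])
        shifts = np.round(coords @ u / c * fs).astype(int)   # delay in samples
        stack = np.zeros(signals.shape[1])
        for sig, s in zip(signals, shifts):
            stack += np.roll(sig, -s)                        # undo delay, then sum
        e = float(np.sum(stack ** 2))
        if e > best_e:
            best_az, best_e = az, e
    return best_az

# Synthetic test: an impulsive burst crossing a 5-sensor array from azimuth 30 deg
fs, c = 50_000.0, 2000.0
coords = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [1.0, 3.0]])
n = 4000
base = np.exp(-((np.arange(n) - n // 2) ** 2) / (2 * 15.0 ** 2))
az_true = 30.0
u = np.array([np.cos(np.radians(az_true)), np.sin(np.radians(az_true))])
signals = np.array([np.roll(base, s) for s in
                    np.round(coords @ u / c * fs).astype(int)])
print(beamform_azimuth(signals, coords, fs, c))   # near 30
```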
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in the monitoring of CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural both in the problem of identifying the source location and in that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. 
In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of a single nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
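A minimal instance of sparse recovery with a nonnegativity constraint, one of the modifications the abstract describes, is a projected ISTA iteration: because the unknowns are constrained to be nonnegative, the l1 penalty reduces to a plain sum and the proximal step becomes a shifted projection onto the nonnegative orthant. The matrix, sparsity level, and penalty weight below are invented for illustration, not the ETEX setup:

```python
import numpy as np

def nn_ista(A, b, lam, step=None, n_iter=2000):
    """Nonnegative ISTA for  min_x 0.5*||Ax - b||^2 + lam*sum(x),  x >= 0.
    For x >= 0 the l1 norm equals sum(x), so the proximal step is a shifted
    projection onto the nonnegative orthant."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(0.0, x - step * (grad + lam))
    return x

rng = np.random.default_rng(1)
m, n = 40, 100                                   # few observations, many candidates
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [2.0, 1.5, 3.0]            # few nonnegative release points
b = A @ x_true
x_hat = nn_ista(A, b, lam=0.01)
support = np.flatnonzero(x_hat > 0.1)
print(support)                                   # expected to recover the true release locations
```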
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E. S.
2012-12-01
Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time, and back-propagating them from the receiver location individually into the sub-surface as a new source time function. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. 
For our analysis, we use the velocity model of Thurber et al. (2006) interpolated to a grid spacing of 50 m. This grid spacing corresponds to frequencies of up to 8 Hz, which is suitable for calculating the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault, as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of signal-to-noise ratio, number of stations used, and uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show accurate location results to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model, and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those of Sumy and others.
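The location criterion, a median cross-correlation coefficient of back-propagated fields evaluated over a grid, can be sketched in one dimension. Here back-propagation through the velocity model is replaced by simply undoing an assumed travel-time shift, and all geometry and noise values are invented; the actual method uses a 3-D finite-difference curl field:

```python
import numpy as np

def median_cc(traces):
    """Median pairwise zero-lag correlation coefficient between the
    back-propagated traces at one grid point (shape: n_stations x n_samples)."""
    n = traces.shape[0]
    ccs = []
    for i in range(n):
        for j in range(i + 1, n):
            ccs.append(np.corrcoef(traces[i], traces[j])[0, 1])
    return float(np.median(ccs))

# Toy 1-D "model volume": only at the true source position do the
# back-shifted station records line up with each other.
fs = 500.0
t = np.arange(0, 2, 1 / fs)
src = np.exp(-((t - 1.0) ** 2) / (2 * 0.02 ** 2))      # source-time function
c = 1000.0                                              # assumed wave speed (m/s)
stations = np.array([0.0, 400.0, 900.0])                # station positions (m)
grid = np.linspace(0.0, 1000.0, 101)                    # candidate source positions
true_x = 300.0

rng = np.random.default_rng(2)
records = [np.roll(src, int(round(abs(true_x - s) / c * fs))) +
           rng.normal(0, 0.03, t.size) for s in stations]

scores = []
for x in grid:
    back = np.array([np.roll(r, -int(round(abs(x - s) / c * fs)))
                     for r, s in zip(records, stations)])
    scores.append(median_cc(back))
best = grid[int(np.argmax(scores))]
print(best)   # near the true source at 300 m
```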
Modeling Finite Faults Using the Adjoint Wave Field
NASA Astrophysics Data System (ADS)
Hjörleifsdóttir, V.; Liu, Q.; Tromp, J.
2004-12-01
Time-reversal acoustics, a technique in which an acoustic signal is recorded by an array of transducers, time-reversed, and retransmitted, is used, e.g., in medical therapy to locate and destroy gallstones (for a review see Fink, 1997). As discussed by Tromp et al. (2004), time-reversal techniques for locating sources are closely linked to so-called `adjoint methods' (Talagrand and Courtier, 1987), which may be used to evaluate the gradient of a misfit function. Tromp et al. (2004) illustrate how a (finite) source inversion may be implemented based upon the adjoint wave field by writing the change in the misfit function, δχ, due to a change in the moment-density tensor, δm, as an integral of the adjoint strain field ε(x, T-t) over the fault plane Σ: δχ = ∫_0^T ∫_Σ ε(x, T-t) : δm(x, t) d²x dt. We find that if the real fault plane is located at a distance δh in the direction of the fault normal n̂, then to first order an additional term ∫_0^T ∫_Σ δh(x) ∂_n ε(x, T-t) : m(x, t) d²x dt is added to the change in the misfit function. The adjoint strain is computed by using the time-reversed difference between data and synthetics recorded at all receivers as simultaneous sources and recording the resulting strain on the fault plane. In accordance with time-reversal acoustics, all the resulting waves will constructively interfere at the position of the original source in space and time. The level of convergence will be determined by factors such as the source-receiver geometry, the frequency of the recorded data and synthetics, and the accuracy of the velocity structure used when back propagating the wave field. The terms ε(x, T-t) and ∂_n ε(x, T-t) : m(x, t) can be viewed as sensitivity kernels for the moment density and the fault-plane location respectively. By looking at these quantities we can make an educated choice of fault parametrization given the data in hand. The process can then be repeated to invert for the best source model, as demonstrated by Tromp et al. 
(2004) for the magnitude of a point force. In this presentation we explore the applicability of adjoint methods to estimating finite source parameters. Fink, M. (1997), Time reversed acoustics, Physics Today, 50(3), 34-40. Talagrand, O., and P. Courtier (1987), Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory, Q. J. R. Meteorol. Soc., 113, 1311-1328. Tromp, J., C. Tape, and Q. Liu (2004), Waveform tomography, adjoint methods, time reversal, and banana-doughnut kernels, Geophys. Jour. Int., in press
Martian methane plume models for defining Mars rover methane source search strategies
NASA Astrophysics Data System (ADS)
Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed
2018-07-01
The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.
NASA Astrophysics Data System (ADS)
Comsa, Daria Craita
2008-10-01
There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to 4 % error in the retrieved depth for source depths of up to 12mm, while the error in the retrieved source strength increased from 5.5 % at 2mm depth, to 18 % at 12mm depth. 
Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of +/-0.7 mm in the retrieved depth and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. 
Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
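The iterative model-based fitting described above (comparing modeled and measured planar images with a Levenberg-Marquardt-type algorithm to estimate source depth and strength) can be sketched with a deliberately simplified diffusion-style point-source model. The functional form, optical properties, and noise level below are assumptions for illustration, not the authors' model:

```python
import numpy as np
from scipy.optimize import least_squares

def surface_signal(rho, depth, strength, mu_eff=1.0):
    """Simplified diffusion-style model of the planar image of an isotropic
    point source at `depth` (mm) below the surface; mu_eff in 1/mm.
    A sketch of the model class, not the authors' full diffusion model."""
    r = np.sqrt(rho ** 2 + depth ** 2)
    return strength * np.exp(-mu_eff * r) / r ** 2

# Synthetic measured image profile: source 6 mm deep, strength 50 (arbitrary units)
rho = np.linspace(0.0, 15.0, 60)                 # radial pixel positions (mm)
rng = np.random.default_rng(3)
data = surface_signal(rho, depth=6.0, strength=50.0) * (1 + rng.normal(0, 0.02, rho.size))

def residuals(p):
    # p = [depth, strength]; iteratively compared against the measured image
    return surface_signal(rho, p[0], p[1]) - data

fit = least_squares(residuals, x0=[3.0, 10.0], bounds=([0.1, 0.0], [20.0, 1e4]))
depth_hat, strength_hat = fit.x
print(depth_hat, strength_hat)                   # near the true depth and strength
```

The depth is constrained by the radial shape of the image and the strength by its amplitude, which is why both can be recovered from a single planar view under this model.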
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seth, Arpan; Klise, Katherine A.; Siirola, John D.
In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. Various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. The following manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors including the size of the WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods and a testing framework that includes the test cases and analysis tools presented in this paper have been integrated into EPA's Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.
2018-01-01
Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located, establishing different crime opportunities for different crimes. However, few modeling efforts derive spatiotemporal population models that allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include: Census data as source data for the existing population; Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space; and gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and with spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
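Density-weighted areal interpolation itself is straightforward to sketch: each source zone's population is redistributed over its grid cells in proportion to ancillary weights (here, invented tweet counts), falling back to an even spread where no ancillary signal exists, so that zone totals are preserved:

```python
import numpy as np

def density_weighted_interpolation(source_counts, membership, ancillary):
    """Disaggregate source-zone populations onto grid cells in proportion to an
    ancillary density surface (density-weighted areal interpolation).
    source_counts: (n_zones,) population per census zone
    membership:    (n_cells,) index of the zone containing each cell
    ancillary:     (n_cells,) ancillary weight per cell (e.g. geotagged tweets)"""
    out = np.zeros(len(membership), dtype=float)
    for z, total in enumerate(source_counts):
        cells = np.flatnonzero(membership == z)
        w = ancillary[cells].astype(float)
        if w.sum() == 0:                      # no ancillary signal: spread evenly
            w = np.ones(len(cells))
        out[cells] = total * w / w.sum()      # zone totals are preserved exactly
    return out

# Two zones of three cells each; tweets concentrate population in a few cells
pop = np.array([900.0, 300.0])
zone = np.array([0, 0, 0, 1, 1, 1])
tweets = np.array([6, 3, 0, 0, 0, 0])
est = density_weighted_interpolation(pop, zone, tweets)
print(est)   # → [600. 300.   0. 100. 100. 100.]
```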
NASA Astrophysics Data System (ADS)
Brouwer de Koning, Susan G.; Baltussen, E. J. M.; Karakullukcu, M. Baris; Smit, L.; van Veen, R. L. P.; Hendriks, Benno H. W.; Sterenborg, H. J. C. M.; Ruers, Theo J. M.
2017-02-01
This ex vivo study evaluates the feasibility of diffuse reflectance spectroscopy (DRS) for discriminating tumor from healthy oral tissue, with the aim to develop a technique that can be used to determine a complete excision of tumor through intraoperative margin assessment. DRS spectra were acquired on fresh surgical specimens from patients with an oral squamous cell carcinoma. The spectra represent a measure of diffuse light reflectance (wavelength range of 400-1600 nm), detected after illuminating tissue with a source fiber at 1.0 and 2.0 mm distances from a detection fiber. Spectra were obtained from 23 locations of tumor tissue and 16 locations of healthy muscle tissue. Biopsies were taken from all measured locations to facilitate an optimal correlation between spectra and pathological information. The area under the spectrum was used as a parameter to classify spectra of tumor and healthy tissue. Next, a receiver operating characteristics (ROC) analysis was performed to provide the area under the receiver operating curve (AUROC) as a measure for discriminative power. The area under the spectrum between 650 and 750 nm was used in the ROC analysis and provided AUROC values of 0.99 and 0.97, for distances of 1 mm and 2 mm between source and detector fiber, respectively. DRS can discriminate tumor from healthy oral tissue in an ex vivo setting. More specimens are needed to further evaluate this technique with component analyses and classification methods, prior to in vivo patient measurements.
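The AUROC computation used in such a study can be reproduced with the rank-based (Mann-Whitney) identity: the AUROC equals the probability that a randomly chosen tumor score exceeds a randomly chosen healthy score, with ties counted as one half. The spectrum-area values below are invented for illustration, not the study's measurements:

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the fraction
    of (positive, negative) pairs where the positive score is higher,
    counting ties as 1/2."""
    s_p = np.asarray(scores_pos, float)[:, None]
    s_n = np.asarray(scores_neg, float)[None, :]
    wins = (s_p > s_n).sum() + 0.5 * (s_p == s_n).sum()
    return wins / (s_p.size * s_n.size)

# Hypothetical area-under-spectrum values (650-750 nm) for tumor vs healthy sites
tumor   = [4.1, 3.8, 5.0, 4.6, 3.9]
healthy = [2.0, 2.5, 1.8, 3.85]
print(auroc(tumor, healthy))   # → 0.95
```

This pairwise formulation gives the same value as integrating the empirical ROC curve, without having to sweep thresholds explicitly.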
Principal Locations of Metal Loading from Flood-Plain Tailings, Lower Silver Creek, Utah, April 2004
Kimball, Briant A.; Runkel, Robert L.; Walton-Day, Katherine
2007-01-01
Because of the historical deposition of mill tailings in flood plains, the process of determining total maximum daily loads for streams in an area like the Park City mining district of Utah is complicated. Understanding the locations of metal loading to Silver Creek and the relative importance of these locations is necessary to make science-based decisions. Application of tracer-injection and synoptic-sampling techniques provided a means to quantify and rank the many possible source areas. A mass-loading study was conducted along a 10,000-meter reach of Silver Creek, Utah, in April 2004. Mass-loading profiles based on spatially detailed discharge and chemical data indicated five principal locations of metal loading. These five locations contributed more than 60 percent of the cadmium and zinc loads to Silver Creek along the study reach and can be considered locations where remediation efforts could have the greatest effect upon improvement of water quality in Silver Creek.
Ultrabroadband phased-array radio frequency (RF) receivers based on optical techniques
NASA Astrophysics Data System (ADS)
Overmiller, Brock M.; Schuetz, Christopher A.; Schneider, Garrett; Murakowski, Janusz; Prather, Dennis W.
2014-03-01
Military operations require the ability to locate and identify electronic emissions in the battlefield environment. However, recent developments in radio detection and ranging (RADAR) and communications technology are making it harder to effectively identify such emissions. Phased-array systems aid in discriminating emitters in the scene by virtue of their relatively high-gain beam steering and nulling capabilities. For the purpose of locating emitters, we present an approach to realizing a broadband receiver based on optical processing techniques applied to the response of detectors in conformal antenna arrays. This approach utilizes photonic techniques that enable us to capture, route, and process the incoming signals. Optical modulators upconvert incoming signals at frequencies up to and exceeding 110 GHz with appreciable conversion efficiency and route these signals via fiber optics to a central processing location. This central processor consists of a closed-loop phase control system, which compensates for phase fluctuations induced on the fibers by thermal or acoustic vibrations, and an optical heterodyne stage for signal conversion down to baseband. Our optical heterodyne approach uses injection-locked paired optical sources to perform heterodyne downconversion and frequency identification of the detected emission. Preliminary geolocation and frequency identification testing of electronic emissions has been performed, demonstrating the capabilities of our RF receiver.
Enhancing source location protection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Chen, Juan; Lin, Zhengkui; Wu, Di; Wang, Bailing
2015-12-01
Wireless sensor networks are widely deployed in the Internet of Things to monitor valuable objects. Once an object is monitored, the sensor nearest to it, known as the source, periodically informs the base station about the object's information. Attackers can therefore capture the object by localizing the source, and many protocols have been proposed to secure the source location. In this paper, however, we show that typical source location protection protocols generate phantom locations that are not only near the source but also highly localized. As a result, attackers can trace the source easily from these phantom locations. To address these limitations, we propose a protocol to enhance source location protection (SLE). With phantom locations far away from the source and widely distributed, SLE improves source location anonymity significantly. Theoretical analysis and simulation results show that our SLE provides strong source location privacy preservation, and the average safety period increases by nearly one order of magnitude compared with existing work, at low communication cost.
Blecha, Kevin A.; Alldredge, Mat W.
2015-01-01
Animal space use studies using GPS collar technology are increasingly incorporating behavior-based analysis of spatio-temporal data in order to expand inferences of resource use. GPS location cluster analysis is one such technique applied to large carnivores to identify the timing and location of feeding events. For logistical and financial reasons, researchers often implement predictive models for identifying these events. We present two separate improvements for predictive models that future practitioners can implement. Thus far, feeding prediction models have incorporated a small range of covariates, usually limited to spatio-temporal characteristics of the GPS data. Using GPS-collared cougars (Puma concolor), we include activity sensor data as an additional covariate to increase prediction performance for feeding presence/absence. Integral to the predictive modeling of feeding events is a ground-truthing component, in which GPS location clusters are visited by human observers to confirm the presence or absence of feeding remains. Failing to account for sources of ground-truthing false absences can bias the number of predicted feeding events low. We therefore account for some ground-truthing error sources directly in the model with covariates and when applying model predictions. Accounting for these errors resulted in a 10% increase in the number of clusters predicted to be feeding events. Using a double-observer design, we show that the ground-truthing false-absence rate is relatively low (4%) using a search delay of 2-60 days. Overall, we provide two separate improvements to GPS cluster analysis techniques that can be expanded upon and implemented in future studies interested in identifying feeding behaviors of large carnivores. PMID:26398546
A low-frequency near-field interferometric-TOA 3-D Lightning Mapping Array
NASA Astrophysics Data System (ADS)
Lyu, Fanchao; Cummer, Steven A.; Solanki, Rahulkumar; Weinert, Joel; McTague, Lindsay; Katko, Alex; Barrett, John; Zigoneanu, Lucian; Xie, Yangbo; Wang, Wenqi
2014-11-01
We report on the development of an easily deployable LF near-field interferometric-time of arrival (TOA) 3-D Lightning Mapping Array applied to imaging of entire lightning flashes. An interferometric cross-correlation technique is applied in our system to compute windowed two-sensor time differences with submicrosecond time resolution before TOA is used for source location. Compared to previously reported LF lightning location systems, our system captures many more LF sources. This is due mainly to the improved mapping of continuous lightning processes by using this type of hybrid interferometry/TOA processing method. We show with five station measurements that the array detects and maps different lightning processes, such as stepped and dart leaders, during both in-cloud and cloud-to-ground flashes. Lightning images mapped by our LF system are remarkably similar to those created by VHF mapping systems, which may suggest some special links between LF and VHF emission during lightning processes.
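The hybrid step of computing windowed two-sensor time differences with sub-microsecond resolution can be sketched as a cross-correlation whose peak is refined by three-point parabolic interpolation. The pulse shape, carrier frequency, and sampling rate below are invented, and the real system applies this per window before the TOA solve:

```python
import numpy as np

def windowed_tdoa(a, b, fs):
    """Two-sensor time difference from the peak of the cross-correlation,
    refined to sub-sample precision by parabolic (3-point) interpolation.
    Positive result means `a` receives the signal later than `b`."""
    cc = np.correlate(a, b, mode="full")
    k = int(np.argmax(cc))
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        denom = y0 - 2 * y1 + y2
        delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        delta = 0.0
    lag = (k - (len(b) - 1)) + delta          # lag in (fractional) samples
    return lag / fs

# Synthetic LF pulse delayed by 7.3 microseconds between two sensors
fs = 1e6                                      # 1 MHz sampling
t = np.arange(0, 2e-3, 1 / fs)
pulse = np.exp(-((t - 1e-3) ** 2) / (2 * (20e-6) ** 2)) * np.sin(2 * np.pi * 60e3 * t)
true_dt = 7.3e-6
b = pulse
a = np.interp(t - true_dt, t, pulse)          # sensor a receives the pulse later
print(windowed_tdoa(a, b, fs) * 1e6)          # microseconds, near 7.3
```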
Observation of gravity waves during the extreme tornado outbreak of 3 April 1974
NASA Technical Reports Server (NTRS)
Hung, R. J.; Phan, T.; Smith, R. E.
1978-01-01
A continuous wave-spectrum high-frequency radiowave Doppler sounder array was used to observe upper-atmospheric disturbances during an extreme tornado outbreak. The observations indicated that gravity waves with two harmonic wave periods were detected at the F-region ionospheric height. Using a group ray path computational technique, the observed gravity waves were traced in order to locate potential sources. The signals were apparently excited 1-3 hours before tornado touchdown. Reverse ray tracing indicated that the wave source was located at the aurora zone with a Kp index of 6 at the time of wave excitation. The summation of the 24-hour Kp index for the day was 36. The results agree with existing theories (Testud, 1970; Titheridge, 1971; Kato, 1976) for the excitation of large-scale traveling ionospheric disturbances associated with geomagnetic activity in the aurora zone.
Computing Fault Displacements from Surface Deformations
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy
2006-01-01
Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations) for the displacements and strains caused by a fault located in an isotropic, elastic half-space. The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations. This work
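The inversion loop (a forward model nested inside a nonlinear parameter search, e.g. a Nelder-Mead downhill simplex, as the program's name suggests) can be sketched with a toy 2-D screw-dislocation forward model standing in for the elastic half-space point source; all geometry and parameter values are invented:

```python
import numpy as np
from scipy.optimize import minimize

def forward(params, x):
    """Toy 2-D screw-dislocation forward model: surface fault-parallel
    displacement from slip `s` on a fault at position x0, locked above depth d.
    A stand-in for Simplex's elastic half-space point-source models."""
    x0, d, s = params
    return (s / np.pi) * np.arctan((x - x0) / d)

# Synthetic geodetic observations along a profile crossing the fault
x_obs = np.linspace(-50.0, 50.0, 41)             # km
true = np.array([5.0, 10.0, 30.0])               # x0 (km), depth (km), slip (mm/yr)
rng = np.random.default_rng(4)
data = forward(true, x_obs) + rng.normal(0, 0.2, x_obs.size)

def misfit(params):
    # Sum-of-squares misfit between forward-modeled and observed deformation
    return np.sum((forward(params, x_obs) - data) ** 2)

res = minimize(misfit, x0=[0.0, 5.0, 20.0], method="Nelder-Mead")
print(res.x)   # should approach [5, 10, 30]
```

In the real program the forward model is richer and parameter covariances are also reported, but the structure, forward model inside a nonlinear minimizer, is the same.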
An overview of the sustainability of solid waste management at military installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borglin, S.; Shore, J.; Worden, H.
2009-08-15
Sustainable municipal solid waste management at military installations necessitates a combined approach that includes waste reduction, alternative disposal techniques, and increased recycling. Military installations are unique because they often represent large employers in the region in which they are located, so any practices they employ affect the overall waste management strategies of the region. Solutions for waste sustainability will depend on operational directives, base location, availability of resources such as water and energy, and size of population. Presented in this paper are descriptions of available waste strategies that can be used to support sustainable waste management. The results presented indicate source reduction and recycling to be the most sustainable solutions. However, new waste-to-energy plants and composting have the potential to improve on these well-proven techniques and allow military installations to achieve sustainable waste management.
In situ surface/interface x-ray diffractometer for oxide molecular beam epitaxy
NASA Astrophysics Data System (ADS)
Lee, J. H.; Tung, I. C.; Chang, S.-H.; Bhattacharya, A.; Fong, D. D.; Freeland, J. W.; Hong, Hawoong
2016-01-01
In situ studies of oxide molecular beam epitaxy by synchrotron x-ray scattering have been made possible by upgrading an existing UHV/molecular beam epitaxy (MBE) six-circle diffractometer system. For oxide MBE growth, pure ozone delivery to the chamber has been made available, and several new deposition sources have been added on a new 12 in. CF (ConFlat, a registered trademark of Varian, Inc.) flange. X-ray diffraction is the major probe of film growth and structure in the system. In the original design, electron diffraction was intended as a secondary diagnostic, available without the need for x-rays and located at separate positions. Deposition of films is possible at the two diagnostic positions, and the evaporation sources are aimed at a fixed point between the two locations. Ozone can be supplied through two separate nozzles, one for each location, and two separate thickness monitors are installed. Additional features of the equipment are also presented, together with data taken during typical oxide film growth to illustrate the depth of information available via in situ x-ray techniques.
Focus characterization at an X-ray free-electron laser by coherent scattering and speckle analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikorski, Marcin; Song, Sanghoon; Schropp, Andreas
2015-04-14
X-ray focus optimization and characterization based on coherent scattering and quantitative speckle size measurements was demonstrated at the Linac Coherent Light Source. Its performance as a single-pulse free-electron laser beam diagnostic was tested for two typical focusing configurations. The results derived from the speckle size/shape analysis show the effectiveness of this technique in finding the location, size, and shape of the focus. In addition, its single-pulse compatibility enables users to capture pulse-to-pulse fluctuations in focus properties, unlike other techniques that require scanning and averaging.
Innovative signal processing for Johnson Noise thermometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, N. Dianne Bull; Britton, Jr, Charles L.; Roberts, Michael
This report summarizes a newly developed algorithm that subtracts electromagnetic interference (EMI). EMI performance is very important to this measurement because any interference, in the form of pickup from external signal sources such as fluorescent lighting ballasts and motors, can skew the measurement. Two methods of removing EMI were developed and tested at various locations. This report also summarizes the testing performed at facilities outside Oak Ridge National Laboratory using both EMI removal techniques. The first EMI removal technique was reviewed in previous milestone reports; this report therefore details the second method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high-confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm, we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master-event and multiple-event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple-event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple-event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation, as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
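The variance-based weighting described above can be sketched as follows. This is a minimal illustration, not the PMEL implementation: the residual matrix, station count, and the inverse-standard-deviation weighting rule are hypothetical stand-ins.

```python
import numpy as np

def station_correction_weights(residuals):
    """Per-station corrections and inverse-spread arrival weights.

    residuals: (n_events, n_stations) travel-time residuals (observed
    minus predicted arrival times) from a multiple-event location run.
    """
    corrections = residuals.mean(axis=0)      # mean residual = station correction
    spread = residuals.std(axis=0, ddof=1)    # uncertainty of that correction
    weights = 1.0 / np.maximum(spread, 1e-6)  # consistent stations get more say
    return corrections, weights

# Hypothetical residuals: station 0 is consistent, station 1 is noisy,
# so station 0's arrivals should receive the larger weight.
rng = np.random.default_rng(0)
res = np.column_stack([0.5 + 0.01 * rng.standard_normal(20),
                       0.5 + 0.50 * rng.standard_normal(20)])
corr, w = station_correction_weights(res)
```

In a full inversion these weights would multiply the arrival-time equations for each station, alongside the existing measurement- and model-error weights.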
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
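The AMIS recycling idea can be sketched on a toy problem. Everything below is a hypothetical stand-in: the distance-decay "dispersion model", sensor layout, noise level, and proposal settings are illustrative, and the release rate is folded into the forward model rather than estimated analytically as in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: true source at (2, 3); "sensors" record a distance-decayed signal.
sensors = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
true_src = np.array([2., 3.])

def forward(src):
    d = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / (1.0 + d**2)          # stand-in for a real dispersion model

obs = forward(true_src) + 0.01 * rng.standard_normal(len(sensors))

def log_post(src):                     # flat prior, Gaussian likelihood
    return -0.5 * np.sum((forward(src) - obs)**2) / 0.01**2

def gauss_logpdf(x, mu, sig):          # isotropic 2-D Gaussian proposal density
    return -0.5 * np.sum((x - mu)**2, axis=-1) / sig**2 - np.log(2 * np.pi * sig**2)

# Minimal AMIS loop: adapt the proposal mean each iteration and re-weight
# ALL samples drawn so far against the mixture of every proposal used so far.
mus, sig, N, T = [np.array([2.5, 2.5])], 1.0, 200, 5
samples = []
for t in range(T):
    samples.append(mus[-1] + sig * rng.standard_normal((N, 2)))
    X = np.vstack(samples)
    mix = np.mean([np.exp(gauss_logpdf(X, m, sig)) for m in mus], axis=0)
    lw = np.array([log_post(xi) for xi in X]) - np.log(mix)
    w = np.exp(lw - lw.max()); w /= w.sum()
    mus.append((w[:, None] * X).sum(axis=0))   # adapt proposal mean

est = mus[-1]                                   # posterior-mean estimate of location
```

The deterministic-mixture denominator (`mix`) is what lets earlier samples be recycled rather than discarded, which is the sampling-efficiency point made in the abstract.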
NASA Astrophysics Data System (ADS)
Almurshedi, Ahmed; Ismail, Abd Khamim
2015-04-01
EEG source localization was studied in order to determine the locations of the brain sources responsible for the potentials measured at the scalp electrodes, using EEGLAB with the Independent Component Analysis (ICA) algorithm. Neuronal sources generate current dipoles in different brain states, which give rise to the measured potentials. Current dipole source localization is performed by fitting an equivalent current dipole model using a non-linear optimization technique with a standardized boundary element head model. To fit dipole models to ICA components in an EEGLAB dataset, ICA decomposition is performed and appropriate components are selected for fitting. The topographical scalp distributions of the delta, theta, alpha, and beta power spectra and the cross coherence of the EEG signals were observed. In the closed-eyes condition, during both resting and action states of the brain, the alpha band was activated in the occipital (O1, O2) and parietal (P3, P4) areas; the parieto-occipital area of the brain is therefore active in both resting and action states. However, cross coherence shows more coherence between the right and left hemispheres in the action state of the brain than in the resting state. These preliminary results indicate that these potentials arise from the same generators in the brain.
Full moment tensor and source location inversion based on full waveform adjoint method
NASA Astrophysics Data System (ADS)
Morency, C.
2012-12-01
The development of high-performance computing and numerical techniques enabled global and regional tomography to reach high levels of precision, and seismic adjoint tomography has become a state-of-the-art tomographic technique. The method was successfully used for crustal tomography of Southern California (Tape et al., 2009) and Europe (Zhu et al., 2012). Here, I will focus on the determination of source parameters (full moment tensor and location) based on the same approach (Kim et al., 2011). The method relies on full wave simulations and takes advantage of the misfit between observed and synthetic seismograms. An adjoint wavefield is calculated by back-propagating the difference between observed and synthetic seismograms from the receivers to the source. The interaction between this adjoint wavefield and the regular forward wavefield helps define Frechet derivatives of the source parameters, that is, the sensitivity of the misfit with respect to the source parameters. Source parameters are then recovered by minimizing the misfit based on a conjugate gradient algorithm using the Frechet derivatives. First, I will demonstrate the method on synthetic cases before tackling events recorded at the Geysers. The velocity model used at the Geysers is based on the USGS 3D velocity model. Waveform datasets come from the Northern California Earthquake Data Center. Finally, I will discuss strategies to ultimately use this method to characterize smaller events for microseismic and induced seismicity monitoring. References: - Tape, C., Q. Liu, A. Maggi, and J. Tromp, 2009, Adjoint tomography of the Southern California crust: Science, 325, 988-992. - Zhu, H., Bozdag, E., Peter, D., and Tromp, J., 2012, Structure of the European upper mantle revealed by adjoint tomography: Nature Geoscience, 5, 493-498. - Kim, Y., Q. Liu, and J. Tromp, 2011, Adjoint centroid-moment tensor inversions: Geophys. J. Int., 186, 264-278. Prepared by LLNL under Contract DE-AC52-07NA27344.
Development of lidar sensor for cloud-based measurements during convective conditions
NASA Astrophysics Data System (ADS)
Vishnu, R.; Bhavani Kumar, Y.; Rao, T. Narayana; Nair, Anish Kumar M.; Jayaraman, A.
2016-05-01
Atmospheric convection is a natural phenomenon associated with heat transport. Convection is strong during daylight periods and vigorous in the summer months, when severe ground heating associated with strong winds is experienced. The tropics are considered source regions for strong convection, and the formation of thunderstorm clouds is common during this period. The location of the cloud base and its associated dynamics are important for understanding the influence of convection on the atmosphere. Lidars are sensitive to Mie scattering and are more suitable instruments for locating clouds in the atmosphere than instruments utilizing the radio frequency spectrum. Thunderstorm clouds are composed of hydrometeors and strongly scatter laser light. Recently, a lidar technique was developed at the National Atmospheric Research Laboratory (NARL), a Department of Space (DOS) unit located at Gadanki near Tirupati. The lidar technique employs slant-path operation and provides high-resolution measurements of the cloud base location in real time. The laser-based remote sensing technique allows measurement of the atmosphere every second at 7.5 m range resolution. The high-resolution data permit assessment of updrafts at the cloud base. The lidar also provides the real-time convective boundary layer height, using aerosols as tracers of atmospheric dynamics. The lidar sensor is planned for upgrading with a scanning facility to study cloud dynamics in the spatial direction. In this presentation, we describe the lidar sensor technology and its use for high-resolution cloud base measurements during convective conditions over the lidar site at Gadanki.
Location of acoustic emission sources generated by air flow
Kosel; Grabec; Muzic
2000-03-01
The location of continuous acoustic emission sources is a difficult problem in non-destructive testing. This article describes one-dimensional location of continuous acoustic emission sources using an intelligent locator, which solves the location problem by learning from examples. To verify whether continuous acoustic emission caused by leakage air flow can be located accurately by the intelligent locator, an experiment on a thin aluminum band was performed. Results show that an accurate location can be determined by combining a cross-correlation function with an appropriate bandpass filter. With this combination, both discrete and continuous acoustic emission sources can be located, using discrete acoustic emission sources for locator learning.
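The cross-correlation-plus-bandpass combination can be sketched for a simulated one-dimensional band. The geometry, wave speed, sampling rate, noise level, and brick-wall filter below are hypothetical stand-ins for the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D setup: two sensors a distance L apart on a thin band,
# wave speed c, continuous (noise-like) AE source at position `src` (m)
# measured from sensor 1.  All numbers are illustrative.
fs, c, L, src = 1_000_000.0, 5000.0, 0.5, 0.15
n = 2**14
source_sig = rng.standard_normal(n)

def delay(sig, t):
    """Delay a signal by t seconds via an FFT phase shift (circular)."""
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    return np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * f * t), len(sig))

s1 = delay(source_sig, src / c) + 0.3 * rng.standard_normal(n)
s2 = delay(source_sig, (L - src) / c) + 0.3 * rng.standard_normal(n)

def bandpass(sig, lo, hi):
    """Brick-wall FFT bandpass (a stand-in for a proper filter design)."""
    F = np.fft.rfft(sig)
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    F[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(F, len(sig))

b1, b2 = bandpass(s1, 50e3, 200e3), bandpass(s2, 50e3, 200e3)

# Circular cross-correlation via FFT; its peak gives the arrival-time
# difference tau = t1 - t2, which maps linearly to position on the band.
xc = np.fft.irfft(np.fft.rfft(b1) * np.conj(np.fft.rfft(b2)), n)
lag = int(np.argmax(xc))
lag = lag - n if lag > n // 2 else lag
tau = lag / fs
x_est = 0.5 * (L + c * tau)
```

The bandpass step matters because the correlation peak of broadband continuous emission sharpens once frequency bands with poor coherence are removed.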
Characterization of metals emitted from motor vehicles.
Schauer, James J; Lough, Glynis C; Shafer, Martin M; Christensen, William F; Arndt, Michael F; DeMinter, Jeffrey T; Park, June-Soo
2006-03-01
A systematic approach was used to quantify the metals present in particulate matter emissions associated with on-road motor vehicles. Consistent sampling and chemical analysis techniques were used to determine the chemical composition of particulate matter less than 10 μm in aerodynamic diameter (PM10) and particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5), including analysis of trace metals by inductively coupled plasma mass spectrometry (ICP-MS). Four sources of metals were analyzed in emissions associated with motor vehicles: tailpipe emissions from gasoline- and diesel-powered vehicles, brake wear, tire wear, and resuspended road dust. Profiles for these sources were used in a chemical mass balance (CMB) model to quantify their relative contributions to the metal emissions measured in roadway tunnel tests in Milwaukee, Wisconsin. Roadway tunnel measurements were supplemented by parallel measurements of atmospheric particulate matter and associated metals at three urban locations: Milwaukee and Waukesha, Wisconsin, and Denver, Colorado. Ambient aerosol samples were collected every sixth day for one year and analyzed by the same chemical analysis techniques used for the source samples. The two Wisconsin sites were studied to assess the spatial differences, within one urban airshed, of trace metals present in atmospheric particulate matter. The measurements were evaluated to help understand source and seasonal trends in atmospheric concentrations of trace metals. ICP-MS methods have not been widely used in analyses of ambient aerosols for metals despite demonstrated advantages over traditional techniques. In a preliminary study, ICP-MS techniques were used to assess the leachability of trace metals present in atmospheric particulate matter samples and motor vehicle source samples in a synthetic lung fluid.
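The CMB step reduces to solving a linear mixing model for the source contributions. The profile matrix and contributions below are invented for illustration, and plain least squares stands in for the effective-variance weighted solution used in practice.

```python
import numpy as np

# Hypothetical source profiles (mass fraction of each metal in PM from each
# source): columns = [tailpipe, brake wear, tire wear, road dust], rows = metals.
F = np.array([[0.20, 0.01, 0.02, 0.05],   # e.g. Fe
              [0.01, 0.30, 0.01, 0.02],   # e.g. Cu (brake-wear tracer)
              [0.02, 0.02, 0.25, 0.03],   # e.g. Zn (tire-wear tracer)
              [0.05, 0.02, 0.02, 0.40]])  # e.g. Al (crustal tracer)

true_contrib = np.array([3.0, 1.0, 0.5, 2.0])   # µg/m³ from each source
ambient = F @ true_contrib                       # measured metal concentrations

# CMB solves ambient = F @ s for the source contributions s; a real CMB fit
# weights each species by its measurement uncertainty (effective variance).
s, *_ = np.linalg.lstsq(F, ambient, rcond=None)
```

With distinct tracer species per source (as above), the system is well conditioned and the contributions are recovered exactly in this noise-free sketch.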
Deformation of Copahue volcano: Inversion of InSAR data using a genetic algorithm
NASA Astrophysics Data System (ADS)
Velez, Maria Laura; Euillades, Pablo; Caselli, Alberto; Blanco, Mauro; Díaz, Jose Martínez
2011-04-01
The Copahue volcano is one of the most active volcanoes in Argentina, with eruptions having been reported as recently as 1992, 1995 and 2000. A deformation analysis using the Differential Synthetic Aperture Radar technique (DInSAR) was performed on the Copahue-Caviahue Volcanic Complex (CCVC) from Envisat radar images between 2002 and 2007. A deformation rate of approximately 2 cm/yr was calculated, located mostly on the north-eastern flank of Copahue volcano, and assumed to be constant during the period of the interferograms. The geometry of the source responsible for the deformation was evaluated from an inversion of the mean velocity deformation measurements using two different models based on pressure sources embedded in an elastic homogeneous half-space. A genetic algorithm was applied as an optimization tool to find the best-fit source. Results from inverse modelling indicate that a source located beneath the volcano edifice at a mean depth of 4 km is producing a volume change of approximately 0.0015 km³/yr. This source was analysed considering the available studies of the area, and a conceptual model of the volcanic-hydrothermal system was designed. The source of deformation is related to a depressurisation of the system that results from the release of magmatic fluids across the boundary between the brittle and plastic domains. These leakages are considered to be responsible for the weak phreatic eruptions recently registered at the Copahue volcano.
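The inversion strategy, a pressure point source fitted with a genetic algorithm, can be sketched as below. The half-space solution shown is one common form of the Mogi point-source model; the observation grid, parameter bounds, GA settings, and "observed" deformation are all hypothetical, not the Envisat data.

```python
import numpy as np

rng = np.random.default_rng(3)
nu = 0.25  # Poisson ratio

def mogi_uz(x, y, x0, y0, d, dV):
    """Vertical surface displacement of a Mogi point source (one common form,
    elastic homogeneous half-space).  Units: km and km^3 in, km out."""
    R3 = ((x - x0)**2 + (y - y0)**2 + d**2)**1.5
    return (1 - nu) / np.pi * dV * d / R3

# Synthetic "InSAR" observations on a surface grid (truth is hypothetical)
gx, gy = np.meshgrid(np.linspace(-10, 10, 15), np.linspace(-10, 10, 15))
truth = (0.0, 0.0, 4.0, 0.0015)          # x0, y0 [km], depth [km], dV [km^3/yr]
obs = mogi_uz(gx, gy, *truth)

def misfit(p):
    return np.sum((mogi_uz(gx, gy, *p) - obs)**2)

# Minimal genetic algorithm: elitism, blend crossover, Gaussian mutation
lo = np.array([-5.0, -5.0, 1.0, 1e-4])
hi = np.array([5.0, 5.0, 10.0, 1e-2])
pop = lo + (hi - lo) * rng.random((60, 4))
for gen in range(80):
    order = np.argsort([misfit(p) for p in pop])
    elite = pop[order[:10]]                               # keep the best
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = elite[rng.integers(10, size=2)]
        child = a + rng.random(4) * (b - a)               # blend crossover
        child += (hi - lo) * 0.02 * rng.standard_normal(4)  # mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([elite, children])

best = pop[np.argmin([misfit(p) for p in pop])]
```

The shape of the surface anomaly constrains the depth while its amplitude constrains the volume change, which is why both parameters can be recovered jointly.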
NASA Astrophysics Data System (ADS)
Muñoz-Potosi, A. F.; Granados-Agustín, F.; Campos-García, M.; Valdivieso-González, L. G.; Percino-Zacarias, M. E.
2017-11-01
Among the various techniques that can be used to assess the quality of optical surfaces, deflectometry evaluates the reflection experienced by rays impinging on a surface whose topography is under study. We propose the use of a screen spatial filter to select rays from a light source. The screen must be placed at a distance shorter than the radius of curvature of the surface under study. The location of the screen depends on the exit pupil of the system and the caustic area. The reflected rays are measured using an observation plane/screen/CCD located beyond the point of convergence of the rays. To implement an experimental design of the proposed technique and determine the topography of the surface under study, it is necessary to measure tilt, decentering and focus errors caused by mechanical misalignment, which could influence the results of this technique but are not related to the quality of the surface. The aim of this study is to analyze an ideal spherical surface with known radius of curvature to identify the variations introduced by such misalignment errors.
Corrosion/erosion detection of boiler tubes utilizing pulsed infrared imaging
NASA Astrophysics Data System (ADS)
Bales, Maurice J.; Bishop, Chip C.
1995-05-01
This paper discusses a new technique for locating and detecting wall thickness reduction in boiler tubes caused by erosion/corrosion. Traditional means for this type of defect detection utilize ultrasonics (UT) to perform point-by-point measurements at given intervals of the tube length, which requires extensive and costly shutdown or 'outage' time to complete the inspection, and has led to thin areas going undetected simply because they were located in between the sampling points. Pulsed infrared imaging (PII) can provide nearly 100% inspection of the tubes in a fraction of the time needed for UT. The IR system and heat source used in this study do not require any special access or fixed scaffolding, and can be remotely operated from a distance of up to 100 feet. This technique has been tried experimentally in a laboratory environment and verified in an actual field application. Since PII is a non-contact technique, considerable time and cost savings should be realized, as well as the ability to predict failures rather than repairing them once they have occurred.
Time-Frequency Analysis of the Dispersion of Lamb Modes
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Seale, Michael D.; Smith, Barry T.
1999-01-01
Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied for the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along, and perpendicular to, the fiber direction. In this case, the signals contained only the lowest order symmetric and antisymmetric modes. A least squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with experimental results.
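A pseudo Wigner-Ville distribution of the kind used here can be sketched in a few lines: for each time sample, window the instantaneous autocorrelation in lag and Fourier transform it. The window length and the chirp test signal below are illustrative choices, not the paper's waveforms.

```python
import numpy as np

def pseudo_wvd(x, win_len=63):
    """Pseudo Wigner-Ville distribution of a complex (analytic) signal x.

    For each time n, form the lag-windowed instantaneous autocorrelation
    r(k) = x[n+k] * conj(x[n-k]) * h(k) and FFT over the lag k.
    Note the usual WVD convention: a tone at normalized frequency f
    peaks at FFT bin ~ 2 * f * win_len.
    """
    n = len(x)
    h = np.hanning(win_len)          # smoothing window over lag
    half = win_len // 2
    W = np.zeros((win_len, n))
    for t in range(n):
        kmax = min(half, t, n - 1 - t)
        k = np.arange(-kmax, kmax + 1)
        r = np.zeros(win_len, dtype=complex)
        r[k + half] = x[t + k] * np.conj(x[t - k]) * h[k + half]
        W[:, t] = np.real(np.fft.fft(np.roll(r, -half)))  # r is Hermitian in k
    return W

# Demo: analytic linear chirp, instantaneous frequency 0.05 -> 0.20 cyc/sample;
# the ridge of the distribution should climb in frequency over time.
n = 512
t = np.arange(n)
f_inst = 0.05 + 0.15 * t / n
x = np.exp(1j * 2 * np.pi * np.cumsum(f_inst))
W = pseudo_wvd(x)
early, late = np.argmax(W[:, 100]), np.argmax(W[:, 400])
```

Tracking the ridge of `W` over time is exactly the kind of group-velocity dispersion readout the abstract describes, applied there to multimode Lamb waveforms rather than a single chirp.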
Locating very high energy gamma-ray sources with arcminute accuracy
NASA Technical Reports Server (NTRS)
Akerlof, C. W.; Cawley, M. F.; Chantell, M.; Harris, K.; Lawrence, M. A.; Fegan, D. J.; Lang, M. J.; Hillas, A. M.; Jennings, D. G.; Lamb, R. C.
1991-01-01
The angular accuracy of gamma-ray detectors is intrinsically limited by the physical processes involved in photon detection. Although a number of pointlike sources were detected by the COS B satellite, only two have been unambiguously identified by time signature with counterparts at longer wavelengths. By taking advantage of the extended longitudinal structure of VHE gamma-ray showers, measurements in the TeV energy range can pinpoint source coordinates to arcminute accuracy. This has now been demonstrated with new data analysis procedures applied to observations of the Crab Nebula using Cherenkov air shower imaging techniques. With two telescopes in coincidence, the individual event circular probable error will be 0.13 deg. The half-cone angle of the field of view is effectively 1 deg.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogge, J.
2006-11-20
The Spallation Neutron Source (SNS) accelerator systems will deliver a 1.0 GeV, 1.4 MW proton beam to a liquid mercury target for neutron scattering research. The SNS MEBT Emittance Harp consists of 16 X and 16 Y wires, located in close proximity to the RFQ, Source, and MEBT Choppers. Beam studies for source and LINAC commissioning required an overall increase in sensitivity for halo monitoring and measurement, and at the same time several severe noise sources had to be effectively removed from the harp signals. This paper is an overview of the design approach and techniques used in increasing gain and sensitivity while maintaining a large signal-to-noise ratio for the emittance scanner device. Included are a brief discussion of the identification of the noise sources, the mechanisms for transmission and pickup, how the signals were improved, and a summary of results.
Detecting and Locating Partial Discharges in Transformers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shourbaji, A.; Richards, R.; Kisner, R. A.
A collaborative research between the Oak Ridge National Laboratory (ORNL), the American Electric Power (AEP), the Tennessee Valley Authority (TVA), and the State of Ohio Energy Office (OEO) has been formed to conduct a feasibility study to detect and locate partial discharges (PDs) inside large transformers. The success of early detection of the PDs is necessary to avoid costly catastrophic failures that can occur if the process of PD is ignored. The detection method under this research is based on an innovative technology developed by ORNL researchers using optical methods to sense the acoustical energy produced by the PDs. ORNL researchers conducted experimental studies to detect PD using an optical fiber as an acoustic sensor capable of detecting acoustical disturbances at any point along its length. This technical approach also has the potential to locate the point at which the PD was sensed within the transformer. Several optical approaches were experimentally investigated, including interferometric detection of acoustical disturbances along the sensing fiber, light detection and ranging (LIDAR) techniques using frequency modulation continuous wave (FMCW), frequency modulated (FM) laser with a multimode fiber, FM laser with a single mode fiber, and amplitude modulated (AM) laser with a multimode fiber. The implementation of the optical fiber-based acoustic measurement technique would include installing a fiber inside a transformer, allowing real-time detection of PDs and determination of their locations. The fibers are nonconductive and very small (core plus cladding diameters are 125 μm for single-mode fibers and 230 μm for multimode fibers). The research identified the capabilities and limitations of using optical technology to detect and locate sources of acoustical disturbances such as PDs in large transformers.
Amplitude modulation techniques showed the most promising results and deserve further research to better quantify the technique's sensitivity and its ability to characterize a PD event. Other sensing techniques have also been identified, such as wavelength-shifting fiber optics and custom-fabricated fibers with special coatings.
NASA Astrophysics Data System (ADS)
Zeng, Lvming; Liu, Guodong; Yang, Diwu; Ren, Zhong; Huang, Zhen
2008-12-01
A near-infrared photoacoustic glucose monitoring system, which integrates dual-wavelength pulsed laser diode excitation with an eight-element planar annular array detection technique, was designed and fabricated in this study. It is noninvasive, inexpensive, and portable, with accurate location and a high signal-to-noise ratio. In the system, the exciting source is based on two laser diodes with wavelengths of 905 nm and 1550 nm and optical pulse energies of 20 μJ and 6 μJ, respectively. The laser beams are optically focused and jointly projected to a confocal point with a diameter of approximately 0.7 mm. A 7.5 MHz 8-element annular array transducer with a hollow structure was machined to capture the photoacoustic signal in backward mode. The captured signals excited in blood glucose are processed with a synthetic focusing algorithm to obtain a high signal-to-noise ratio and accurate location over a range of axial detection depths. The custom-made transducer with equal-area elements is coaxially collimated with the laser source to improve the photoacoustic excite/receive efficiency. In this paper, we introduce the photoacoustic theory, the receive/process technique, and the design method of the portable noninvasive photoacoustic glucose monitoring system, which can potentially be developed into a powerful diagnosis and treatment tool for diabetes mellitus.
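The synthetic focusing step is essentially delay-and-sum over the annular elements. The geometry, pulse model, noise level, and depth search below are hypothetical stand-ins for the 7.5 MHz 8-element transducer described, used only to show the principle.

```python
import numpy as np

rng = np.random.default_rng(4)
c, fs = 1500.0, 50e6                  # sound speed in tissue (m/s), sample rate (Hz)
radii = np.linspace(2e-3, 10e-3, 8)   # hypothetical annular element radii (m)
n = 4096

def element_delays(z):
    """One-way time of flight from an on-axis absorber at depth z to each ring."""
    return np.sqrt(radii**2 + z**2) / c

def make_channel(delay_s):
    """A Gaussian-modulated 7.5 MHz pulse arriving at delay_s, plus noise."""
    tau = np.arange(n) / fs - delay_s
    return (np.exp(-(tau * 7.5e6)**2) * np.cos(2 * np.pi * 7.5e6 * tau)
            + 0.1 * rng.standard_normal(n))

z_src = 20e-3                         # true absorber depth (m)
channels = np.array([make_channel(d) for d in element_delays(z_src)])

def focused_energy(z):
    """Delay-and-sum: align all channels for a trial depth z, sum, take energy."""
    shifts = np.round(element_delays(z) * fs).astype(int)
    aligned = np.array([np.roll(ch, -s) for ch, s in zip(channels, shifts)])
    return float(np.sum(aligned.sum(axis=0) ** 2))

depths = np.linspace(10e-3, 30e-3, 81)
z_est = depths[np.argmax([focused_energy(z) for z in depths])]
```

Only at the true depth do all eight channels add coherently, which is how the algorithm delivers both the SNR gain and the axial localization claimed in the abstract.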
Study of the atmospheric aerosol composition in equatorial Africa using PIXE as analytical technique
NASA Astrophysics Data System (ADS)
Maenhaut, W.; Akilimali, K.
1987-03-01
Small Nuclepore total filter holders and a single orifice 8-stage cascade impactor were used to collect atmospheric aerosol samples in Kinshasa, Zaire, and Butare, Rwanda. The samples were analyzed for about 20 elements by means of the PIXE technique. The results obtained for parallel samples, taken with two total filter holders and one cascade impactor, were generally in excellent agreement, suggesting that the different samplers collected very similar aerosol particle populations. Most elements were associated with a crustal dust dispersion source, which may include road dust dispersal. The Butare samples, particularly those collected during the night, were significantly influenced by biomass burning, as was deduced from enhanced filter blackness and noncrustal K levels. The pyrogenic component also contained P, S, Mn and Rb. Br and Pb were highly enriched at both locations, indicating that automotive sources had a strong influence on the aerosol composition.
Environmental monitoring using autonomous vehicles: a survey of recent searching techniques.
Bayat, Behzad; Crasta, Naveena; Crespi, Alessandro; Pascoal, António M; Ijspeert, Auke
2017-06-01
Autonomous vehicles are becoming an essential tool in a wide range of environmental applications that include ambient data acquisition, remote sensing, and mapping of the spatial extent of pollutant spills. Among these applications, pollution source localization has drawn increasing interest due to its scientific and commercial interest and the emergence of a new breed of robotic vehicles capable of operating in harsh environments without human supervision. The aim is to find the location of a region that is the source of a given substance of interest (e.g. a chemical pollutant at sea or a gas leakage in air) using a group of cooperative autonomous vehicles. Motivated by fast paced advances in this challenging area, this paper surveys recent advances in searching techniques that are at the core of environmental monitoring strategies using autonomous vehicles.
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We implemented this technique in Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides a better result in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
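The shared source representation (observed waveforms as a linear combination of point-source Green's functions) can be sketched as a stacked least-squares system; station selection then amounts to choosing which rows to keep. The Green's functions below are random stand-ins, not tsunami propagation runs, and the dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical dimensions: 8 stations, 100 time samples, 6 unit-source grid
# points.  G[i, :, j] is the waveform at station i from unit displacement at
# grid point j; real GFs come from tsunami propagation simulations.
n_sta, n_t, n_src = 8, 100, 6
G = rng.standard_normal((n_sta, n_t, n_src))

m_true = np.array([0.8, 0.0, 1.5, 0.0, -0.4, 0.2])  # initial sea-surface displacement
d = G @ m_true + 0.05 * rng.standard_normal((n_sta, n_t))

# Stack stations and time into one linear system d = A m and solve by LSQ;
# dropping a station just removes its block of rows from A and d.
A = G.reshape(-1, n_src)
m_est, *_ = np.linalg.lstsq(A, d.ravel(), rcond=None)
```

The adjoint sensitivity analysis in the paper ranks candidate stations by how much their rows constrain `m`, which is what identifies the 8-station optimal subset.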
NASA Astrophysics Data System (ADS)
Verlinden, Christopher M.
Controlled acoustic sources have typically been used for imaging the ocean. These sources can either be used to locate objects or characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS) which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.
Analysis and suppression of passive noise in surface microseismic data
NASA Astrophysics Data System (ADS)
Forghani-Arani, Farnoush
Surface microseismic surveys are gaining popularity in monitoring the hydraulic fracturing process. The effectiveness of these surveys, however, is strongly dependent on the signal-to-noise ratio of the acquired data. Cultural and industrial noise generated during hydraulic fracturing operations usually dominates the data, decreasing the effectiveness of these data in identifying and locating microseismic events. Hence, noise suppression is a critical step in surface microseismic monitoring. In this thesis, I focus on two important aspects of using surface-recorded microseismic data: first, I take advantage of the unwanted surface noise to understand its characteristics and to extract information about the propagation medium from the noise; second, I propose effective techniques to suppress the surface noise while preserving the waveforms that contain information about the source of the microseisms. Automated event identification on passive seismic data using only a few receivers is challenging, especially when the records span long durations. I introduce an automatic event identification algorithm designed specifically for detecting events in passive data acquired with a small number of receivers. I demonstrate that the conventional STA/LTA (Short-Term Average/Long-Term Average) algorithm is not sufficiently effective for event detection in the common case of low signal-to-noise ratio. A cross-correlation-based extension of the STA/LTA algorithm reveals even low signal-to-noise events that are not detectable with conventional STA/LTA. Surface microseismic data contain surface waves (generated primarily by hydraulic fracturing activities) and body waves in the form of microseismic events. It is challenging to analyze the surface waves on the recorded data directly because of the randomness of their sources and their unknown source signatures.
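The conventional STA/LTA detector discussed here can be sketched as follows; this is illustrative only (the sampling rate, window lengths, and trigger threshold are invented, not the thesis's settings).

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0                 # sampling rate (Hz), hypothetical
t = np.arange(0, 60.0, 1 / fs)

# Synthetic passive record: background noise plus one weak event at t = 30 s.
trace = 0.5 * rng.standard_normal(t.size)
trace += 3.0 * np.exp(-5 * (t - 30.0) ** 2) * np.sin(2 * np.pi * 10 * (t - 30.0))

def sta_lta(x, n_sta, n_lta):
    """Classic STA/LTA ratio computed on the squared trace with
    centered sliding windows."""
    e = x ** 2
    sta = np.convolve(e, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(e, np.ones(n_lta) / n_lta, mode="same")
    lta[lta < 1e-12] = 1e-12          # guard against division by zero
    return sta / lta

ratio = sta_lta(trace, n_sta=int(0.5 * fs), n_lta=int(10 * fs))
trigger = np.flatnonzero(ratio > 4.0)
print(t[trigger[0]], t[trigger[-1]])
```

For events too weak to raise this ratio above the noise floor, the cross-correlation extension described above correlates each candidate window against a master waveform, trading generality for sensitivity.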
I use seismic interferometry to extract the surface-wave arrivals. Interferometry is a powerful tool for extracting waves (both body waves and surface waves) that propagate from any receiver in the array (called a pseudo source) to the other receivers across the array. Since most of the noise sources in surface microseismic data lie on the surface, seismic interferometry yields pseudo-source gathers dominated by surface-wave energy. The dispersive characteristics of these surface waves are important properties that can be used to extract the information necessary for suppressing them. I demonstrate the application of interferometry to surface passive data recorded during the hydraulic fracturing operation of a tight gas reservoir and extract the dispersion properties of surface waves corresponding to a pseudo-shot gather. Comparison of the dispersion characteristics of the surface waves from the pseudo-shot gather with those of an active shot gather shows interesting similarities and differences. The dispersion character (e.g. velocity change with frequency) of the fundamental mode was observed to be the same for both the active and passive data. However, for the higher-mode surface waves, the dispersion properties are extracted over different frequency ranges. Conventional noise-suppression techniques for passive data are mostly stacking-based: they rely on enhancing the signal amplitude by stacking the waveforms across receivers, and are unable to preserve the waveforms at the individual receivers that are necessary for estimating the microseismic source location and source mechanism. Here, I introduce a technique based on the tau-p transform that effectively identifies and separates microseismic events from surface-wave noise in the tau-p domain. This technique is superior to conventional stacking-based noise-suppression techniques because it preserves the waveforms at individual receivers.
Application of this methodology to microseismic events with isotropic and double-couple source mechanisms shows substantial improvement in the signal-to-noise ratio. Imaging of the processed field data also shows improved resolution of the hypocenter location of the microseismic source. In the case of a double-couple source mechanism, I suggest two approaches for unifying the polarities at the receivers: a cross-correlation approach and a semblance-based prediction approach. The semblance-based approach is more effective at unifying the polarities, especially for low signal-to-noise-ratio data.
Sensor Fusion Techniques for Phased-Array Eddy Current and Phased-Array Ultrasound Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrowood, Lloyd F.
Sensor (or data) fusion is the process of integrating multiple data sources to produce more consistent, accurate, and comprehensive information than is provided by any single data source. Sensor fusion may also be used to combine multiple signals from a single modality to improve the performance of a particular inspection technique. Industrial nondestructive testing may utilize multiple sensors to acquire inspection data, depending upon the object under inspection and the anticipated types of defects to be identified. Sensor fusion can be performed at various levels of signal abstraction, each having its strengths and weaknesses. A multimodal data fusion strategy first proposed by Heideklang and Shokouhi, which combines spatially scattered detection locations to improve detection performance for surface-breaking and near-surface cracks in ferromagnetic metals, is demonstrated using a surface inspection example and is then extended to volumetric inspections. Utilizing data acquired with an Olympus Omniscan MX2 from both phased-array eddy current and ultrasound probes on test phantoms, single-level and multilevel fusion techniques are employed to integrate signals from the two modalities. Preliminary results demonstrate how confidence in defect identification and interpretation benefits from sensor fusion techniques. Lastly, techniques for integrating data into radiographic and volumetric imagery from computed tomography are described and results are presented.
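As a toy illustration of decision-level fusion (not the strategy of Heideklang and Shokouhi, and with entirely fabricated score maps), a product rule over per-pixel confidence maps from two modalities sharpens defect detection relative to either modality alone:

```python
import numpy as np

rng = np.random.default_rng(9)
shape = (50, 50)                     # scan grid over the inspected surface

# Ground truth: a small rectangular defect region.
defect = np.zeros(shape, dtype=bool)
defect[20:25, 30:35] = True

# Fabricated per-pixel confidence maps for the two modalities: background
# scores in [0, 0.2], elevated by 0.6 over the defect.
eddy_current = 0.2 * rng.random(shape) + 0.6 * defect
ultrasound = 0.2 * rng.random(shape) + 0.6 * defect

# Product-rule fusion: both modalities must agree for a high fused score,
# which suppresses single-modality false alarms.
fused = eddy_current * ultrasound
detected = fused > 0.25

print(int(detected.sum()))
```

Real fusion operates on registered signals with very different physics (lift-off effects for eddy current, beam geometry for ultrasound), so registration and per-modality calibration dominate the practical effort.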
Satellite radar interferometry measures deformation at Okmok Volcano
Lu, Zhong; Mann, Dorte; Freymueller, Jeff
1998-01-01
The center of the Okmok caldera in Alaska subsided 140 cm as a result of its February–April 1997 eruption, according to satellite data from ERS-1 and ERS-2 synthetic aperture radar (SAR) interferometry. The inferred deflationary source was located 2.7 km beneath the approximate center of the caldera using a point-source deflation model. Researchers believe this source is a magma chamber about 5 km from the eruptive vent. During the 3 years before the eruption, the center of the caldera uplifted by about 23 cm, which researchers believe was pre-eruptive inflation of the magma chamber. Scientists say such measurements demonstrate that radar interferometry is a promising spaceborne technique for monitoring remote volcanoes. Frequent, routine acquisition of SAR interferometry images could make near-real-time monitoring at such volcanoes the rule, aiding in eruption forecasting.
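The point-source deflation model referred to here is the classical Mogi model. A minimal sketch, assuming a Poisson ratio of 0.25 and treating the 140 cm central subsidence and 2.7 km depth as given, inverts for the implied volume change and predicts the radial subsidence profile:

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of the Mogi point source: volume
    change dV at the given depth in an elastic half-space."""
    return (1 - nu) * dV * depth / (np.pi * (r**2 + depth**2) ** 1.5)

depth = 2700.0        # source depth (m), from the interferometric result
uz_center = -1.40     # observed co-eruptive subsidence at the centre (m)

# Volume change that reproduces the 140 cm central subsidence.
dV = uz_center * np.pi * depth**2 / (1 - 0.25)
print(f"implied volume change: {dV / 1e6:.1f} million m^3")

# Predicted subsidence at a few radial distances from the centre.
r = np.array([0.0, 1e3, 2e3, 5e3, 10e3])
print(np.round(mogi_uz(r, depth, dV), 3))
```

Fitting this radial pattern to the full interferogram, rather than to a single point, is what constrains the source depth in practice.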
Wampler, Peter J; Rediske, Richard R; Molla, Azizur R
2013-01-18
A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data, and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel, which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes were initially mapped, and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of the mapped homes visited declined to be interviewed. Of the homes visited, 16.4% were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field-locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent, and only rarely was local knowledge required to identify and locate households.
This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods.
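The randomisation step reduces to a few lines in any scripting language. A hypothetical sketch (coordinates and ID scheme invented; the study used Microsoft Excel rather than Python):

```python
import random

random.seed(42)   # fix the draw, as a survey protocol would

# Hypothetical digitised home locations (ID plus WGS84 coordinates); in
# the study these came from Google Earth imagery exported to ArcMap.
homes = [{"id": i,
          "lat": 18.80 + random.random() * 0.10,
          "lon": -72.55 + random.random() * 0.10} for i in range(537)]

# Randomised subset of potential survey locations.
sample = random.sample(homes, k=96)

# Waypoints in the (name, lat, lon) form a handheld GPS unit can load.
waypoints = [(f"HM{h['id']:03d}", h["lat"], h["lon"]) for h in sample]
print(len(waypoints))
```

The essential property, as in the study, is that selection probability is uniform over the mapped frame, so occupancy and refusal rates can be reported against a well-defined denominator.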
Assessment of ground-water contamination in the alluvial aquifer near West Point, Kentucky
Lyverse, M.A.; Unthank, M.D.
1988-01-01
Well inventories, water-level measurements, groundwater quality samples, surface geophysical techniques (specifically, electromagnetic techniques), and test drilling were used to investigate the extent and sources of groundwater contamination in the alluvial aquifer near West Point, Kentucky. This aquifer serves as the principal source of drinking water for over 50,000 people. Groundwater flow in the alluvial aquifer is generally unconfined and moves in a northerly direction toward the Ohio River. Two large public-supply well fields and numerous domestic wells are located in this natural flow path. High concentrations of chloride in groundwater have resulted in the abandonment of several public-supply wells in the West Point area. Chloride concentrations in water samples collected for this study were as high as 11,000 mg/L. Electromagnetic techniques indicated, and test drilling later confirmed, that the likely source of chloride in well water was improperly plugged or unplugged abandoned oil and gas exploration wells. The potential for chloride contamination of wells exists in the study area and is related to proximity to improperly abandoned oil and gas exploration wells and to gradients established by drawdowns associated with pumped wells. Periodic use of surface geophysical methods, in combination with added observation wells, could be used to monitor significant changes in groundwater quality related to chloride contamination. (USGS)
Hydro-fractured reservoirs: A study using double-difference location techniques
NASA Astrophysics Data System (ADS)
Kahn, Dan Scott
The mapping of induced seismicity in enhanced geothermal systems presents the best tool available for understanding the resulting hydro-fractured reservoir. In this thesis, two geothermal systems are studied: one in Krafla, Iceland, and the other in Basel, Switzerland. The purpose of the Krafla survey was to determine the relation between water injection into the fault system and the resulting earthquakes and fluid pressure in the subsurface crack system. The epicenters obtained from analyzing the seismic data are aligned along the border of a high-resistivity zone ~2500 meters below the injection well. Further magnetotelluric/seismic correlation was seen in the polarization of the cracks inferred from shear-wave splitting. The purpose of the Basel project was to examine the creation of a reservoir by the initial stimulation, using an injection well bored to 5000 meters. This stimulation triggered an M3.4 event, exceeding the range of event sizes commonly incurred in hydro-fractured reservoirs. To monitor the seismic activity, 6 seismometer sondes were deployed at depths from 317 to 2740 meters below the ground surface. During the seven-day stimulation period, over 13,000 events were recorded and approximately 3,300 were located. These events were first located by single-difference techniques. Subsequently, after calculating their cross-correlation coefficients, clusters of events were relocated using a double-difference algorithm. The event locations support the existence of a narrow reservoir spreading from the injection well. Analysis of the seismic data indicates that the reservoir grew at a uniform rate punctuated by fluctuations which occurred at the times of larger events, perhaps caused by sudden changes in pressure. The orientation and size of the main fracture plane were found by determining focal mechanisms and locating events similar to the M3.4 event.
To address the question of whether smaller quakes are simply larger quakes scaled down, the data set was analyzed to determine whether scaling relations held for the source parameters, including seismic moment, source dimension, stress drop, radiated energy and apparent stress. It was found that there was a breakdown in scaling for smaller quakes.
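The self-similar expectation against which such a scaling breakdown is judged can be made concrete with the standard Brune source model; this is a textbook relation, not the thesis's processing, and the 3 MPa reference stress drop and 3500 m/s shear-wave speed are illustrative assumptions.

```python
import numpy as np

BETA = 3500.0            # shear-wave speed at the source (m/s), assumed

def brune_stress_drop(M0, fc, beta=BETA):
    """Stress drop (Pa) from seismic moment M0 (N m) and corner frequency
    fc (Hz), via the Brune source radius r = 2.34*beta/(2*pi*fc) and
    stress_drop = (7/16) * M0 / r**3."""
    r = 2.34 * beta / (2 * np.pi * fc)
    return 7.0 / 16.0 * M0 / r**3

# Self-similar scaling: constant stress drop (3 MPa here) implies that
# the corner frequency falls off as M0**(-1/3).
dsigma = 3e6
M0 = np.logspace(13, 17, 5)                   # moments spanning ~M 2.6-5.3
r = (7.0 * M0 / (16.0 * dsigma)) ** (1.0 / 3.0)
fc = 2.34 * BETA / (2 * np.pi * r)

for m, f in zip(M0, fc):
    print(f"M0={m:9.1e} N m  fc={f:6.2f} Hz  "
          f"stress drop={brune_stress_drop(m, f) / 1e6:4.2f} MPa")
```

A scaling breakdown of the kind reported here would appear as a systematic drop of the recovered stress drop (or of radiated energy per unit moment) below this constant line at small magnitudes.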
Earthquake Monitoring with the MyShake Global Smartphone Seismic Network
NASA Astrophysics Data System (ADS)
Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.
2017-12-01
Smartphone arrays have the potential to significantly improve seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from the communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone located <10 km from the epicenter exceeds 70%. Due to the sensors' self-noise, smaller-magnitude events at short epicentral distances are very difficult to detect. To increase the signal-to-noise ratio, we employ array back-projection techniques on continuous data recorded by thousands of phones. In this class of methods, the array is used as a spatial filter that suppresses signals emitted from shallow noise sources. Filtered traces are stacked to further enhance seismic signals from deep sources. We benchmark our technique against traditional location algorithms using recordings from California, a region with a large MyShake user database. We find that locations derived from back-projection images of M 3 events recorded by >20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels.
To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km² are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
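A minimal sketch of the back-projection idea (not the MyShake pipeline: constant velocity, synthetic pulses, and invented geometry): traces are shifted by the predicted travel times to each grid node and stacked, and the node giving the most coherent stack is taken as the epicenter.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 50.0
v = 3000.0                        # assumed uniform wave speed (m/s)
t = np.arange(0, 20.0, 1 / fs)

# Invented geometry: 30 "phones" in a 20 x 20 km box, one true source.
phones = rng.uniform(0, 20e3, size=(30, 2))
src = np.array([12e3, 7e3])

# Single-phone records: a weak pulse delayed by travel time, buried in noise.
dists = np.linalg.norm(phones - src, axis=1)
data = np.array([np.exp(-40 * (t - 2.0 - d / v) ** 2)
                 + 0.8 * rng.standard_normal(t.size) for d in dists])

# Back-projection: undo the predicted move-out for each grid node and
# stack; only nodes near the true source stack coherently.
xs = np.linspace(0, 20e3, 21)
best, best_val = None, -np.inf
for x in xs:
    for y in xs:
        shifts = (np.linalg.norm(phones - [x, y], axis=1) / v * fs).astype(int)
        stack = sum(np.roll(data[i], -shifts[i]) for i in range(len(phones)))
        if stack.max() > best_val:
            best, best_val = (float(x), float(y)), stack.max()

print(best)
```

The stacking gain grows roughly as the square root of the number of phones, which is why the abstract's conclusions hinge on user density.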
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is slow relative to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. A nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramér-Rao error lower bounds are extended to the multidimensional problem treated here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
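A compact narrowband illustration of the subspace idea behind MUSIC, here for a uniform linear array rather than an MEG/EEG sensor geometry (all parameters invented): the sample covariance is split into signal and noise subspaces, and source directions appear as peaks of the pseudospectrum.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 8                                   # sensors in a uniform linear array
d = 0.5                                 # element spacing in wavelengths
true_doas = np.array([-20.0, 35.0])     # source directions (degrees)
n_snap = 200

def steering(theta_deg):
    """Narrowband ULA steering vector."""
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

# Simulated snapshots: two uncorrelated sources plus sensor noise.
A = np.column_stack([steering(a) for a in true_doas])
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
X = A @ S + N

# MUSIC: eigendecompose the sample covariance, keep the noise subspace,
# and scan directions for minima of the projection onto that subspace.
R = X @ X.conj().T / n_snap
_, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, : m - 2]                      # noise subspace (2 sources assumed known)

scan = np.linspace(-90.0, 90.0, 721)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in scan])

# Pick the two highest local maxima of the pseudospectrum.
is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
cand, height = scan[1:-1][is_peak], p[1:-1][is_peak]
doa_est = np.sort(cand[np.argsort(height)[-2:]])
print(doa_est)
```

The thesis's adaptation replaces the steering vector with the quasi-static lead field of a dipole, so the one-dimensional angle scan becomes a scan over three-dimensional source positions.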
Fox, G A; Sheshukov, A; Cruse, R; Kolar, R L; Guertault, L; Gesch, K R; Dutnell, R C
2016-05-01
The future reliance on water supply and flood control reservoirs across the globe will continue to expand, especially under a variable climate. As the inventory of new potential dam sites is shrinking, construction of additional reservoirs is less likely compared to simultaneous flow and sediment management in existing reservoirs. One aspect of this sediment management is related to the control of upstream sediment sources. However, key research questions remain regarding upstream sediment loading rates. Highlighted in this article are research needs relative to measuring and predicting sediment transport rates and loading due to streambank and gully erosion within a watershed. For example, additional instream sediment transport and reservoir sedimentation rate measurements are needed across a range of watershed conditions, reservoir sizes, and geographical locations. More research is needed to understand the intricate linkage between upland practices and instream response. A need still exists to clarify the benefit of restoration or stabilization of a small reach within a channel system or maturing gully on total watershed sediment load. We need to better understand the intricate interactions between hydrological and erosion processes to improve prediction, location, and timing of streambank erosion and failure and gully formation. Also, improved process-based measurement and prediction techniques are needed that balance data requirements regarding cohesive soil erodibility and stability as compared to simpler topographic indices for gullies or stream classification systems. Such techniques will allow the research community to address the benefit of various conservation and/or stabilization practices at targeted locations within watersheds.
NASA Astrophysics Data System (ADS)
Krings, Thomas; Neininger, Bruno; Gerilowski, Konstantin; Krautwurst, Sven; Buchwitz, Michael; Burrows, John P.; Lindemann, Carsten; Ruhtz, Thomas; Schüttemeyer, Dirk; Bovensmann, Heinrich
2018-02-01
Reliable techniques to infer greenhouse gas emission rates from localised sources require accurate measurement and inversion approaches. In this study, airborne remote sensing observations of CO2 by the MAMAP instrument and airborne in situ measurements are used to infer emission estimates of carbon dioxide released from a cluster of coal-fired power plants. The study area is complex because the sources are located in close proximity and their carbon dioxide plumes overlap. For the analysis of the in situ data, a mass balance approach is described and applied, whereas for the remote sensing observations an inverse Gaussian plume model is used in addition to a mass balance technique. A comparison between methods shows that the results of all methods agree to within 10 % or better, with uncertainties of 10 to 30 %, for cases in which in situ measurements were made over the complete vertical extent of the plume. The computed emissions for individual power plants are in agreement with results derived from emission factors and energy production data for the time of the overflight.
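In its simplest form, the mass balance approach integrates the column enhancement across the plume and multiplies by the wind speed. A toy closed-loop sketch (all numbers invented; single transect, uniform wind normal to it):

```python
import numpy as np

u = 8.0                                  # wind speed normal to the transect (m/s)
y = np.linspace(-4000.0, 4000.0, 161)    # crosswind coordinate (m)
dy = y[1] - y[0]

# Hypothetical CO2 column enhancement (kg m^-2): a Gaussian crosswind
# profile generated from a known source strength, to close the loop.
E_true = 500.0                           # source strength (kg/s)
sigma = 800.0                            # plume half-width (m)
delta_x = (E_true / (u * sigma * np.sqrt(2 * np.pi))
           * np.exp(-y**2 / (2 * sigma**2)))

# Mass balance estimate: integrate the enhancement, multiply by wind speed.
E_est = u * np.sum(delta_x) * dy
print(f"estimated emission rate: {E_est:.1f} kg/s")
```

The hard parts in the real analysis, separating overlapping plumes and establishing the effective wind over the full vertical plume extent, are precisely what this toy omits.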
New study on the 1941 Gloria Fault earthquake and tsunami
NASA Astrophysics Data System (ADS)
Baptista, Maria Ana; Miranda, Jorge Miguel; Batlló, Josep; Lisboa, Filipe; Luis, Joaquim; Maciá, Ramon
2016-08-01
The M ~ 8.3-8.4 earthquake of 25 November 1941 was one of the largest submarine strike-slip earthquakes ever recorded in the Northeast (NE) Atlantic basin. This event occurred along the Eurasia-Nubia plate boundary between the Azores and the Strait of Gibraltar. After the earthquake, tide stations in the NE Atlantic recorded a small tsunami with maximum peak-to-trough amplitudes of 40 cm in the Azores and Madeira islands. In this study, we present a re-evaluation of the earthquake epicentre location using seismological data not included in previous studies. We invert the tsunami travel times to obtain a preliminary tsunami source location using the backward ray tracing (BRT) technique. We invert the tsunami waveforms to infer the initial sea surface displacement using empirical Green's functions, without prior assumptions about the geometry of the source. The BRT simulation locates the tsunami source quite close to the new epicentre, suggesting that the co-seismic deformation of the earthquake induced the tsunami. The waveform inversion of the tsunami data favours the conclusion that the earthquake ruptured an approximately 160 km segment of the plate boundary, in the eastern section of the Gloria Fault between -20.249° and -18.630° E. The results presented here contribute to the evaluation of tsunami hazard in the Northeast Atlantic basin.
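A minimal sketch of the BRT idea under a uniform-depth, straight-ray assumption (station geometry and all numbers invented): each observed travel time is propagated back from its station at the shallow-water wave speed, and the source is the grid node where the back-propagated fronts agree.

```python
import numpy as np

g = 9.81
h = 4000.0                      # uniform ocean depth (m), simplifying assumption
c = np.sqrt(g * h)              # shallow-water (long-wave) speed, ~198 m/s

# Hypothetical tide-gauge positions (m) and a true source location.
stations = np.array([[0, 0], [400, 50], [150, 300], [500, 400]], float) * 1e3
src = np.array([250e3, 180e3])
t_obs = np.linalg.norm(stations - src, axis=1) / c   # observed travel times (s)

# Backward ray tracing idea: the best-fitting source is the grid node
# whose predicted travel times match the observations for all stations.
xs = np.linspace(0, 600e3, 121)
X, Y = np.meshgrid(xs, xs)
misfit = np.zeros_like(X)
for s, tt in zip(stations, t_obs):
    misfit += (np.hypot(X - s[0], Y - s[1]) / c - tt) ** 2

i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
print(X[i, j] / 1e3, Y[i, j] / 1e3)   # best-fitting source location (km)
```

Real BRT traces rays through variable bathymetry, so travel times come from a ray or wavefront solver rather than straight-line distances, but the intersection logic is the same.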
Coal's role in California's energy needs
NASA Technical Reports Server (NTRS)
Daines, N. H.
1978-01-01
California's post-industrial society demands confidence in the energy supply system as an essential ingredient for social harmony and adequate job-creating capital investment. Confidence requires policies which balance supply and demand using believable methods with adequate allowance for the unexpected, rely on diverse sources and locations, respect our environment, sustain our individual freedoms, and provide opportunities for economic mobility. Coal will play only a part, but an important part, in a multifaceted energy policy using numerous energy sources and systems, conservation techniques, and cooperating societal institutions. Today's extensive and challenging research and development provides the foundation for future technologies which will further reduce the environmental effects associated with coal.
Peptide Fragmentation Induced by Radicals at Atmospheric Pressure
Vilkov, Andrey N.; Laiko, Victor V.; Doroshenko, Vladimir M.
2009-01-01
A novel ion dissociation technique capable of efficiently fragmenting peptides at essentially atmospheric pressure is developed. The fragmentation patterns observed often contain c-type fragments that are specific to ECD/ETD, along with the y-/b-type fragments that are specific to CAD. In the presented experimental setup, ion fragmentation takes place within a flow reactor located in the atmospheric-pressure region between the ion source and the mass spectrometer. According to the proposed mechanism, the fragmentation results from the interaction of ESI-generated analyte ions with gas-phase radical species produced by a corona discharge source. PMID:19034885
Fast fluorescence techniques for crystallography beamlines
Stepanov, Sergey; Hilgart, Mark; Yoder, Derek W.; Makarov, Oleg; Becker, Michael; Sanishvili, Ruslan; Ogata, Craig M.; Venugopalan, Nagarajan; Aragão, David; Caffrey, Martin; Smith, Janet L.; Fischetti, Robert F.
2011-01-01
This paper reports on several developments of X-ray fluorescence techniques for macromolecular crystallography recently implemented at the National Institute of General Medical Sciences and National Cancer Institute beamlines at the Advanced Photon Source. These include (i) three-band on-the-fly energy scanning around absorption edges with adaptive positioning of the fine-step band calculated from a coarse pass; (ii) on-the-fly X-ray fluorescence rastering over rectangular domains for locating small and invisible crystals with a shuttle-scanning option for increased speed; (iii) fluorescence rastering over user-specified multi-segmented polygons; and (iv) automatic signal optimization for reduced radiation damage of samples. PMID:21808424
Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling
NASA Astrophysics Data System (ADS)
Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea
2017-12-01
The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years, four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km), and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, given the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties for the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as tightly as possible. Seismicity patterns of the repeated earthquakes in space, time, and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.
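The spectral ratio technique exploits the fact that co-located events share the path and site response, which therefore cancels in the ratio of their spectra. A synthetic sketch using Brune-model source spectra (all parameters invented, not from this study):

```python
import numpy as np

fs = 100.0
nfft = 2048
f = np.fft.rfftfreq(nfft, 1 / fs)

def brune_spec(omega0, fc):
    """Brune omega-square source displacement spectrum."""
    return omega0 / (1 + (f / fc) ** 2)

# Two co-located events share the (unknown) path/site response exactly.
path = np.exp(-f / 8.0) * (1.0 + 0.5 * np.sin(0.7 * f))  # arbitrary smooth path
big = brune_spec(100.0, 1.5) * path     # larger event: high moment, low fc
small = brune_spec(5.0, 6.0) * path     # smaller co-located event (the eGF)

# The spectral ratio cancels the common path term, leaving only the ratio
# of the two source spectra: a low-frequency plateau at the moment ratio
# (here 20) with corners at the two corner frequencies.
ratio = big / small
pure_source_ratio = brune_spec(100.0, 1.5) / brune_spec(5.0, 6.0)

print(float(ratio[0]))
```

Fitting the plateau level and the two corners of such a ratio is what constrains the relative moments and source radii of a repeating-event pair; empirical Green's function deconvolution is the time-domain counterpart of the same cancellation.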
NASA Astrophysics Data System (ADS)
Ghosh, A.; LI, B.
2016-12-01
The Alaska-Aleutian subduction zone is one of the most seismically active subduction zones on the planet. It is characterized by remarkable along-strike variations in seismic behavior and more than 50 active volcanoes, and it presents a unique opportunity to serve as a natural laboratory for studying subduction zone processes, including fault dynamics. Yet details of the seismicity pattern, the spatiotemporal distribution of slow earthquakes, the nature of the interaction between slow and fast earthquakes, and their implications for tectonic behavior remain unknown. We use a hybrid seismic network approach, installing 3 mini seismic arrays and 5 stand-alone stations to simultaneously image the subduction fault and the nearby volcanic system (Makushin). The arrays and stations are strategically located on Unalaska Island, where prolific tremor activity was detected and located by a solo pilot array in summer 2012. The hybrid network was operational in continuous mode between summer 2015 and summer 2016. One of the three arrays started operating in summer 2014 and provides additional data covering a longer time span. The pilot array on Akutan Island recorded continuous seismic data for 2 months. An automatic beam-backprojection analysis detects almost daily tremor activity, averaging more than an hour per day. We imaged two active sources separated by a tremor gap. The western source, directly under Unalaska Island, shows the most prolific activity, with a hint of steady migration. In addition, we are able to identify more than 10 families of low-frequency earthquakes (LFEs) in this area. They are located within the tremor source area imaged by the beam-backprojection technique. Application of a matched-filter technique reveals that the intervals between LFE occurrences are shorter during tremor activity and longer during quiet periods. We expect to present new results from freshly obtained data.
The experiment A-cubed is illuminating subduction zone processes under Unalaska Island in unprecedented detail.
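Beam-backprojection of the kind described in the abstract above is, at its core, a delay-and-sum grid search over slowness. The sketch below is a minimal illustration with a synthetic transient crossing a small array; the array geometry, sample rate, and slowness values are invented for the example and are not taken from the study.

```python
import numpy as np

def beam_power(traces, coords, fs, sx, sy):
    """Power of the delay-and-sum beam for a trial slowness (sx, sy)."""
    beam = np.zeros(traces.shape[1])
    for tr, (x, y) in zip(traces, coords):
        shift = int(round((sx * x + sy * y) * fs))  # plane-wave delay, samples
        beam += np.roll(tr, -shift)                 # undo the delay, then stack
    beam /= len(traces)
    return float(np.mean(beam ** 2))

# Synthetic transient crossing a 4-element array at a known slowness
fs = 100.0                                          # Hz
coords = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
t = np.arange(0, 10, 1 / fs)
s_true = (2e-4, 1e-4)                               # s/m
pulse = lambda tt: np.exp(-((tt - 5.0) / 0.2) ** 2)
traces = np.array([pulse(t - (x * s_true[0] + y * s_true[1])) for x, y in coords])

# Grid search over slowness: the beam power peaks at the true slowness,
# which fixes the back-azimuth and apparent velocity of the tremor signal
grid = np.linspace(-4e-4, 4e-4, 41)
best = max(((beam_power(traces, coords, fs, sx, sy), sx, sy)
            for sx in grid for sy in grid))
```

In practice the stacking is done on envelopes of band-passed seismograms in sliding windows, and the peak beam power is mapped back onto a geographic source grid.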
Radio weak lensing shear measurement in the visibility domain - II. Source extraction
NASA Astrophysics Data System (ADS)
Rivi, M.; Miller, L.
2018-05-01
This paper extends the method introduced in Rivi et al. (2016b) to measure galaxy ellipticities in the visibility domain for radio weak lensing surveys. In that paper, we focused on the development and testing of the method for the simple case of individual galaxies located at the phase centre, and proposed to extend it to the realistic case of many sources in the field of view by isolating the visibilities of each source with a faceting technique. In this second paper, we present a detailed algorithm for source extraction in the visibility domain and show its effectiveness as a function of the source number density by running simulations of SKA1-MID observations in the band 950-1150 MHz and comparing original and measured values of galaxies' ellipticities. Shear measurements from a realistic population of 10^4 galaxies randomly located in a field of view of 1 deg^2 (i.e. the source density expected for the current radio weak lensing survey proposal with SKA1) are also performed. At SNR ≥ 10, the multiplicative bias is only a factor 1.5 worse than that found when analysing individual sources, and is still comparable to the bias values reported for similar measurement methods at optical wavelengths. The additive bias is unchanged from the case of individual sources, but it is significantly larger than is typically found in optical surveys. This bias depends on the shape of the uv coverage, and we suggest that a uv-plane weighting scheme producing a more isotropic shape could reduce and control it.
NASA Astrophysics Data System (ADS)
Gupta, I.; Chan, W.; Wagner, R.
2005-12-01
Several recent studies of the generation of low-frequency Lg from explosions indicate that the Lg wavetrain from explosions contains significant contributions from (1) the scattering of explosion-generated Rg into S and (2) direct S waves from the non-spherical spall source associated with a buried explosion. The pronounced spectral nulls observed in Lg spectra of Yucca Flats (NTS) and Semipalatinsk explosions (Patton and Taylor, 1995; Gupta et al., 1997) are related to Rg excitation caused by spall-related block motions in a conical volume over the shot point, which may be approximately represented by a compensated linear vector dipole (CLVD) source (Patton et al., 2005). Frequency-dependent excitation of Rg waves should be imprinted on all scattered P, S and Lg waves. A spectrogram may be considered a three-dimensional matrix of numbers providing amplitude and frequency information for each point in the time series. We found difference spectrograms, derived from a normal explosion and a closely located over-buried shot recorded at a common station, to be remarkably useful for understanding the origin and spectral content of various regional phases. This technique allows isolation of source characteristics, essentially free from path and recording-site effects, since the over-buried shot acts as the empirical Green's function. Application of this methodology to several pairs of closely located explosions shows that the scattering of explosion-generated Rg makes a significant contribution not only to Lg and its coda but also to two other regional phases, Pg (presumably by the scattering of Rg into P) and Sn. The scattered energy, identified by the presence of a spectral null at the appropriate frequency, generally appears to be more prominent in the somewhat later-arriving sections of Pg, Sn, and Lg than in the initial part.
Difference spectrograms appear to provide a powerful new technique for understanding the mechanism of near-source scattering of explosion-generated Rg and its contribution to various regional phases.
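The difference-spectrogram idea rests on the fact that when two co-located events are recorded at a common station, path and site terms are shared and cancel when log spectrograms are subtracted, leaving source differences such as a spectral null. The toy sketch below illustrates this with two synthetic signals, one carrying a spectral notch; every parameter is invented for the illustration.

```python
import numpy as np
from scipy.signal import spectrogram

def difference_spectrogram(x, y, fs, nperseg=256):
    """Log-power spectrogram of x minus that of y (identical geometry)."""
    f, t, sx = spectrogram(x, fs=fs, window='hann', nperseg=nperseg)
    _, _, sy = spectrogram(y, fs=fs, window='hann', nperseg=nperseg)
    eps = 1e-12
    return f, t, 10 * np.log10((sx + eps) / (sy + eps))

# Two "recordings": identical noise, but y has a 3-8 Hz spectral notch,
# standing in for a source-related spectral null
fs = 100.0
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
spec[(freqs >= 3) & (freqs <= 8)] = 0.0
y = np.fft.irfft(spec, n=len(x))

f, t, d = difference_spectrogram(x, y, fs)
notch = d[(f > 5) & (f < 6)].mean()   # strongly positive: y lacks this band
ref = d[(f > 14) & (f < 16)].mean()   # near zero: content shared by both
```

The shared content subtracts to roughly 0 dB everywhere except in the notched band, which stands out in the difference, mirroring how the null frequency is isolated in the explosion-pair analysis.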
A closer look at Galileo Thermal data from a Possible Plume Source North of Pwyll, Europa
NASA Astrophysics Data System (ADS)
Rathbun, J. A.; Spencer, J. R.
2017-12-01
Two different observing techniques, both employing the Hubble Space Telescope, have found evidence for plumes just off Europa's limb (Roth et al., 2014; Sparks et al., 2016). More recent observations using the Jovian transit technique enabled Sparks et al. (2017) to determine that one location was the source of two separate detections: just north of the impact crater Pwyll at 275 W, -16 S, a region we informally call North Pwyll. This source was detected on March 17, 2014 and February 22, 2016. Coinciding with this source is a broad thermal anomaly observed by the Galileo Photopolarimeter-Radiometer (PPR) during the Europan night (Sparks et al., 2017; Spencer et al., 1999). Rathbun et al. (2010) determined detection limits for the PPR observations and found that a 100 km2 hotspot in the vicinity of North Pwyll would have been detected if it had a temperature above about 150 K. We took a closer look at the PPR data and found five PPR observations that include the North Pwyll region, at local times varying from midway between midnight and sunrise (the data already published) through midway between sunrise and noon. While at least one observation near noon is required for a complete measurement of the diurnal variation, we were able to fit a thermal model to the available data and found that endogenic heating is not required: the data can be fit using an albedo of 0.4 and a thermal inertia of 114 in MKS units. Due to the sparseness and noisiness of the data, these values are very uncertain. Rathbun et al. (2010) found Europa's thermal inertias to be in the range of 20-140 MKS and albedos of 0.3-0.7, so North Pwyll has a relatively high thermal inertia and low albedo. Unfortunately, the high latitude of the other putative plume source locations (63 S and 75 S) puts them in areas poorly imaged by PPR.
Volcanic tremor and local earthquakes at Copahue volcanic complex, Southern Andes, Argentina
NASA Astrophysics Data System (ADS)
Ibáñez, J. M.; Del Pezzo, E.; Bengoa, C.; Caselli, A.; Badi, G.; Almendros, J.
2008-07-01
In the present paper we describe the results of a seismic field survey carried out at Copahue Volcano, Southern Andes, Argentina, using a small-aperture, dense seismic antenna. Copahue Volcano is an active volcano that exhibited a few phreatic eruptions in the last 20 years. The aim of this experiment was to record and classify the background seismic activity of this volcanic area, and to locate the sources of local earthquakes and volcanic tremor. The data consist of several volcano-tectonic (VT) earthquakes and many samples of background seismic noise. We use both ordinary spectral and multi-spectral techniques to measure the spectral content, and an array technique (zero-lag cross-correlation) to measure the back-azimuth and apparent slowness of the signals propagating across the array. We locate VT earthquakes using a procedure based on the estimate of the slowness vector components and the S-P time. VT events are located mainly along the border of the Caviahue caldera lake, southeast of Copahue volcano, in a depth interval of 1-3 km below the surface. The background noise shows the presence of many transients with high correlation among the array stations in the frequency band centered at 2.5 Hz. These transients are superimposed on an uncorrelated background seismic signal. Array solutions for these transients show a predominant slowness vector pointing to the exploited geothermal field of "Las Maquinitas" and "Copahue Village", located about 6 km north of the array site. We interpret this coherent signal as tremor generated by the activity of the geothermal field.
Hiding the Source Based on Limited Flooding for Sensor Networks.
Chen, Juan; Lin, Zhengkui; Hu, Ying; Wang, Bailing
2015-11-17
Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect the source location. Instead of transmitting a packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward them to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address this problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, named SLP-E, to protect the source location against more powerful attackers with wider fields of vision. The performance of SLP and SLP-E is validated by both theoretical analysis and simulation results.
Sherriff, Sophie C; Rowan, John S; Fenton, Owen; Jordan, Philip; Melland, Alice R; Mellander, Per-Erik; hUallacháin, Daire Ó
2016-02-16
Within agricultural watersheds, suspended sediment-discharge hysteresis during storm events is commonly used to indicate dominant sediment sources and pathways. However, the availability of high-resolution data, qualitative metrics, longevity of records, and simultaneous multiwatershed analyses have limited the efficacy of hysteresis as a sediment management tool. This two-year study utilizes a quantitative hysteresis index derived from high-resolution suspended sediment and discharge data to assess fluctuations in sediment source location, delivery mechanisms, and export efficiency in three intensively farmed watersheds during events over time. Flow-weighted event sediment export was further considered using multivariate techniques to delineate rainfall, stream hydrology, and antecedent moisture controls on sediment origins. Watersheds with low permeability (moderately or poorly drained soils) had good surface hydrological connectivity and therefore showed contrasting hysteresis according to source location (hillslope versus channel bank). The well-drained watershed with reduced connectivity exported less sediment but, when watershed connectivity was established, produced the largest event sediment load of all watersheds. Event sediment export was elevated in the arable watersheds when low groundcover was coupled with high connectivity, whereas in the grassland watershed, export was attributed to wetter weather only. Hysteresis analysis successfully indicated contrasting seasonality, connectivity, and source availability and is a useful tool to identify watershed-specific sediment management practices.
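The specific hysteresis index used in the study is not given in the abstract, but indices of this family typically compare normalized sediment concentration on the rising and falling limbs of an event at matched discharge levels. The sketch below is one hypothetical formulation of such an index (not necessarily the one used): positive values indicate clockwise hysteresis (concentration leads discharge, suggesting proximal or channel-bank sources), negative values anticlockwise hysteresis (distal or hillslope sources).

```python
import numpy as np

def hysteresis_index(q, ssc):
    """Mean difference between normalized SSC on the rising and falling
    limbs of an event, evaluated at matched normalized discharge levels."""
    ipk = int(np.argmax(q))
    qn = (q - q.min()) / (q.max() - q.min())
    cn = (ssc - ssc.min()) / (ssc.max() - ssc.min())
    levels = np.linspace(0.05, 0.95, 19)
    rise = np.interp(levels, qn[:ipk + 1], cn[:ipk + 1])      # rising limb
    fall = np.interp(levels, qn[ipk:][::-1], cn[ipk:][::-1])  # falling limb
    return float(np.mean(rise - fall))

# Synthetic storm events: discharge peaks at t = 0.5; SSC peaking earlier
# gives clockwise (positive) hysteresis, peaking later gives anticlockwise
t = np.linspace(0.0, 1.0, 201)
q = np.exp(-((t - 0.5) / 0.15) ** 2)
ssc_early = np.exp(-((t - 0.4) / 0.15) ** 2)
ssc_late = np.exp(-((t - 0.6) / 0.15) ** 2)
hi_cw = hysteresis_index(q, ssc_early)    # positive, clockwise
hi_acw = hysteresis_index(q, ssc_late)    # negative, anticlockwise
```

Computed per event over a long high-resolution record, such an index turns qualitative loop shapes into a time series that can be compared across watersheds and seasons.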
NASA Technical Reports Server (NTRS)
Fioletov, V.E.; McLinden, C. A.; Krotkov, N.; Yang, K.; Loyola, D. G.; Valks, P.; Theys, N.; Van Roozendael, M.; Nowlan, C. R.; Chance, K.;
2013-01-01
Retrievals of sulfur dioxide (SO2) from space-based spectrometers are in a relatively early stage of development. Factors such as interference between ozone and SO2 in the retrieval algorithms often lead to errors in the retrieved values. Measurements from the Ozone Monitoring Instrument (OMI), Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY), and Global Ozone Monitoring Experiment-2 (GOME-2) satellite sensors, averaged over a period of several years, were used to identify locations with elevated SO2 values and estimate their emission levels. About 30 such locations, detectable by all three sensors and linked to volcanic and anthropogenic sources, were found after applying to the SO2 data from each instrument a low- and high-spatial-frequency filtration designed to reduce noise and bias and to enhance weak signals. Quantitatively, the mean amounts of SO2 in the vicinity of the sources, estimated from the three instruments, are in general agreement. However, its better spatial resolution makes it possible for OMI to detect smaller sources, and with additional detail, compared to the other two instruments. Over some regions of China, SCIAMACHY and GOME-2 data show mean SO2 values that are almost 1.5 times higher than those from OMI, but the suggested spatial filtration technique largely reconciles these differences.
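Spatial filtration of the kind described (suppressing pixel noise with a low-pass while removing large-scale retrieval bias with a high-pass) can be sketched as a difference of two Gaussian smoothings. This is an illustrative reconstruction, not the authors' actual filter; the synthetic field, kernel widths, and source parameters are all invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_point_sources(field, sigma_noise=1.0, sigma_bias=20.0):
    """Smooth lightly to suppress pixel noise, then subtract a heavily
    smoothed copy to remove large-scale bias, leaving compact sources."""
    return gaussian_filter(field, sigma_noise) - gaussian_filter(field, sigma_bias)

# Synthetic scene: broad retrieval bias + one compact "SO2 source" + noise
rng = np.random.default_rng(1)
y, x = np.mgrid[0:200, 0:200]
bias = 0.5 * np.sin(x / 60.0)                                   # large-scale artifact
source = 2.0 * np.exp(-((x - 70) ** 2 + (y - 120) ** 2) / (2 * 3.0 ** 2))
field = bias + source + 0.3 * rng.standard_normal((200, 200))

enhanced = enhance_point_sources(field)
peak = np.unravel_index(int(np.argmax(enhanced)), enhanced.shape)  # near (120, 70)
```

After filtering, the compact source dominates the field even though the raw map is dominated by the smooth bias and pixel noise, which is the effect exploited when reconciling the three sensors.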
Monitoring fossil fuel sources of methane in Australia
NASA Astrophysics Data System (ADS)
Loh, Zoe; Etheridge, David; Luhar, Ashok; Hibberd, Mark; Thatcher, Marcus; Noonan, Julie; Thornton, David; Spencer, Darren; Gregory, Rebecca; Jenkins, Charles; Zegelin, Steve; Leuning, Ray; Day, Stuart; Barrett, Damian
2017-04-01
CSIRO has been active in identifying and quantifying methane emissions from a range of fossil fuel sources in Australia over the past decade. We present here a history of the development of our work in this domain. While we have principally focused on optimising the use of long-term, fixed-location, high-precision monitoring, paired with both forward and inverse modelling techniques suitable for either local or regional scales, we have also incorporated mobile ground surveys and flux calculations from plumes in some contexts. We initially developed leak detection methodologies for geological carbon storage at a local scale using a Bayesian probabilistic approach coupled to a backward Lagrangian particle dispersion model (Luhar et al., JGR, 2014), and single-point monitoring with sector analysis (Etheridge et al., in prep.). We have since expanded our modelling techniques to regional scales, using both forward and inverse approaches to constrain methane emissions from coal mining and coal seam gas (CSG) production. The Surat Basin (Queensland, Australia) is a region of rapidly expanding CSG production, in which we have established a pair of carefully located, well-intercalibrated monitoring stations. These data sets provide an almost continuous record of (i) background air arriving at the Surat Basin, and (ii) the signal resulting from methane emissions within the Basin, i.e. total downwind methane concentration (comprising emissions from natural geological seeps, agricultural and biogenic sources, and fugitive emissions from CSG production) minus the background or upwind concentration. We will present our latest results on monitoring from the Surat Basin and their application to estimating methane emissions.
Global Distribution of Dust, Smoke, Volcanic Ash, and Pollutant Aerosols Seen from Space
NASA Technical Reports Server (NTRS)
Herman, Jay R.; Hsu, Christina; Krotkov, Nickolay; Torres, Omar
1998-01-01
New techniques for observing aerosols from space using ultraviolet (UV) wavelengths have been developed during the past three years. The chief benefit of observing aerosols in the UV is that they are easily visible over both land and water. While there is presently more than one satellite that can observe aerosols in the UV, only the Total Ozone Mapping Spectrometer (TOMS) has a long-term record (since 1979) and adequate spatial resolution (50 to 100 km) to observe the seasonal and interannual variations, and to locate some of the land sources of dust, smoke, volcanic ash, and sulfate pollutants. The data have been assembled into daily images of the atmospheric aerosol loading in terms of optical depth and UV transmittance. For the major sources of aerosols, it is common for at least 50% of the total UV to be absorbed underneath aerosol plumes. This is particularly true for the spectacular smoke plumes originating from the recent Indonesian and Mexican fires, as well as under the huge African dust plumes. The sulfate pollutants are mostly present in the Northern Hemisphere and are associated with regions of high industrial activity. The location and seasonal dependence of these aerosol plumes over Europe and North America will be contrasted with the relatively clean Southern Hemisphere. Because of the success of this technique, it has formed the basis for a new generation of space-borne aerosol detection instruments. These new instruments combine the UV observations with the more traditional visible-wavelength data to obtain a more comprehensive characterization of aerosols than is possible with either UV or visible techniques by themselves.
NASA Astrophysics Data System (ADS)
Matsumoto, H.; Haralabus, G.; Zampolli, M.; Özel, N. M.
2016-12-01
Underwater acoustic signal waveforms recorded during the 2015 Chile earthquake (Mw 8.3) by the hydrophones of hydroacoustic station HA03, located at the Juan Fernandez Islands, are analyzed. HA03 is part of the Comprehensive Nuclear-Test-Ban Treaty International Monitoring System. The interest in this particular data set stems from the fact that HA03 is located only approximately 700 km SW of the epicenter of the earthquake. This makes it possible to study aspects of the signal associated with the tsunamigenic earthquake that would be more difficult to detect had the hydrophones been located far from the source. The analysis shows that the direction of arrival of the T phase can be estimated by means of a three-step preprocessing technique that circumvents the spatial aliasing caused by the hydrophone spacing, the latter being large compared to the wavelength. Following this preprocessing step, standard frequency-wavenumber (F-K) analysis can accurately estimate the back azimuth and slowness of T-phase signals. The data analysis also shows that the dispersive tsunami signals can be identified by the water-column hydrophones at the time when the tsunami surface gravity wave reaches the station.
In situ surface/interface x-ray diffractometer for oxide molecular beam epitaxy
Lee, J. H.; Tung, I. C.; Chang, S. -H.; ...
2016-01-05
In situ studies of oxide molecular beam epitaxy by synchrotron x-ray scattering have been made possible by upgrading an existing UHV/molecular beam epitaxy (MBE) six-circle diffractometer system. For oxide MBE growth, pure ozone delivery to the chamber has been made available, and several new deposition sources have been added on a new 12 in. CF (ConFlat, a registered trademark of Varian, Inc.) flange. X-ray diffraction has been used as the major probe of film growth and structure for the system. In the original design, electron diffraction was intended as a secondary diagnostic, available without the need for x-rays and located at separate positions. Deposition of films is possible at the two diagnostic positions, and the aiming of the evaporation sources is fixed to the point between the two locations. Ozone can be supplied through two separate nozzles, one for each location, and two separate thickness monitors are installed. Finally, additional features of the equipment are presented together with data taken during typical oxide film growth to illustrate the depth of information available via in situ x-ray techniques.
Nealy, Jennifer; Benz, Harley M.; Hayes, Gavin; Berman, Eric; Barnhart, William
2017-01-01
The 2008 Wells, NV earthquake represents the largest domestic event in the conterminous U.S. outside of California since the October 1983 Borah Peak earthquake in southern Idaho. We present an improved catalog of the foreshock-aftershock sequence, complete to magnitude 1.6, supplementing the current U.S. Geological Survey (USGS) Preliminary Determination of Epicenters (PDE) catalog with 1,928 well-located events. To create this catalog, both subspace and kurtosis detectors are used to obtain an initial set of earthquakes and associated locations. The latter are then calibrated through the hypocentroidal decomposition method and relocated using the BayesLoc relocation technique. We additionally perform a finite-fault slip analysis of the mainshock using InSAR observations. By combining the relocated sequence with the finite-fault analysis, we show that the aftershocks occur primarily updip and along the southwestern edge of the zone of maximum slip. The aftershock locations illuminate areas of post-mainshock strain increase; aftershock depths, ranging from 5 to 16 km, are consistent with InSAR imaging, which shows that the Wells earthquake was a buried source with no observable near-surface offset.
Seismic Methods of Identifying Explosions and Estimating Their Yield
NASA Astrophysics Data System (ADS)
Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.
2014-12-01
Seismology plays a key national security role in detecting, locating, identifying, and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks, and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al., 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and in accurately estimating their yield, we need to put our empirical methods on a firmer physical footing. The goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites, including the DPRK and the former Nevada Test Site (now the Nevada National Security Site, NNSS). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media, and other properties. We are also making use of the Source Physics Experiments (SPE) at the NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source-material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are also exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand the source characteristics.
Our goal is to improve our explosion models and our ability to understand and predict where methods of identifying explosions and estimating their yield work well, and any circumstances where they may not.
Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data
NASA Astrophysics Data System (ADS)
Das, S. B.; Mitra, S.
2015-12-01
We devise a technique to locate local earthquakes using a single 3-component broadband seismograph and analyze the factors governing the accuracy of our results. The need for such a technique arises in regions with a sparse seismic network. State-of-the-art location algorithms require a minimum of three station recordings to obtain well-resolved locations. The problem arises when an event is recorded by fewer than three stations, which may happen for the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support for setting up a large network; (c) regions with insufficient economic resources for financing a multi-station network; and (d) poor signal-to-noise ratio for smaller events at all stations except the one in their closest vicinity. Our technique provides a workable solution to the above scenarios, although it is strongly dependent on the velocity model of the region. The method uses three processing steps: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergence angle from the vertical and radial components. Once these are obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We test our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. The locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed from source mechanism studies of the mainshock and larger aftershocks.
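The first two of the three steps described above can be sketched in a few lines. This is an illustrative simplification only: the velocities and the homogeneous half-space distance formula are assumptions, the polarization is estimated with a simple covariance eigenvector, and the 180° back-azimuth ambiguity (normally resolved using the vertical component) is left unresolved here.

```python
import numpy as np

def single_station_location(e, n, ts_minus_tp, vp=6.0, vs=3.46):
    """Back-azimuth from P-wave particle motion on the horizontal
    components, plus epicentral distance from the S-P time assuming a
    homogeneous half-space. Returns (back-azimuth mod 180 deg, km)."""
    cov = np.cov(np.vstack([e, n]))
    _, v = np.linalg.eigh(cov)
    ve, vn = v[:, -1]                       # principal polarization direction
    baz = np.degrees(np.arctan2(ve, vn)) % 180.0
    dist = ts_minus_tp * vp * vs / (vp - vs)   # d = t_sp * vp*vs/(vp - vs)
    return baz, dist

# Synthetic P arrival polarized along a 60-degree back-azimuth
rng = np.random.default_rng(0)
amp = rng.standard_normal(500)
theta = np.radians(60.0)
e = np.sin(theta) * amp + 0.01 * rng.standard_normal(500)
n = np.cos(theta) * amp + 0.01 * rng.standard_normal(500)
baz, dist = single_station_location(e, n, ts_minus_tp=10.0)
```

The third step, recovering the emergence angle from the vertical and radial components and ray-tracing through a layered model, replaces the half-space formula in the actual method.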
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that the flow direction in the aquifer is known exactly and that the velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation (ADE). We employ high-performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of the 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantity can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
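The core inner loop of this workflow (simulate breakthrough at wells, then optimize for the most likely source location under an advection-dispersion interpretation, using multiple initial guesses) can be sketched as follows. Everything here is invented for illustration: the 2-D instantaneous-release solution, parameter values, and well layout are stand-ins, and the study's actual models and optimizers differ.

```python
import numpy as np
from scipy.optimize import minimize

def ade_conc(xw, yw, t, x0, y0, v=1.0, dl=0.5, dt_=0.05, m=1.0):
    """2-D advection-dispersion solution for an instantaneous point
    release of mass m at (x0, y0) in a uniform flow of speed v."""
    return (m / (4 * np.pi * t * np.sqrt(dl * dt_)) *
            np.exp(-((xw - x0 - v * t) ** 2) / (4 * dl * t)
                   - ((yw - y0) ** 2) / (4 * dt_ * t)))

# Synthetic wells and noiseless breakthrough data from a "true" source
wells = [(10.0, 0.0), (10.0, 2.0), (15.0, 1.0)]
times = np.linspace(1.0, 30.0, 60)
truth = (2.0, 1.0)
data = [ade_conc(xw, yw, times, *truth) for xw, yw in wells]

def misfit(p):
    x0, y0 = p
    return sum(np.sum((ade_conc(xw, yw, times, x0, y0) - d) ** 2)
               for (xw, yw), d in zip(wells, data))

# Multiple initial guesses, as in the abstract's space-time strategy
guesses = [(0.0, 0.0), (5.0, 0.0), (0.0, 3.0), (5.0, 3.0)]
best = min((minimize(misfit, g, method='Nelder-Mead') for g in guesses),
           key=lambda r: r.fun)
```

Repeating this over many random aquifer realizations and tabulating the errors of `best.x` against the true source is what builds the empirical confidence envelopes described above.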
Charting a Path to Location Intelligence for STD Control.
Gerber, Todd M; Du, Ping; Armstrong-Brown, Janelle; McNutt, Louise-Anne; Coles, F Bruce
2009-01-01
This article describes the New York State Department of Health's GeoDatabase project, which developed new methods and techniques for designing and building a geocoding and mapping data repository for sexually transmitted disease (STD) control. The GeoDatabase development was supported through the Centers for Disease Control and Prevention's Outcome Assessment through Systems of Integrated Surveillance workgroup. The design and operation of the GeoDatabase relied upon commercial-off-the-shelf tools that other public health programs may also use for disease-control systems. This article provides a blueprint of the structure and software used to build the GeoDatabase and integrate location data from multiple data sources into the everyday activities of STD control programs.
Episodic inflation events at Akutan Volcano, Alaska, during 2005-2017
NASA Astrophysics Data System (ADS)
Ji, Kang Hyeun; Yun, Sang-Ho; Rim, Hyoungrea
2017-08-01
Detection of weak volcano deformation helps constrain characteristics of eruption cycles. We have developed a signal detection technique, called the Targeted Projection Operator (TPO), to monitor surface deformation with Global Positioning System (GPS) data. We applied the TPO to GPS data collected at Akutan Volcano from June 2005 to March 2017 and detected four inflation events, in 2008, 2011, 2014, and 2016, with inflation rates of about 8-22 mm/yr above the background trend at the near-source site AV13. Numerical modeling suggests that the events are driven by closely located sources, or by a single source, in a shallow magma chamber at a depth of about 4 km. The inflation events suggest that magma has episodically accumulated in the shallow magma chamber.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
Monochromatic body waves excited by great subduction zone earthquakes
NASA Astrophysics Data System (ADS)
Ihmlé, Pierre F.; Madariaga, Raúl
Large quasi-monochromatic body waves were excited by the 1995 Chile Mw=8.1 and by the 1994 Kurile Mw=8.3 events. They are observed on vertical/radial component seismograms following the direct P and Pdiff arrivals, at all azimuths. We devise a slant stack algorithm to characterize the source of the oscillations. This technique aims at locating near-source isotropic scatterers using broadband data from global networks. For both events, we find that the oscillations emanate from the trench. We show that these monochromatic waves are due to localized oscillations of the water column. Their period corresponds to the gravest 1-D mode of a water layer for vertically traveling compressional waves. We suggest that these monochromatic body waves may yield additional constraints on the source process of great subduction zone earthquakes.
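The period of the gravest 1-D water-layer mode follows from quarter-wavelength resonance of the water column for vertically travelling compressional waves. A back-of-the-envelope check with assumed values (a nominal seawater sound speed and an illustrative trench water depth, not figures from the study):

```python
# Fundamental ("gravest") 1-D resonance of a water layer: a quarter
# wavelength fits in the layer, so the period is T = 4 * h / c_w.
c_w = 1500.0        # m/s, nominal sound speed in seawater (assumed)
h = 6000.0          # m, illustrative deep-trench water depth (assumed)
T = 4 * h / c_w     # s
f = 1 / T           # Hz
print(T, f)         # a 16 s period, i.e. an oscillation near 0.06 Hz
```

Because T scales directly with water depth, the observed oscillation period itself points to the deep-water trench as the scatterer location, consistent with the slant-stack result.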
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for the depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. 
Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved the robustness of the solution, even when results from the individual methods yielded large standard errors.
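The echo-detection idea behind the cepstral methods can be sketched numerically: a depth phase is a delayed, scaled copy of the P wavelet, which imprints a periodic ripple on the log spectrum, and the inverse transform of that ripple concentrates at the pP-P delay. The wavelet, delay, polarity, and sampling rate below are invented for illustration and are not taken from the study:

```python
import numpy as np

def real_cepstrum(x):
    """Inverse FFT of the log power spectrum."""
    return np.fft.ifft(np.log(np.abs(np.fft.fft(x)) ** 2 + 1e-12)).real

rng = np.random.default_rng(0)
fs = 100                                             # samples per second
wavelet = rng.standard_normal(30) * np.hanning(30)   # broadband source pulse

trace = np.zeros(1024)
i0, lag = 100, 150                  # direct P; pP arrives 150 samples later
trace[i0:i0 + 30] += wavelet
trace[i0 + lag:i0 + lag + 30] += -0.6 * wavelet  # free-surface reflection flips polarity

# The echo adds a ripple of period 1/lag to the log spectrum; the cepstrum
# turns that ripple into a sharp peak at quefrency = lag.
cep = real_cepstrum(trace)
peak = int(np.argmax(np.abs(cep[30:512]))) + 30  # skip the wavelet's low quefrencies
print(f"picked pP-P delay: {peak} samples = {peak / fs:.2f} s")
```

Real Pn coda is far less clean than this, which is why the study corroborates cepstral peaks with independent evidence such as apparent velocities, azimuths, and F-statistics.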
Near-Field Magnetic Dipole Moment Analysis
NASA Technical Reports Server (NTRS)
Harris, Patrick K.
2003-01-01
This paper describes the data analysis technique used for magnetic testing at the NASA Goddard Space Flight Center (GSFC). Excellent results have been obtained using this technique to convert a spacecraft's measured magnetic field data into its respective magnetic dipole moment model. The model is most accurate with the Earth's geomagnetic field cancelled in a spherical region bounded by the measurement magnetometers, with a minimum radius large enough to enclose the magnetic source. Considerably enhanced spacecraft magnetic testing is offered by using this technique in conjunction with a computer-controlled magnetic field measurement system. Such a system, with real-time magnetic field display capabilities, has been incorporated into other existing magnetic measurement facilities and is also used at remote locations where transport to a magnetics test facility is impractical.
Enhancements to the Bayesian Infrasound Source Location Method
2012-09-01
Marcillo, Omar E.; Arrowsmith, Stephen J.; Whitaker, Rod W.; Anderson, Dale N.
We report on R&D that is enabling enhancements to the Bayesian Infrasound Source Location (BISL) method for infrasound event location.
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Saulsberry, Regor L.
2009-01-01
This project is a subtask of a multi-center project to advance the state-of-the-art by developing NDE techniques that are capable of evaluating stress rupture (SR) degradation in Kevlar/epoxy (K/Ep) composite overwrapped pressure vessels (COPVs), and damage progression in carbon/epoxy (C/Ep) COPVs. In this subtask, acoustic emission (AE) data acquired during intermittent load hold tensile testing of K/Ep and C/Ep composite tow materials-of-construction used in COPV fabrication were analyzed to monitor progressive damage during the approach to tensile failure. Insight into the progressive damage of composite tow was gained by monitoring AE event rate, energy, source location, and frequency. Source location based on arrival time data was used to discern between significant AE attributable to microstructural damage and spurious AE attributable to background and grip noise. One of the significant findings was the observation of increasing violation of the Kaiser effect (Felicity ratio < 1.0) with damage accumulation.
NASA Technical Reports Server (NTRS)
Hammock, William R., Jr.; Cota, Phillip E., Jr.; Rosenbaum, Bernard J.; Barrett, Michael J.
1991-01-01
Standard leak detection methods at ambient temperature have been developed in order to prevent excessive leakage from the Space Shuttle liquid oxygen and liquid hydrogen Main Propulsion System. Unacceptable hydrogen leakage was encountered on the Columbia and Atlantis flight vehicles in the summer of 1990 after the standard leak check requirements had been satisfied. The leakage was only detectable when the fuel system was exposed to subcooled liquid hydrogen during External Tank loading operations. Special instrumentation and analytical tools were utilized during a series of propellant tanking tests in order to identify the sources of the hydrogen leakage. After the leaks were located and corrected, the physical characteristics of the leak sources were analyzed in an effort to understand how the discrepancies were introduced and why the leakage had evaded the standard leak detection methods. As a result of the post-leak analysis, corrective actions and leak detection improvements have been implemented in order to preclude a similar incident.
Ionospheric observation of enhanced convection-initiated gravity waves during tornadic storms
NASA Technical Reports Server (NTRS)
Hung, R. J.
1981-01-01
Atmospheric gravity waves associated with tornadoes, with locally severe storms occurring with tornadoes, and with hurricanes were studied through the coupling between the ionosphere and the troposphere. Reverse group ray tracing computations of gravity waves observed by an ionospheric Doppler sounder array were analyzed. The results of the ray tracing computations, and comparisons between the computed locations of the wave sources and conventional meteorological data, indicate that the computed sources of the waves were near the touchdown of the tornadoes, near the eye of the hurricanes, and directly on the squall line of the severe thunderstorms. The excited signals occurred one hour in advance of the tornadoes and three hours in advance of the hurricanes. Satellite photographs show convective overshooting turrets occurring at the same locations and times at which the gravity waves were being excited. It is suggested that gravity wave observations, conventional meteorological data, and satellite photographs be combined to develop a remote sensing technique for detecting severe storms.
NASA Astrophysics Data System (ADS)
Lepage, Hugo; Laceby, J. Patrick; Evrard, Olivier; Onda, Yuichi; Caroline, Chartin; Lefèvre, Irène; Bonté, Philippe; Ayrault, Sophie
2015-04-01
Several coastal catchments located in the vicinity of the Fukushima Dai-Ichi Power Plant were contaminated by radioactive fallout in March 2011. Following the accident, typhoons and snowmelt runoff events transferred radiocesium contamination through the coastal floodplains and ultimately to the Pacific Ocean. It is therefore important to understand the location and relative contribution of the different erosion sources in order to manage radiocesium transfer within these coastal catchments and the cumulative export of radiocesium to the Pacific Ocean. Here we present a sediment fingerprinting approach to determine the relative contributions of sediment from different soil types to the sediment transported throughout two coastal riverine systems. The sediment fingerprinting technique presented utilizes differences in the elemental geochemistry of the distinct soil types to determine their relative contributions to sediment sampled in riverine systems. This research is important because it furthers our understanding of the dominant erosion sources in the region, which will help with ongoing decontamination and monitoring efforts pertaining to the management of fallout radiocesium migration in the region.
Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo
2018-03-30
Pollution and the eutrophication process are increasing in Lake Yahuarcocha, and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographical information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, such as cluster analysis (CA) and discriminant analysis (DA), was employed to classify and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of concentration variations of the most important parameters.
Instrumentation techniques for monitoring shock and detonation waves
NASA Astrophysics Data System (ADS)
Dick, R. D.; Parrish, R. L.
1985-09-01
CORRTEX (Continuous Reflectometry for Radius Versus Time Experiments), SLIFER (Shorted Location Indication by Frequency of Electrical Resonance), and pin probes were used to monitor several conditions of blasting such as the detonation velocity of the explosive, the functioning of the stemming column confining the explosive, and rock mass motion. CORRTEX is a passive device that employs time-domain reflectometry to interrogate the two-way transit time of a coaxial cable. SLIFER is an active device that monitors the changing frequency resulting from a change in length of a coaxial cable forming an element of an oscillator circuit. Pin probes in this application consist of RG-174 coaxial cables, each with an open circuit, placed at several known locations within the material. Each cable is connected to a pulse-forming network and a voltage source. When the cables are shorted by the advancing wave, time-distance data are produced from which a velocity can be computed. Each technique, installation of the gauge, examples of the signals, and interpretation of the records are described.
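The pin-probe reduction described above is a straight time-distance fit: each shorted cable yields one (time, position) pair, and the slope of a least-squares line through those pairs is the wave or detonation velocity. The numbers below are invented for illustration:

```python
import numpy as np

# Invented pin-probe records: cable-shorting positions (m from the charge)
# and first-break times (microseconds) as the advancing wave closes each
# open circuit.
positions = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
times_us = np.array([0.0, 31.5, 62.4, 94.1, 125.2])

# Detonation velocity = slope of a least-squares line through the
# time-distance pairs.
velocity, intercept = np.polyfit(times_us * 1e-6, positions, 1)
print(f"detonation velocity ~ {velocity:.0f} m/s")
```

Departures from a straight line in real records indicate acceleration or failure of the detonation front, which is part of what these gauges are fielded to detect.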
Rohela, M; Lim, Y A L; Jamaiah, I; Khadijah, P Y Y; Laang, S T; Nazri, M H Mohd; Nurulhuda, Z
2005-01-01
The occurrence of a coccidian parasite, Cryptosporidium, among birds in the Kuala Lumpur National Zoo was investigated in this study. One hundred bird fecal samples were taken from various locations in the zoo. Fecal smears prepared using the direct smear and formalin ethyl acetate concentration techniques were stained with modified Ziehl-Neelsen stain. Samples positive for Cryptosporidium with Ziehl-Neelsen stain were later confirmed using the immunofluorescence technique and viewed under the epifluorescence microscope. Fecal samples from six bird species were confirmed positive for Cryptosporidium oocysts. These included the Wrinkled Hornbill (Aceros corrugatus), Great Argus Pheasant (Argusianus argus), Black Swan (Cygnus atratus), Swan Goose (Anser cygnoides), Marabou Stork (Leptoptilos crumeniferus), and Moluccan Cockatoo (Cacatua moluccensis). These birds were located in the aviary and at the lake, with the Moluccan Cockatoo routinely used as a show bird. The results obtained in this study indicate that animal sanctuaries such as zoos and bird parks are important sources of Cryptosporidium infection for humans, especially children, and for other animals.
Model of a thin film optical fiber fluorosensor
NASA Technical Reports Server (NTRS)
Egalon, Claudio O.; Rogowski, Robert S.
1991-01-01
The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques by Marcuse (1988) in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions are developed based on an infinite cladding approximation which yield the power efficiency for a fiber coated with fluorescent sources in the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant for variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.
Methods for characterizing subsurface volatile contaminants using in-situ sensors
Ho, Clifford K [Albuquerque, NM
2006-02-21
An inverse analysis method for characterizing diffusion of vapor from an underground source of volatile contaminant using data taken by an in-situ sensor. The method uses one-dimensional solutions to the diffusion equation in Cartesian, cylindrical, or spherical coordinates for isotropic and homogenous media. If the effective vapor diffusion coefficient is known, then the distance from the source to the in-situ sensor can be estimated by comparing the shape of the predicted time-dependent vapor concentration response curve to the measured response curve. Alternatively, if the source distance is known, then the effective vapor diffusion coefficient can be estimated using the same inverse analysis method. A triangulation technique can be used with multiple sensors to locate the source in two or three dimensions. The in-situ sensor can contain one or more chemiresistor elements housed in a waterproof enclosure with a gas permeable membrane.
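A minimal sketch of the inverse idea in Cartesian coordinates, assuming an instantaneous release and a known effective diffusion coefficient: the 1-D solution peaks at t = x²/(2D), so the shape of the sensor's time-dependent response curve (here reduced to just its peak time) fixes the source distance. The coefficient and distance below are invented, not taken from the patent:

```python
import numpy as np

def response(x, t, D, M=1.0):
    """1-D concentration at distance x from an instantaneous point release."""
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-x**2 / (4 * D * t))

D = 1e-6                          # effective vapor diffusion coefficient, m^2/s
x_true = 2.0                      # "unknown" source-to-sensor distance, m
t = np.linspace(1e3, 2e7, 20000)  # observation times, s
conc = response(x_true, t, D)

# This solution peaks at t_peak = x^2 / (2 D), so the observed peak time
# of the response curve yields the source distance.
t_peak = t[np.argmax(conc)]
x_est = np.sqrt(2 * D * t_peak)
print(f"estimated source distance: {x_est:.2f} m")
```

With distance estimates from several sensors, the triangulation step mentioned in the abstract reduces to intersecting the corresponding circles or spheres around each sensor.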
Sediment Tracking Using Carbon and Nitrogen Stable Isotopes
NASA Astrophysics Data System (ADS)
Fox, J. F.; Papanicolaou, A.
2002-12-01
As landscapes are stripped of valuable, nutrient-rich topsoils and streams are clouded with habitat-degrading fine sediment, it becomes increasingly important to identify and mitigate erosive surfaces. Particle tracking using vegetation-derived carbon (C) and nitrogen (N) isotopic signatures and carbon/nitrogen (C/N) atomic ratios offers a promising technique to identify such problematic sources. Consultants and researchers successfully use C, N, and other stable isotopes of water for hydrologic purposes, such as quantifying groundwater vs. surface water contributions to a hydrograph. Recently, C and N isotopes and C/N atomic ratios of sediment were used to determine sediment mass balance within estuarine environments. The current research investigates C and N isotopes and C/N atomic ratios of source sediment for two primary purposes: (1) to establish a blueprint methodology for estimating sediment sources and erosion rates within a watershed using this isotopic technology coupled with mineralogy fingerprinting techniques, radionuclide transport monitoring, and erosion-transport models, and (2) to complete field studies of upland erosion processes such as solifluction, mass wasting, creep, fluvial erosion, and vegetation-induced erosion. Upland and floodplain sediment profiles and riverine suspended sediment were sampled on two occasions, May 2002 and August 2002, in the upper Palouse River watershed of northern Idaho. Over 300 samples were obtained from deep intermountain valley (i.e., forest) and rolling crop field (i.e., agriculture) locations. Preliminary sample treatment was completed at the Washington State University Water Quality Laboratory, where samples were dried, stripped of organic constituents, and prepared for isotopic analysis. 
C and N isotope and C/N atomic ratio analyses were performed at the University of Idaho Natural Resources Stable Isotope Laboratory using a Costech 4010 Elemental Combustion System connected through a continuous-flow inlet system to a Finnigan MAT Delta Plus isotope ratio mass spectrometer. Results indicate distinct N isotopic signatures and C/N atomic ratios for forest and agriculture sediment sources. In addition, unique C and N isotopic signatures and C/N atomic ratios exist within floodplain and upland surfaces, and within the 10-centimeter profiles of erosion and deposition locations. Suspended sediment analyses are preliminary at this time. The results indicate that sediment C and N isotopic signatures and C/N atomic ratios depend on land use and soil moisture conditions, and will serve as a useful technique for quantifying erosive source rates and understanding upland erosion processes.
NASA Technical Reports Server (NTRS)
Kuntz, Todd A.; Wadley, Haydn N. G.; Black, David R.
1993-01-01
An X-ray technique for the measurement of internal residual strain gradients near the continuous reinforcements of metal matrix composites has been investigated. The technique utilizes high intensity white X-ray radiation from a synchrotron radiation source to obtain energy spectra from small (0.001 cu mm) volumes deep within composite samples. The viability of the technique was tested using a model system with 800 micron Al203 fibers and a commercial purity titanium matrix. Good agreement was observed between the measured residual radial and hoop strain gradients and those estimated from a simple elastic concentric cylinders model. The technique was then used to assess the strains near (SCS-6) silicon carbide fibers in a Ti-14Al-21Nb matrix after consolidation processing. Reasonable agreement between measured and calculated strains was seen provided the probe volume was located 50 microns or more from the fiber/matrix interface.
Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections
NASA Astrophysics Data System (ADS)
Ziock, K. P.; Boehnen, C. B.; Ernst, J. M.; Fabris, L.; Hayward, J. P.; Karnowski, T. P.; Paquit, V. C.; Patlolla, D. R.; Trombino, D. G.
2016-01-01
Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. The complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.
NASA Technical Reports Server (NTRS)
Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.
2011-01-01
Location technologies have many applications in wireless communications, military and space missions, and other fields. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS receivers perform very well in strong-signal conditions, their operation in many urban, indoor, and space applications is not robust, or is even impossible, due to weak signals and strong distortions. The search for less costly, faster, and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms that can accelerate and facilitate research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits facilitated access to algorithmic libraries and the possibility of integrating more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open-source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for the acquisition and tracking of GPS signals using the open-source GPS-SDR platform.
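Not the paper's block-correlator itself, but the standard FFT trick such acquisition engines build on can be sketched as follows: evaluate the circular correlation of the received signal against a local code replica at all code lags at once. A random ±1 sequence stands in for a real 1023-chip C/A Gold code, and the Doppler search dimension is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random +/-1 sequence stands in for a C/A spreading code; the received
# signal is a circularly shifted copy of the code plus noise.
N = 1024
code = rng.choice([-1.0, 1.0], size=N)
true_delay = 417
received = np.roll(code, true_delay) + 0.5 * rng.standard_normal(N)

# Acquisition: evaluate the circular correlation at every code lag at once,
# corr = IFFT( FFT(received) * conj(FFT(code)) ), and pick the peak.
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
est_delay = int(np.argmax(corr))
print("estimated code delay:", est_delay)
```

In weak-signal conditions the same correlation is accumulated coherently and non-coherently over many code periods, which is where the sensitivity gains discussed in the paper come from.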
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sutliff, Daniel L.
2014-01-01
The Rotating Rake mode measurement system was designed to measure acoustic duct modes generated by a fan stage. Initially, the mode amplitudes and phases were quantified from a single rake measurement at one axial location. To directly measure the modes propagating in both directions within a duct, a second rake was mounted to the rotating system with an offset in both the axial and the azimuthal directions. The rotating rake data analysis technique was then extended to include the data measured by the second rake. The analysis resulted in a set of circumferential mode levels at each of the two rake microphone locations. Radial basis functions were then least-squares fit to this data to obtain the radial mode amplitudes for the modes propagating in both directions within the duct. Validation experiments have been conducted using artificial acoustic sources. Results are shown for the measurement of the standing waves in the duct from sound generated by one and two acoustic sources that are separated into the component modes propagating in both directions within the duct. Measured reflection coefficients from the open end of the duct are compared to analytical predictions.
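The directional separation step reduces to a small least-squares problem: with pressures measured at two (or more) axial stations, the forward- and backward-propagating amplitudes of a mode are the best-fit coefficients of e^(-ikz) and e^(+ikz) basis columns. A single-mode sketch with invented amplitudes and wavenumber, not the paper's full radial basis function fit:

```python
import numpy as np

k = 12.0                                  # axial wavenumber, 1/m (invented)
z = np.array([0.0, 0.05])                 # the two rake axial stations, m
A_fwd, A_bwd = 1.0 + 0.3j, 0.25 - 0.1j    # "true" directional amplitudes

# Each measured pressure is the sum of a forward (e^{-ikz}) and a
# backward (e^{+ikz}) propagating component of the same mode.
E = np.column_stack([np.exp(-1j * k * z), np.exp(1j * k * z)])
p = E @ np.array([A_fwd, A_bwd])

# Least-squares separation into the two directional amplitudes (exact here;
# with more stations or modes the same call returns the best fit).
amps, *_ = np.linalg.lstsq(E, p, rcond=None)
print("forward, backward:", np.round(amps, 3))
```

The ratio of the recovered backward to forward amplitudes gives the reflection coefficient compared against analytical predictions in the paper.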
Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections
Ziock, Klaus -Peter; Boehnen, Chris Bensing; Ernst, Joseph M.; ...
2015-09-05
Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. Here, the complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.
Measurements for the BETC in-situ combustion experiment. [Post test surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayland, J.R.; Bartel, L.C.
The Bartlesville Energy Technology Center (BETC) in situ combustion pilot project near Bartlette, Kansas, was studied using controlled-source audio-magnetotelluric (CSAMT) mapping, thermal gravimetric analysis (TGA), conventional geophysical logging, and modeling of the fireflood. Measurements of formation resistivity changes induced by in situ combustion indicate that CSAMT resistivity maps should show an increase in apparent resistivity. The substantial decrease of apparent resistivity measured within the five-spot pattern indicated a complex sequence of events. Using the results from the CSAMT surveys, the fire front was located and posttest core samples were obtained. The posttest core samples were examined using TGA techniques, and, using information from combustion tube runs as standards, the location of the fire front in the core samples from the posttest holes was inferred. Models of the reservoir in situ combustion process were developed to help analyze the field results. The combustion kinematics, when used in conjunction with the CSAMT and TGA techniques, indicated that considerable bypass of injected air occurred, with an influx of brine into previously burned zones. This experiment offered an opportunity to integrate several new techniques into a systematic study of a difficult problem.
Ravi, Logesh; Vairavasundaram, Subramaniyaswamy
2016-01-01
The rapid growth of the web and its applications has made recommender systems enormously important. Applied in various domains, recommender systems are designed to generate suggestions, such as items or services, based on user interests. However, recommender systems face many issues that reduce their effectiveness. Integrating powerful data management techniques into recommender systems can address such issues, and recommendation quality can be increased significantly. Recent research on recommender systems reveals the idea of utilizing social network data to enhance traditional recommender systems with better prediction and improved accuracy. This paper surveys recommender systems based on social network data, considering the recommendation algorithms used, system functionalities, types of interfaces, filtering techniques, and artificial intelligence techniques. By examining the objectives, methodologies, and data sources of existing models, the paper helps anyone interested in the development of travel recommendation systems and suggests future research directions. We have also proposed a location recommendation system based on the social pertinent trust walker (SPTW) and compared the results with existing baseline random walk models. We then enhanced the SPTW model to provide recommendations for groups of users. The results obtained from the experiments are presented.
Pey, Jorge; Alastuey, Andrés; Querol, Xavier
2013-07-01
PM₁₀ and PM₂.₅ chemical composition was determined at a suburban insular site in the Balearic Islands (Spain) over almost one and a half years. As a result, 200 samples with more than 50 chemical parameters analyzed were obtained. The whole database was analyzed by two receptor modelling techniques (Principal Component Analysis and Positive Matrix Factorisation) in order to identify the main PM sources. After that, regression analyses with respect to the PM mass concentrations were conducted to quantify the daily contributions of each source. Four common sources were identified by both receptor models: secondary nitrate coupled with vehicular emissions, secondary sulphate influenced by fuel-oil combustion, aged marine aerosols, and mineral dust. In addition, PCA isolated harbour emissions and a mixed anthropogenic factor containing industrial emissions, whereas PMF isolated an additional mineral factor interpreted as road dust plus harbour emissions, and a vehicular abrasion products factor. The two methodologies proved complementary; nevertheless, the PMF sources by themselves were better differentiated. Besides these receptor models, a specific methodology to quantify African dust was also applied. The combination of these three source apportionment tools allowed the identification of 8 sources, 4 of them mineral (African, regional, urban and harbour dusts). In summary, 29% of PM₁₀ was attributed to natural sources (African dust, regional dust and sea spray), whereas the proportion diminished to 11% in PM₂.₅. Furthermore, the secondary sulphate source, which accounted for about 22 and 32% of PM₁₀ and PM₂.₅ respectively, is strongly linked to the aged polluted air masses residing over the western Mediterranean in the warm period.
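The quantification step described above, regressing PM mass on the receptor-model output, can be sketched with synthetic data. The factor scores and contributions below are invented, and real PMF work needs more care (non-negativity constraints, uncertainty weighting), so this is only the regression idea:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented daily factor scores (e.g., from PCA/PMF) and "true" source
# contributions in ug/m3 per unit score.
n_days = 200
scores = rng.random((n_days, 4))
true_coeffs = np.array([8.0, 5.0, 3.0, 6.0])
pm_mass = scores @ true_coeffs + 0.3 * rng.standard_normal(n_days)

# Multilinear regression of the measured PM mass on the factor scores
# recovers each source's mass contribution per unit score.
coeffs, *_ = np.linalg.lstsq(scores, pm_mass, rcond=None)
print("recovered contributions:", coeffs.round(1))
```

Multiplying each day's scores by the recovered coefficients then yields the daily contribution of each source to the measured mass, which is how the percentages quoted above are obtained.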
Response of a grounded dielectric slab to an impulse line source using leaky modes
NASA Technical Reports Server (NTRS)
Duffy, Dean G.
1994-01-01
This paper describes how expansions in leaky (or improper) modes may be used to represent the continuous spectrum in an open radiating waveguide. The technique requires a thorough knowledge of the life history of the improper modes as they migrate from improper to proper Riemann surfaces. The method is illustrated by finding the electric field resulting from an impulsively forced current located in the free space above a grounded dielectric slab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zolotov, D. A., E-mail: zolotovden@crys.ras.ru; Buzmakov, A. V.; Elfimov, D. A.
2017-01-15
The spatial arrangement of single linear defects in a Si single crystal (input surface (111)) has been investigated by X-ray topo-tomography using laboratory X-ray sources. The experimental technique and the procedure of reconstructing a 3D image of dislocation half-loops near the Si crystal surface are described. The sizes of observed linear defects with a spatial resolution of about 10 μm are estimated.
Enhanced Imaging of Corrosion in Aircraft Structures with Reverse Geometry X-ray®
NASA Technical Reports Server (NTRS)
Winfree, William P.; Cmar-Mascis, Noreen A.; Parker, F. Raymond
2000-01-01
The application of Reverse Geometry X-ray to the detection and characterization of corrosion in aircraft structures is presented. Reverse Geometry X-ray is a unique system that utilizes an electronically scanned x-ray source and a discrete detector for real time radiographic imaging of a structure. The scanned source system has several advantages when compared to conventional radiography. First, the discrete x-ray detector can be miniaturized and easily positioned inside a complex structure (such as an aircraft wing) enabling images of each surface of the structure to be obtained separately. Second, using a measurement configuration with multiple detectors enables the simultaneous acquisition of data from several different perspectives without moving the structure or the measurement system. This provides a means for locating the position of flaws and enhances separation of features at the surface from features inside the structure. Data is presented on aircraft specimens with corrosion in the lap joint. Advanced laminographic imaging techniques utilizing data from multiple detectors are demonstrated to be capable of separating surface features from corrosion in the lap joint and locating the corrosion in multilayer structures. Results of this technique are compared to computed tomography cross sections obtained from a microfocus x-ray tomography system. A method is presented for calibration of the detectors of the Reverse Geometry X-ray system to enable quantification of the corrosion to within 2%.
Locating the source of diffusion in complex networks by time-reversal backward spreading.
Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2016-03-01
Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
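The backward-spreading idea can be sketched in a few lines: each observer backs out a hypothetical origin time from its recorded arrival time and a reverse-propagated travel time, and the candidate node whose implied origin times agree best (minimum variance) is taken as the source. The graph, observer set, and unit spreading speed below are invented toy values, not data from the paper.

```python
from collections import deque
from statistics import pvariance

def bfs_dist(adj, start):
    """Hop distances from start to every node via breadth-first search."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, arrivals):
    """Pick the node whose implied origin times, backed out from the
    observers' arrival times, are most consistent (minimum variance)."""
    best, best_score = None, float("inf")
    for s in adj:
        d = bfs_dist(adj, s)
        # Origin time implied by each observer if s were the source.
        origins = [t - d[o] for o, t in arrivals.items()]
        score = pvariance(origins)
        if score < best_score:
            best, best_score = s, score
    return best

# Toy network: a path 0-1-2-3-4, true source at node 0, unit spreading speed.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
arrivals = {1: 1, 3: 3, 4: 4}        # observed arrival times at observer nodes
print(locate_source(adj, arrivals))  # → 0
```

With too few observers the minimum-variance node need not be unique, which is the intuition behind the locatability condition discussed in the abstract.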
NASA Astrophysics Data System (ADS)
Iezzi, A. M.; Fee, D.; Matoza, R. S.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Austin, A.; Kennedy, B.; Fitzgerald, R.; Gomez, C.; Key, N.
2017-12-01
Well-constrained acoustic waveform inversion can provide robust estimates of erupted volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real-time). Previous studies have made assumptions about the multipole source mechanism, which can be represented as the combination of pressure fluctuations from a volume change, directionality, and turbulence. The vertical dipole has not been addressed due to ground-based recording limitations. In this study we deployed a high-density seismo-acoustic network around Yasur Volcano, Vanuatu, including multiple acoustic sensors along a tethered balloon that was moved every 15-60 minutes. Yasur has frequent strombolian eruptions every 1-4 minutes from any one of three active vents within a 400 m diameter crater. Our experiment captured several explosions from each vent at 38 tether locations covering 200° in azimuth and a take-off angle range of 50° (Jolly et al., in review). Additionally, FLIR, FTIR, and a variety of visual imagery were collected during the deployment to aid in the seismo-acoustic interpretations. The third dimension (vertical) of pressure sensor coverage allows us to more completely constrain the acoustic source. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3-D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a high-resolution digital elevation model created using structure-from-motion techniques. We then invert for the source location and multipole source-time function using a grid-search approach. We perform this inversion for multiple events from vents A and C to examine the source characteristics of the vents, including an infrasound-derived volume flux as a function of time. These volume fluxes are then compared to those derived independently from geochemical and seismic inversion techniques.
Jolly, A., Matoza, R., Fee, D., Kennedy, B., Iezzi, A., Fitzgerald, R., Austin, A., & Johnson, R. (in review). Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249.
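The grid-search inversion step described above can be illustrated with a minimal numerical sketch: for each candidate source location, fit a least-squares amplitude against precomputed Green's functions and keep the location with the smallest misfit. All arrays here are synthetic stand-ins, not Yasur data, and the single scalar amplitude is a simplification of the full multipole source-time function.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc, n_sens, n_t = 5, 4, 64

# Precomputed Green's functions: one waveform per (candidate location, sensor).
G = rng.standard_normal((n_loc, n_sens, n_t))

# Synthetic "observed" data: an event at location 3 with amplitude 2.5.
true_loc, true_amp = 3, 2.5
data = true_amp * G[true_loc] + 0.01 * rng.standard_normal((n_sens, n_t))

best_loc, best_misfit = None, np.inf
for loc in range(n_loc):
    g = G[loc].ravel()
    d = data.ravel()
    amp = g @ d / (g @ g)              # least-squares source amplitude
    misfit = np.sum((d - amp * g) ** 2)
    if misfit < best_misfit:
        best_loc, best_misfit = loc, misfit
print(best_loc)  # → 3
```

In the real problem the scalar amplitude is replaced by a time series per multipole term, but the outer grid search over candidate locations has the same shape.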
A reassessment of ground water flow conditions and specific yield at Borden and Cape Cod
Grimestad, Garry
2002-01-01
Recent widely accepted findings regarding the origin and nature of specific yield in unconfined aquifers rely heavily on water level changes observed during two pumping tests, one conducted at Borden, Ontario, Canada, and the other at Cape Cod, Massachusetts. The drawdown patterns observed during those tests have been taken as proof that unconfined specific yield estimates obtained from long-duration pumping tests should approach the laboratory-estimated effective porosity of representative aquifer formation samples. However, both of the original test reports included direct or referential descriptions of potential supplemental sources of pumped water that, if actually present, would have introduced intractable complications and errors into straightforward interpretations of the drawdown observations. Searches for evidence of previously neglected sources were performed by screening the original drawdown observations from both locations for signs of diagnostic skewing that should be present only if some of the extracted water was derived from sources other than main aquifer storage. The data screening was performed using error-guided, computer-assisted fitting techniques capable of accurately sensing and simulating the effects of a wide range of non-traditional and external sources. The drawdown curves from both tests proved to be inconsistent with traditional single-source pumped aquifer models but consistent with site-specific alternatives that included significant contributions of water from external sources. The corrected pumping responses shared several important features. Unsaturated drainage appears to have ceased effectively at both locations within the first day of pumping, and estimates of specific yield stabilized at levels considerably smaller than the corresponding laboratory-measured or probable effective porosity.
Separate sequential analyses of progressively later field observations gave stable and nearly constant specific yield estimates for each location, with no evidence from either test that more prolonged pumping would have induced substantially greater levels of unconfined specific yield.
NASA Technical Reports Server (NTRS)
Hass, Neal; Mizukami, Masashi; Neal, Bradford A.; St. John, Clinton; Beil, Robert J.; Griffin, Timothy P.
1999-01-01
This paper presents pertinent results and assessment of propellant feed system leak detection as applied to the Linear Aerospike SR-71 Experiment (LASRE) program flown at the NASA Dryden Flight Research Center, Edwards, California. The LASRE was a flight test of an aerospike rocket engine using liquid oxygen and high-pressure gaseous hydrogen as propellants. The flight safety of the crew and the experiment demanded proven technologies and techniques that could detect leaks and assess the integrity of hazardous propellant feed systems. Point source detection and systematic detection were used. Point source detection was adequate for catching gross leakage from components of the propellant feed systems, but insufficient for clearing LASRE to levels of acceptability. Systematic detection, which used high-resolution instrumentation to evaluate the health of the system within a closed volume, provided a better means for assessing leak hazards. Oxygen sensors detected a leak rate of approximately 0.04 cubic inches per second of liquid oxygen. Pressure sensor data suggested cryogenic boiloff through the fittings of the oxygen system, but the location of the source(s) could not be determined. Ultimately, LASRE was cancelled because leak detection techniques were unable to verify that oxygen levels could be maintained below flammability limits.
Probing quantum correlation functions through energy-absorption interferometry
NASA Astrophysics Data System (ADS)
Withington, S.; Thomas, C. N.; Goldie, D. J.
2017-08-01
An interferometric technique is described for determining the spatial forms of the individual degrees of freedom through which a many-body system can absorb energy from its environment. The method separates out the spatial forms of the coherent excitations present at any single frequency; it is not necessary to sweep the frequency and then infer the spatial forms of possible excitations from resonant absorption features. The system under test is excited with two external sources, which create generalized forces, and the fringe in the total power dissipated is measured as the relative phase between the sources is varied. If the complex fringe visibility is measured for different pairs of source locations, the anti-Hermitian part of the complex-valued nonlocal correlation tensor can be determined, which can then be decomposed to give the natural dynamical modes of the system and their relative responsivities. If each source in the interferometer creates a different kind of force, the spatial forms of the individual excitations that are responsible for cross-correlated response can be found. The technique is related to holography, but measures the state of coherence to which the system is maximally sensitive. It can be applied across a wide range of wavelengths, in a variety of ways, to homogeneous media, thin films, patterned structures, and components such as sensors, detectors, and energy-harvesting absorbers.
Apportionment of urban aerosol sources in Cork (Ireland) by synergistic measurement techniques.
Dall'Osto, Manuel; Hellebust, Stig; Healy, Robert M; O'Connor, Ian P; Kourtchev, Ivan; Sodeau, John R; Ovadnevaite, Jurgita; Ceburnis, Darius; O'Dowd, Colin D; Wenger, John C
2014-09-15
The sources of ambient fine particulate matter (PM2.5) during wintertime at a background urban location in Cork city (Ireland) have been determined. Aerosol chemical analyses were performed by multiple techniques including on-line high resolution aerosol time-of-flight mass spectrometry (Aerodyne HR-ToF-AMS), on-line single particle aerosol time-of-flight mass spectrometry (TSI ATOFMS), on-line elemental carbon-organic carbon analysis (Sunset_EC-OC), and off-line gas chromatography/mass spectrometry and ion chromatography analysis of filter samples collected at 6-h resolution. Positive matrix factorization (PMF) has been carried out to better elucidate aerosol sources not clearly identified when analyzing results from individual aerosol techniques on their own. Two datasets have been considered: on-line measurements averaged over 2-h periods, and both on-line and off-line measurements averaged over 6-h periods. Five aerosol sources were identified by PMF in both datasets, with excellent agreement between the two solutions: (1) regional domestic solid fuel burning--"DSF_Regional," 24-27%; (2) local urban domestic solid fuel burning--"DSF_Urban," 22-23%; (3) road vehicle emissions--"Traffic," 15-20%; (4) secondary aerosols from regional anthropogenic sources--"SA_Regional" 9-13%; and (5) secondary aged/processed aerosols related to urban anthropogenic sources--"SA_Urban," 21-26%. The results indicate that, despite regulations for restricting the use of smoky fuels, solid fuel burning is the major source (46-50%) of PM2.5 in wintertime in Cork, and also likely other areas of Ireland. Whilst wood combustion is strongly associated with OC and EC, it was found that peat and coal combustion is linked mainly with OC and the aerosol from these latter sources appears to be more volatile than that produced by wood combustion. Ship emissions from the nearby port were found to be mixed with the SA_Regional factor. 
The PMF analysis allowed us to link the AMS cooking organic aerosol factor (AMS_PMF_COA) to oxidized organic aerosol, chloride and locally produced nitrate, indicating that AMS_PMF_COA cannot be attributed to primary cooking emissions only. Overall, there are clear benefits from factor analysis applied to results obtained from multiple techniques, which allows better association of aerosols with sources and atmospheric processes. Copyright © 2014 Elsevier B.V. All rights reserved.
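The factor-analysis step above rests on decomposing a samples-by-species data matrix into nonnegative source contributions and source profiles. A minimal sketch using plain nonnegative matrix factorization follows; note that true PMF additionally weights each data point by its measurement uncertainty, which this toy multiplicative-update version omits, and all species counts and values are invented.

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Multiplicative-update nonnegative factorization X ~ W @ H,
    where W holds source contributions and H holds source profiles."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Two synthetic "sources" mixed across 48 six-hour samples of 8 species.
rng = np.random.default_rng(1)
profiles = np.abs(rng.random((2, 8)))
contrib = np.abs(rng.random((48, 2)))
X = contrib @ profiles

W, H = nmf(X, 2)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # small relative residual
```

In a real PMF run the number of factors (five in the study) is chosen by examining residuals and the physical interpretability of the recovered profiles.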
NASA Technical Reports Server (NTRS)
King, R. B.; Fordyce, J. S.; Antoine, A. C.; Leibecki, H. F.; Neustadter, H. E.; Sidik, S. M.; Burr, J. C.; Craig, G. T.; Cornett, C. L.
1974-01-01
Beginning in 1971, a cooperative program has been carried out by the City of Cleveland Division of Air Pollution Control and the NASA Lewis Research Center to study trace element and compound concentrations in the ambient suspended particulate matter in Cleveland, Ohio, as a function of source, monitoring location, and meteorological conditions. The major objectives were to determine the ambient concentration levels at representative urban sites and to develop a technique using trace element and compound data in conjunction with meteorological conditions to identify specific pollution sources, one that could be developed into a practical system readily utilized by an enforcement agency.
Analysis of medieval limestone sculpture from southwestern France and the Paris Basin by NAA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmes, L.; Harbottle, G.
1994-12-31
Compositional characterization of limestone from sources known to medieval craftsmen and from the monuments they built can be used in conjunction with stylistic and iconographic criteria to infer the geographic origin of sculptures that have lost their histories. Limestone from 47 quarrying locations in France and from numerous medieval monuments has been subjected to neutron activation analysis (NAA) to form the nucleus of the Brookhaven Limestone Database. Even though the method and techniques of NAA are well established, this paper briefly summarizes the parameters and experimental conditions useful for determining those concentration variables for which limestone from different sources exhibits significant and reproducible differences.
Takaki, Yasuhiro; Hayashi, Yuki
2008-07-01
The narrow viewing zone angle is one of the problems associated with electronic holography. We propose a technique that enables the ratio of the horizontal and vertical resolutions of a spatial light modulator (SLM) to be altered. This technique increases the horizontal resolution of an SLM several times, so that the horizontal viewing zone angle is also increased several times. An SLM illuminated by a slanted point light source array is imaged by a 4f imaging system in which a horizontal slit is located on the Fourier plane. We show that the horizontal resolution was increased four times and that the horizontal viewing zone angle was increased approximately four times.
Propagating Neural Source Revealed by Doppler Shift of Population Spiking Frequency
Zhang, Mingming; Shivacharan, Rajat S.; Chiang, Chia-Chu; Gonzalez-Reyes, Luis E.
2016-01-01
Electrical activity in the brain during normal and abnormal function is associated with propagating waves of various speeds and directions. It is unclear how both fast and slow traveling waves with sometimes opposite directions can coexist in the same neural tissue. By recording population spikes simultaneously throughout the unfolded rodent hippocampus with a penetrating microelectrode array, we have shown that fast and slow waves are causally related: a slowly moving neural source generates fast-propagating waves at ∼0.12 m/s. The source of the fast population spikes is limited in space and moves at ∼0.016 m/s, based on both direct and Doppler measurements of 36 different spiking trains in eight different hippocampi. The fact that the source is itself moving can account for the surprising direction reversal of the wave. Therefore, these results indicate that a small neural focus can move and that this phenomenon could explain the apparent wave reflection at tissue edges or multiple foci observed at different locations in neural tissue. SIGNIFICANCE STATEMENT The use of novel techniques with an unfolded hippocampus and penetrating microelectrode array to record and analyze neural activity has revealed the existence of a source of neural signals that propagates throughout the hippocampus. The source itself is electrically silent, but its location can be inferred by building isochrone maps of the population spikes that the source generates. The movement of the source can also be tracked by observing the Doppler frequency shift of these spikes. These results have general implications for how neural signals are generated and propagated in the hippocampus; moreover, they have important implications for the understanding of seizure generation and foci localization. PMID:27013678
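The Doppler reasoning above can be made concrete with the standard moving-source relation, f_obs = f0·c/(c ∓ v) for a source approaching or receding from the observer, using the wave and source speeds reported in the abstract; the intrinsic rate f0 below is an arbitrary illustrative value, not a measured spiking rate.

```python
# Doppler relation for a moving source:
#   approaching observer: f_obs = f0 * c / (c - v)
#   receding from observer: f_obs = f0 * c / (c + v)
c = 0.12    # wave propagation speed, m/s (from the study)
v = 0.016   # source speed, m/s (from the study)
f0 = 1.0    # hypothetical intrinsic spiking rate, arbitrary units

f_toward = f0 * c / (c - v)
f_away = f0 * c / (c + v)
print(round(f_toward, 3), round(f_away, 3))  # → 1.154 0.882
```

The asymmetry between the up-shifted and down-shifted rates is what lets the source speed be estimated from spike-frequency measurements on either side of the moving focus.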
3D synthetic aperture for controlled-source electromagnetics
NASA Astrophysics Data System (ADS)
Knaak, Allison
Locating hydrocarbon reservoirs has become more challenging with smaller, deeper or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM) is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low spatial resolution, and depth of the reservoir. To reduce the impact of complicated settings and improve the detection capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method to find the optimal weights for synthetic aperture arrays that adapts to the information in the CSEM data. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture responses amplify the anomaly from the reservoir, lower the noise floor, and reduce noise streaks in noisy CSEM responses from sources offset kilometers from the receivers.
Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets correctly, which allows use of the method in locations where the subsurface models are built from only estimates. In addition to the technical work in this thesis, I explore the interface between science, government, and society by examining the controversy over hydraulic fracturing and by suggesting a process to aid the debate and possibly other future controversies.
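The steering described above can be sketched for a simplified plane-wave case: one phase weight per source aligns the individual responses before stacking, so the virtual array favors a chosen direction. Real CSEM fields are diffusive, with a complex wavenumber and amplitude as well as phase weighting, so this is only the skeleton of the idea; the positions, wavenumber, and responses below are invented.

```python
import numpy as np

x = np.linspace(0.0, 900.0, 10)   # inline source positions along the tow line, m
kx = 0.002                        # apparent wavenumber, illustrative value
F = np.exp(1j * kx * x)           # stand-in complex single-source responses

def steered(F, x, k, sin_theta):
    """Synthetic-aperture response: a steering phase per source aligns the
    contributions toward the chosen direction before summing."""
    return np.sum(np.exp(-1j * k * x * sin_theta) * F)

print(abs(steered(F, x, kx, 1.0)))  # phases aligned: fully coherent sum of 10
print(abs(steered(F, x, kx, 0.0)))  # unweighted stack: partial cancellation
```

The optimization described in the thesis generalizes this by searching over the weights themselves rather than fixing them to a single steering phase.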
Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses
NASA Astrophysics Data System (ADS)
Wong, Stephen T. C.; Knowlton, Robert C.; Hoo, Kent S.; Huang, H. K.
1995-05-01
Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, in order to improve clinical decision making, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the noninvasive presurgical evaluation of epilepsy patients. These techniques include online access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of structural and functional information contained in the registered images. For illustration, we describe the use of these techniques in a patient case of nonlesional neocortical epilepsy. We also present our future work based on preliminary studies.
Time reversal imaging and cross-correlations techniques by normal mode theory
NASA Astrophysics Data System (ADS)
Montagner, J.; Fink, M.; Capdeville, Y.; Phung, H.; Larmat, C.
2007-12-01
Time-reversal methods were successfully applied in the past to acoustic waves in many fields, such as medical imaging, underwater acoustics, and non-destructive testing, and recently to seismic waves in seismology for earthquake imaging. The increasing power of computers and numerical methods (such as spectral element methods) enables one to simulate more and more accurately the propagation of seismic waves in heterogeneous media and to develop new applications, in particular time reversal in the three-dimensional Earth. Generalizing the scalar approach of Draeger and Fink (1999), the theoretical understanding of the time-reversal method can be addressed for the 3D elastic Earth by using normal mode theory. It is shown how to relate time-reversal methods, on the one hand, with auto-correlation of seismograms for source imaging and, on the other hand, with cross-correlation between receivers for structural imaging and retrieving the Green function. The loss of information will be discussed. In the case of source imaging, automatic location in time and space of earthquakes and unknown sources is obtained by the time-reversal technique. In the case of big earthquakes such as the Sumatra-Andaman earthquake of December 2004, we were able to reconstruct the spatio-temporal history of the rupture. We present here some new applications of these techniques at the global scale on synthetic tests and on real data.
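The receiver-to-receiver cross-correlation idea can be demonstrated with a toy two-receiver example: correlating a common noise wavefield recorded at two points produces a peak at the inter-receiver travel time, which is the essence of Green's-function retrieval. The sample count and lag below are arbitrary, and the circular shift stands in for true propagation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
noise = rng.standard_normal(n)   # common ambient-noise wavefield

lag_true = 25                    # inter-receiver travel time, in samples
u1 = noise
u2 = np.roll(noise, lag_true)    # same wavefield arriving 25 samples later

# Cross-correlation between the two receivers: its peak sits at the
# travel time between them.
xc = np.correlate(u2, u1, mode="full")
lags = np.arange(-n + 1, n)
print(lags[np.argmax(xc)])  # → 25
```

In practice long records, spectral whitening, and averaging over many noise windows are needed before the correlation converges toward the inter-station Green function.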
How Seismology can help to know the origin of gases at Lastarria Volcano, Chile-Argentina?
NASA Astrophysics Data System (ADS)
Legrand, Denis; Spica, Zack; Iglesias, Arturo; Walter, Thomas; Heimann, Sebastian; Dahm, Torsten; Froger, Jean-Luc; Remy, Dominique; Bonvalot, Sylvain; West, Michael; Pardo, Mario
2015-04-01
Gases at Lastarria volcano have a double origin, hydrothermal and magmatic, as revealed by geochemical analysis. Nevertheless, the exact location (especially the depth) of degassing is not well known. We show here how seismology may help to answer this question. Hydrothermal and magmatic reservoirs have been revealed by a 3-D high-resolution S-wave velocity tomography deduced from an ambient seismic noise technique at Lazufre (an acronym for Lastarria and Cordón del Azufre), one of the largest volcanic uplifts worldwide, both in space and amplitude, located in the Altiplano-Puna Plateau in the central Andes (Chile, Argentina). Past deformation data (InSAR and GPS) and geochemical gas analysis showed a double uplift region and a double hydrothermal/magmatic source, respectively. Nevertheless, the location and shape of these sources were not well defined. In this study, we define them better using seismological data. Three very low S-wave velocity zones are identified. Two of them (with S-wave velocities of about 1.2-1.3 km/s) are located below the Lastarria volcano. One is located between 0 and 1 km below its base. It has a funnel-like shape, and suggests a hydrothermal reservoir. The other one is located between 3 and 6 km depth. Its dyke-like shape and depth suggest a magma reservoir that is supposed to feed the shallow hydrothermal system. This double hydrothermal and magmatic source is in agreement with the double origin found by previous geochemical and magneto-telluric studies. Both anomalies can explain the small uplift deformation of about 1 cm/yr deduced from InSAR data at Lastarria volcano. The third low-velocity zone (with an S-wave velocity of about 2.7 km/s), located below 6 km depth, lies beneath the center of the main uplift deformation of about 3 cm/yr in the Lazufre zone. We suggest it is the top of a large magma chamber that has been previously modeled from InSAR/GPS data to explain this uplift.
We show here for the first time the exact geometry and location of the hydrothermal and magmatic reservoirs at the Lazufre volcanic area, helping to understand the origin of one of the largest uplifts worldwide, revealed by past InSAR/GPS, magneto-telluric and geochemical data.
Examination system utilizing ionizing radiation and a flexible, miniature radiation detector probe
Majewski, S.; Kross, B.J.; Zorn, C.J.; Majewski, L.A.
1996-10-22
An optimized examination system and method based on the Reverse Geometry X-Ray™ (RGX™) radiography technique are presented. The examination system comprises a radiation source, at least one flexible, miniature radiation detector probe positioned in appropriate proximity to the object to be examined and to the radiation source with the object located between the source and the probe, a photodetector device attachable to an end of the miniature radiation probe, and a control unit integrated with a display device connected to the photodetector device. The miniature radiation detector probe comprises a scintillation element, a flexible light guide having a first end optically coupled to the scintillation element and having a second end attachable to the photodetector device, and an opaque, environmentally-resistant sheath surrounding the flexible light guide. The probe may be portable and insertable, or may be fixed in place within the object to be examined. An enclosed, flexible, liquid light guide is also presented, which comprises a thin-walled flexible tube, a liquid, preferably mineral oil, contained within the tube, a scintillation element located at a first end of the tube, closures located at both ends of the tube, and an opaque, environmentally-resistant sheath surrounding the flexible tube. The examination system and method have applications in non-destructive material testing for voids, cracks, and corrosion, and may be used in areas containing hazardous materials. In addition, the system and method have applications for medical and dental imaging. 5 figs.
Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.
2017-01-01
Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early-stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources, without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions for a poor-SNR condition similar to the SPMR detection sensitivity, on the order of 1000 cells.
We believe such algorithms will help establish industrial standards for SPMR when the technique is applied in pre-clinical and clinical settings. PMID:28072579
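The multi-start strategy described above can be sketched generically: run a local minimizer from several starting guesses and keep the best result, so no user-supplied initial dipole is needed. The one-parameter multimodal cost below is an invented stand-in for the reduced chi-square dipole-fit cost, and the crude numerical-gradient descent stands in for whatever local optimizer is actually used.

```python
import numpy as np

def chi2(p):
    """Stand-in for the dipole-fit cost: local minima near each multiple
    of pi, with the global minimum near p = pi."""
    return (p - 3.0) ** 2 / 10.0 + np.sin(p) ** 2

def local_descent(f, p, step=1e-2, iters=5000):
    """Crude gradient descent using a central-difference derivative."""
    for _ in range(iters):
        g = (f(p + 1e-6) - f(p - 1e-6)) / 2e-6
        p -= step * g
    return p

# Multi-start: launch the local search from several initial guesses and
# keep the best, so different basins of attraction all get explored.
starts = [-6.0, -3.0, 0.5, 4.0, 7.0]
candidates = [local_descent(chi2, s) for s in starts]
best = min(candidates, key=chi2)
print(round(best, 2))  # lands in the global basin, near pi
```

A single-start fit from, say, 0.5 would settle in a shallower local minimum; running all starts and keeping the lowest cost is what makes the result independent of the user's initial guess.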
Building Quakes: Detection of Weld Fractures in Buildings using High-Frequency Seismic Techniques
NASA Astrophysics Data System (ADS)
Heckman, V.; Kohler, M. D.; Heaton, T. H.
2009-12-01
Catastrophic fracture of welded beam-column connections in buildings was observed in the Northridge and Kobe earthquakes. Despite the structural importance of such connections, it can be difficult to locate damage in structural members underneath superficial building features. We have developed a novel technique to locate fracturing welds in buildings in real time using high-frequency information from seismograms. Numerical and experimental methods were used to investigate an approach for detecting the brittle fracture of welds of beam-column connections in instrumented steel moment-frame buildings through the use of time-reversed Green’s functions and wave propagation reciprocity. The approach makes use of a prerecorded catalogue of Green’s functions for an instrumented building to detect high-frequency failure events in the building during a later earthquake by screening continuous data for the presence of one or more of the events. This was explored experimentally by comparing structural responses of a small-scale laboratory structure under a variety of loading conditions. Experimentation was conducted on a polyvinyl chloride frame model structure with data recorded at a sample rate of 2000 Hz using piezoelectric accelerometers and a 24-bit digitizer. Green’s functions were obtained by applying impulsive force loads at various locations along the structure with a rubber-tipped force transducer hammer. We performed a blind test using cross-correlation techniques to determine if it was possible to use the catalogue of Green’s functions to pinpoint the absolute times and locations of subsequent, induced failure events in the structure. A finite-element method was used to simulate the response of the model structure to various source mechanisms in order to determine the types of elastic waves that were produced as well as to obtain a general understanding of the structural response to localized loading and fracture.
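The screening step described above, matching continuous records against a prerecorded catalogue of Green's functions, is essentially matched filtering: cross-correlate the record with each template and look for a peak. A minimal sketch with an invented template and noise record (the 2000 Hz rate echoes the experiment; everything else is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 2000  # sample rate used in the experiment, Hz

# Hypothetical catalogued waveform: a short windowed tone burst.
template = np.sin(2 * np.pi * 200 * np.arange(200) / fs) * np.hanning(200)

record = 0.3 * rng.standard_normal(6000)   # continuous noisy record
onset = 3500                               # hidden failure-event onset sample
record[onset:onset + 200] += template

# Screen the record for the catalogued waveform: the cross-correlation
# peaks at the event's absolute onset time.
xc = np.correlate(record, template, mode="valid")
print(int(np.argmax(xc)))  # → 3500
```

With one template per fracture location, the template that correlates most strongly identifies where the weld failed, and the peak position gives when.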
Automated strip-mine and reclamation mapping from ERTS
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Reed, L. E.; Pettyjohn, W. A.
1974-01-01
The author has identified the following significant results. Computer processing techniques were applied to ERTS-1 computer-compatible tape (CCT) data acquired in August 1972 on the Ohio Power Company's coal mining operation in Muskingum County, Ohio. Processing results succeeded in automatically classifying, with an accuracy greater than 90%: (1) stripped earth and major sources of erosion; (2) partially reclaimed areas and minor sources of erosion; (3) water with sedimentation; (4) water without sedimentation; and (5) vegetation. Computer-generated tables listing the area in acres and square kilometers were produced for each target category. Processing results also included geometrically corrected map overlays, one for each target category, drawn on a transparent material by a pen under computer control. Each target category is assigned a distinctive color on the overlay to facilitate interpretation. The overlays, drawn at a scale of 1:250,000 when placed over an AMS map of the same area, immediately provided map locations for each target. These mapping products were generated at a tenth of the cost of conventional mapping techniques.
Ford Motor Company NDE facility shielding design.
Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H
2005-01-01
Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation doses for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations and locations where multiple sources (e.g. tube head leakage and various scatter fields) impacted doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP: A General Monte Carlo N-Particle Transport Code, Version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.
NASA Technical Reports Server (NTRS)
Deguchi, Shuji; Watson, William D.
1988-01-01
Statistical methods are developed for gravitational lensing in order to obtain analytic expressions for the average surface brightness that include the effects of microlensing by stellar (or other compact) masses within the lensing galaxy. The primary advance here is in utilizing a Markoff technique to obtain expressions that are valid for sources of finite size when the surface density of mass in the lensing galaxy is large. The finite size of the source is probably the key consideration for the occurrence of microlensing by individual stars. For the intensity from a particular location, the parameter which governs the importance of microlensing is determined. Statistical methods are also formulated to assess the time variation of the surface brightness due to the random motion of the masses that cause the microlensing.
The Chandra Source Catalog 2.0: the Galactic center region
NASA Astrophysics Data System (ADS)
Civano, Francesca Maria; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The second release of the Chandra Source Catalog (CSC 2.0) comprises all 10,382 ACIS and HRC-I imaging observations taken by Chandra and released publicly through the end of 2014. Among these, 534 single observations surrounding the Galactic center are included, covering a total area of ~19 deg^2 and a total exposure time of ~9 Ms. The 534 single observations were merged into 379 stacks (overlapping observations with aim-points within 60") to increase the flux limit for source detection purposes. Thanks to the combination of the point source detection algorithm with the maximum likelihood technique used to assess the source significance, ~21,000 detections are listed in the CSC 2.0 for this field alone, 80% of which are unique sources. The central region of this field around the Sgr A* location has the deepest exposure of 2.2 Ms and the highest source density, with ~5000 sources. In this poster, we present details about this region, including source distribution and density, coverage, and exposure. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweeton, D.R.; Hanson, J.C.; Friedel, M.J.
1994-01-01
The U.S. Bureau of Mines, the University of Arizona, Sandia National Laboratory, and Zonge Engineering and Research, Inc., conducted cooperative field tests of six electromagnetic geophysical methods to compare their effectiveness in locating a brine solution simulating in situ leach solution or a high-conductivity plume of contamination. The brine was approximately 160 meters below the surface. The test site was the University's San Xavier experimental mine near Tucson, Arizona. Geophysical surveys using surface and surface-borehole time-domain electromagnetics (TEM), surface controlled source audio-frequency magnetotellurics (CSAMT), surface-borehole frequency-domain electromagnetics (FEM), crosshole FEM and surface magnetic field ellipticity were conducted before and during brine injection.
Lessons Learned from OMI Observations of Point Source SO2 Pollution
NASA Technical Reports Server (NTRS)
Krotkov, N.; Fioletov, V.; McLinden, Chris
2011-01-01
The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see a source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions, or "fingerprints", of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.
2002-01-01
To locate noise sources in high-speed jets, the sound pressure fluctuations p', measured at far-field locations, were correlated with each of the radial velocity v, density ρ, and ρv² fluctuations measured at various points in jet plumes. The experiments follow the cause-and-effect method of sound source identification, where
The acoustics of ducted propellers
NASA Astrophysics Data System (ADS)
Ali, Sherif F.
The return of the propeller to long-haul commercial service may be rapidly approaching in the form of advanced "prop fans". It is believed that the advanced turboprop will considerably reduce operational costs. However, such aircraft will come into general use only if their noise levels meet the standards of community acceptability currently applied to existing aircraft. In this work a time-marching boundary-element technique is developed and used to study the acoustics of ducted propellers. The numerical technique developed in this work eliminates the inherent instability suffered by conventional approaches. The methodology is validated against other numerical and analytical results. The results show excellent agreement with the analytical solution and show no indication of unstable behavior. For the ducted propeller problem, the propeller is modeled by rotating source-sink pairs, and the duct is modeled by a rigid annular body of elliptical cross-section. Using the model and the developed technique, the effect of different parameters on the acoustic field is predicted and analyzed. This includes the effect of duct length, propeller axial location, and source Mach number. The results of this study show that installing a short duct around the propeller can reduce the noise that reaches an observer on a sideline.
Hansen, Scott K.; Vesselinov, Velimir Valentinov
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
Tracking magma volume recovery at Okmok volcano using GPS and an unscented Kalman filter
Fournier, T.; Freymueller, Jeffrey T.; Cervelli, Peter
2009-01-01
Changes beneath a volcano can be observed through position changes in a GPS network, but distinguishing the source of site motion is not always straightforward. The records of continuous GPS sites provide a favorable data set for tracking magma migration. Dense campaign observations usually provide a better spatial picture of the overall deformation field, at the expense of an episodic temporal record. Combining these observations provides the best of both worlds. A Kalman filter provides a means for integrating discrete and continuous measurements and for interpreting subtle signals. The unscented Kalman filter (UKF) is a nonlinear method for time-dependent observations. We demonstrate the application of this technique to deformation data by applying it to GPS data collected at Okmok volcano. Seven years of GPS observations at Okmok are analyzed using a Mogi source model and the UKF. The deformation source at Okmok is relatively stable at 2.5 km depth below sea level, located beneath the center of the caldera, which means the surface deformation is caused by changes in the strength of the source. During the 7 years of GPS observations more than 0.5 m of uplift has occurred, the majority of it during the period January 2003 to July 2004. The total volume recovery at Okmok since the last eruption in 1997 is ~60-80%. The UKF allows us to solve simultaneously for the time-dependence of the source strength and for the location without a priori information about the source. © 2009 by the American Geophysical Union.
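The Mogi source named in this abstract is a point pressure source in an elastic half-space with a closed-form expression for surface displacement, which is the forward model a UKF would repeatedly evaluate in such a workflow. The sketch below is illustrative, not the authors' code; the function name and the default Poisson ratio of 0.25 are assumptions.

```python
import numpy as np

def mogi_displacement(x, y, xs, ys, depth, dV, nu=0.25):
    """Surface displacement from a Mogi (point pressure) source in a
    homogeneous elastic half-space. x, y: observation coordinates (m);
    (xs, ys, depth): source location (m, depth positive down);
    dV: volume change (m^3). Returns (ux, uy, uz) in metres."""
    dx, dy = x - xs, y - ys
    r2 = dx**2 + dy**2
    R3 = (r2 + depth**2) ** 1.5          # cube of distance to the source
    c = (1.0 - nu) * dV / np.pi
    ux = c * dx / R3
    uy = c * dy / R3
    uz = c * depth / R3                  # uplift peaks directly above source
    return ux, uy, uz
```

A UKF wraps a forward model like this by propagating sigma points of the state (source position and strength) through it and comparing the predictions with GPS displacements.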
Struk, S; Schaff, J-B; Qassemyar, Q
2018-04-01
The medial sural artery perforator (MSAP) flap is defined as a thin cutaneo-adipose perforator flap harvested on the medial aspect of the leg. The aims of this study were to describe the anatomical basis as well as the surgical technique and to discuss the indications in head and neck reconstructive surgery. We harvested 10 MSAP flaps on 5 fresh cadavers. For each case, the number and the location of the perforators were recorded. For each flap, the length of the pedicle, the diameter of the source vessels and the thickness of the flap were studied. Finally, we performed a clinical application of an MSAP flap. A total of 23 perforators with a diameter greater than 1 mm were dissected on 10 legs. The medial sural artery provided between 1 and 4 musculocutaneous perforators. Perforators were located on average 10.3 ± 2 cm from the popliteal fossa and 3.6 ± 1 cm from the median line of the calf. The mean pedicle length was 12.1 ± 2.5 cm. At its origin, the source artery diameter was 1.8 ± 0.25 mm and the source vein diameters averaged 2.45 ± 0.9 mm. There were no complications in our clinical application. This study confirms the reliability of previous anatomical descriptions of the medial sural artery perforator flap. This flap has been reported as thin and particularly well adapted for oral cavity reconstruction and for facial or limb resurfacing. Sequelae might be reduced compared to those of the radial forearm flap, with comparable results. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Elastic Velocity Updating through Image-Domain Tomographic Inversion of Passive Seismic Data
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2014-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, wastewater disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which depends greatly on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that are often not available for the low-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits images of the earthquake source using various imaging conditions based upon the P- and S-wavefield data. We generate image volumes by back-propagating data through initial models and then applying a correlation-based imaging condition. We use the P-wavefield autocorrelation, S-wavefield autocorrelation, and P-S wavefield cross-correlation images. Inconsistencies in the images form the residuals, which are used to update the P- and S-wave velocity models through adjoint-state tomography. Because the image volumes are constructed from all trace data, the signal-to-noise ratio in this space is increased compared to the individual traces. Moreover, it eliminates the need for picking and does not require any estimate of the source location and timing. Initial tests show that with a reasonable source distribution and acquisition array, velocity anomalies can be recovered. Future tests will apply this methodology at other scales, from laboratory to global.
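The correlation-based imaging condition described above can be shown schematically: back-propagated wavefields are multiplied and summed over time, and a mismatch between the resulting image foci provides a residual. The sketch below is illustrative only; the paper's implementation uses full elastic wave-equation propagation, and the function and variable names here are assumptions.

```python
import numpy as np

def image_and_residual(p_wf, s_wf):
    """Zero-lag correlation images from two back-propagated wavefields
    of shape (nt, ny, nx), plus a crude inconsistency measure. If the
    velocity models are correct, both images focus at the source and
    the residual (distance between image peaks) is small."""
    img_pp = np.sum(p_wf * p_wf, axis=0)   # P autocorrelation image
    img_ss = np.sum(s_wf * s_wf, axis=0)   # S autocorrelation image
    img_ps = np.sum(p_wf * s_wf, axis=0)   # P-S cross-correlation image

    def peak(img):
        return np.array(np.unravel_index(np.argmax(img), img.shape))

    # Grid-cell distance between the P and S image foci.
    residual = float(np.hypot(*(peak(img_pp) - peak(img_ss))))
    return img_pp, img_ss, img_ps, residual
```

In the actual method the residual is formed from the image volumes themselves and drives an adjoint-state velocity update; the peak-distance measure here is only a stand-in to convey the idea.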
[Wet deposition of atmospheric nitrogen in Jiulong River Watershed].
Chen, Neng-Wang; Hong, Hua-Sheng; Zhang, Luo-Ping
2008-01-01
Spatio-temporal distributions and sources of atmospheric nitrogen (N) in precipitation were examined for the Jiulong River Watershed (JRW), an agriculture-dominated watershed located in southeastern China with a drainage area of 1.47 × 10^4 km^2. During 2004-2005, 847 rain samples were collected at seventeen sites and analyzed for ammonium N, nitrate N and dissolved total N (DTN) following filtration through 0.45 µm Nucleopore membranes. Atmospheric N deposition flux was calculated using GIS interpolation techniques (universal kriging for precipitation, inverse distance weighting for N) based on measured N values and precipitation data from eight weather stations located in the JRW. ArcView GIS 3.2 was used for surface analysis, interpolation and statistical work. Mean DTN concentration across sites ranged between 2.20 ± 1.69 and 3.26 ± 1.37 mg·L^-1. Ammonium, nitrate and dissolved organic N formed 39%, 25% and 36% of DTN, respectively. N concentration decreased with precipitation intensity as a result of dilution, and showed a significant difference between dry season and wet season. The low nitrate isotope values, δ15N ranging between -7.48‰ and -0.27‰ (mean: -3.61‰), indicated that increasing agricultural and soil emissions together with fossil fuel combustion contributed to atmospheric nitrate sources. The annual wet deposition of atmospheric N amounted to 9.9 kg·hm^-2, which accounts for 66% of the total atmospheric N deposition flux (14.9 kg·hm^-2). About 91% of wet atmospheric deposition occurred in spring and summer. The spatio-temporal variation of atmospheric N deposition indicated that intensive precipitation, higher ammonia volatilization from fertilizer application in the growing season, and livestock production together provided the larger N sources.
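The inverse-distance-weighting step named in the abstract can be sketched in a few lines. This is a generic IDW interpolator, not the study's GIS workflow (which used ArcView); the function name and the default power of 2 are assumptions.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at a query location.
    points: (n, 2) station coordinates; values: (n,) measurements
    (e.g. N concentrations); query: (2,) location to estimate."""
    pts = np.asarray(points, float)
    d = np.sqrt(((pts - np.asarray(query, float)) ** 2).sum(axis=1))
    if np.any(d < eps):                      # query coincides with a station
        return float(np.asarray(values, float)[d.argmin()])
    w = 1.0 / d ** power                     # closer stations weigh more
    return float(np.dot(w, values) / w.sum())
```

Evaluating this on a regular grid of query points yields the interpolated deposition surface that a GIS package would produce.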
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA account for only a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40 mm. Such location errors could be reduced to less than 17 mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Time-resolved acoustic emission tomography in the laboratory: tracking localised damage in rocks
NASA Astrophysics Data System (ADS)
Brantut, N.
2017-12-01
Over the past three decades, there have been tremendous technological developments in laboratory equipment and studies using acoustic emission and ultrasonic monitoring of rock samples during deformation. Using relatively standard seismological techniques, acoustic emissions can be detected, located in space and time, and source mechanisms can be obtained. In parallel, ultrasonic velocities can be measured routinely using standard pulse-receiver techniques. Despite these major developments, current acoustic emission and ultrasonic monitoring systems are typically used separately, and the poor spatial coverage of acoustic transducers precludes performing active 3D tomography in typical laboratory settings. Here, I present an algorithm and software package that uses both passive acoustic emission data and active ultrasonic measurements to determine acoustic emission locations together with the 3D, anisotropic P-wave structure of rock samples during deformation. The technique is analogous to local earthquake tomography, but tailored to the specificities of small-scale laboratory tests. The fast marching method is employed to compute the forward problem. The acoustic emission locations and the anisotropic P-wave field are jointly inverted using the quasi-Newton method. The method is used to track the propagation of compaction bands in a porous sandstone deformed in the ductile, cataclastic flow regime under triaxial stress conditions. Near the yield point, a compaction front forms at one end of the sample and slowly progresses towards the other end. The front is illuminated by clusters of acoustic emissions, and leaves behind a heavily damaged material where the P-wave speed has dropped by up to 20%. The technique opens new possibilities for tracking in-situ strain localisation and damage around laboratory faults, and preliminary results on quasi-static rupture in granite will be presented.
Vender, John; Waller, Jennifer; Dhandapani, Krishnan; McDonnell, Dennis
2011-08-01
Intracranial pressure measurements have become one of the mainstays of traumatic brain injury management. Various technologies exist to monitor intracranial pressure from a variety of locations. Transducers are usually placed to assess pressure in the brain parenchyma and the intra-ventricular fluid, which are the two most widely accepted compartmental monitoring sites. The individual reliability and inter-reliability of these devices with and without cerebrospinal fluid diversion is not clear. The predictive capability of monitors in both of these sites to local, regional, and global changes also needs further clarification. The technique of monitoring intraventricular pressure with a fluid-coupled transducer system is also reviewed. There has been little investigation into the relationship among pressure measurements obtained from these two sources using these three techniques. Eleven consecutive patients with severe, closed traumatic brain injury not requiring intracranial mass lesion evacuation were admitted into this prospective study. Each patient underwent placement of a parenchymal and intraventricular pressure monitor. The ventricular catheter tubing was also connected to a sensor for fluid-coupled measurement. Pressure from all three sources was measured hourly with and without ventricular drainage. Statistically significant correlation within each monitoring site was seen. No monitoring location was more predictive of global pressure changes or more responsive to pressure changes related to patient stimulation. However, the intraventricular pressure measurements were not reliable in the presence of cerebrospinal fluid drainage whereas the parenchymal measurements remained unaffected. Intraparenchymal pressure monitoring provides equivalent, statistically similar pressure measurements when compared to intraventricular monitors in all care and clinical settings. This is particularly valuable when uninterrupted cerebrospinal fluid drainage is desirable.
Small Hot Jet Acoustic Rig Validation
NASA Technical Reports Server (NTRS)
Brown, Cliff; Bridges, James
2006-01-01
The Small Hot Jet Acoustic Rig (SHJAR), located in the Aeroacoustic Propulsion Laboratory (AAPL) at the NASA Glenn Research Center in Cleveland, Ohio, was commissioned in 2001 to test jet noise reduction concepts at low technology readiness levels (TRL 1-3) and develop advanced measurement techniques. The first series of tests on the SHJAR were designed to prove its capabilities and establish the quality of the jet noise data produced. Towards this goal, a methodology was employed dividing all noise sources into three categories: background noise, jet noise, and rig noise. Background noise was directly measured. Jet noise and rig noise were separated by using the distance and velocity scaling properties of jet noise. Effectively, any noise source that did not follow these rules of jet noise was labeled as rig noise. This method led to the identification of a high frequency noise source related to the Reynolds number. Experiments using boundary layer treatment and hot wire probes documented this noise source and its removal, allowing clean testing of low Reynolds number jets. Other tests performed characterized the amplitude and frequency of the valve noise, confirmed the location of the acoustic far field, and documented the background noise levels under several conditions. Finally, a full set of baseline data was acquired. This paper contains the methodology and test results used to verify the quality of the SHJAR rig.
Time Reversal Imaging, Inverse Problems and Adjoint Tomography
NASA Astrophysics Data System (ADS)
Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.
2010-12-01
With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately, and new applications are being developed, in particular time reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities have been found between time-reversal methods, cross-correlation techniques, inverse problems and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, in order to understand the time-reversal method theoretically on the global scale. It is shown how to relate time-reversal methods, on the one hand, with auto-correlations of seismograms for source imaging and, on the other hand, with cross-correlations between receivers for structural imaging and retrieving the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics and non-destructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time reversal techniques make possible automatic location in time and space, as well as retrieval of the focal mechanisms of earthquakes or unknown environmental sources. We present here some applications of these techniques at the global scale, on synthetic tests and on real data, such as Sumatra-Andaman (Dec. 2004) and Haiti (Jan. 2010), as well as glacial earthquakes and seismic hum.
Choi, Young-Chul; Park, Jin-Ho; Choi, Kyoung-Sik
2011-01-01
In a nuclear power plant, a loose part monitoring system (LPMS) provides information on the location and the mass of a loosened or detached metal part impacting the inner surface of the primary pressure boundary. Typically, accelerometers are mounted on the surface of a reactor vessel to localize impacts caused by metallic substances striking the reactor system. However, in some cases, the number of accelerometers is not sufficient to estimate the impact location precisely. In such a case, one useful method is to utilize other types of sensor that can measure the vibration of the reactor structure. For example, acoustic emission (AE) sensors are installed on the reactor structure to detect leakage or cracks in the primary pressure boundary. However, accelerometers and AE sensors have different frequency ranges: the frequency of interest of AE sensors is higher than that of accelerometers. In this paper, we propose a method of impact source localization that uses accelerometer signals and AE signals simultaneously. The main concept of impact location estimation is based on the arrival time difference of the impact stress wave between different sensor locations. However, it is difficult to find the arrival time difference between sensors, because the primary frequency ranges of accelerometers and AE sensors are different. To overcome this problem, we used phase delays of an envelope of the impact signals, because the impact signals from the accelerometer and the AE sensor are similar in overall shape (envelope). To verify the proposed method, we performed experiments on a reactor mock-up model and a real nuclear power plant. The experimental results demonstrate that the proposed method enhances the reliability and precision of impact source localization. Therefore, if the proposed method is applied to a nuclear power plant, we can obtain the effect of additionally installed sensors. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
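The envelope-based idea in this abstract (correlating signal envelopes rather than raw waveforms, so that sensors with different passbands can be compared) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name and the Hilbert-envelope choice are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_delay(sig_a, sig_b, fs):
    """Estimate the arrival-time difference between two impact signals
    recorded by sensors with different passbands (e.g. accelerometer
    vs AE sensor) by cross-correlating their envelopes instead of the
    raw waveforms. Returns the delay of sig_b relative to sig_a in
    seconds (positive means sig_b arrives later)."""
    env_a = np.abs(hilbert(sig_a))          # Hilbert-transform envelope
    env_b = np.abs(hilbert(sig_b))
    env_a = env_a - env_a.mean()
    env_b = env_b - env_b.mean()
    xc = np.correlate(env_b, env_a, mode="full")
    lag = np.argmax(xc) - (len(sig_a) - 1)  # samples of delay
    return lag / fs
```

With several sensor pairs, the resulting time differences feed a standard arrival-time-difference localization, even though the raw waveforms of the two sensor types would not correlate.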
NASA Astrophysics Data System (ADS)
Walsh, Braden; Jolly, Arthur; Procter, Jonathan
2017-04-01
Using active seismic sources on Tongariro Volcano, New Zealand, the amplitude source location (ASL) method is calibrated and optimized through a series of sensitivity tests. Applying a geologic medium velocity of 1500 m/s and an attenuation value of Q=60 for surface waves, along with amplification factors computed from regional earthquakes, the ASL produced location discrepancies larger than 1.0 km horizontally and up to 0.5 km in depth. Through sensitivity tests on the input parameters, we show that the velocity and attenuation models have moderate to strong influences on the location results, but can be easily constrained. Changes in locations are accommodated through either lateral or depth movements. Station corrections (amplification factors) and station geometry strongly affect the ASL locations both laterally and in depth. Calibrating the amplification factors by exploiting the active seismic source events reduced location errors for the sources by up to 50%.
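The ASL method calibrated in this abstract locates a source by fitting observed amplitudes to a decay model over a grid of candidate positions. A minimal grid-search sketch follows; the surface-wave decay form, the assumed 5 Hz frequency, and all names are illustrative assumptions, with v = 1500 m/s and Q = 60 taken from the abstract.

```python
import numpy as np

def asl_locate(stations, amps, site_corr, grid, freq=5.0, v=1500.0, Q=60.0):
    """Amplitude source location by grid search. stations: (n, 2)
    coordinates (m); amps: (n,) observed amplitudes; site_corr: (n,)
    station amplification factors; grid: (m, 2) candidate positions.
    Assumes surface-wave decay A_i = A0 * exp(-B r_i) / sqrt(r_i)
    with B = pi * freq / (Q * v)."""
    B = np.pi * freq / (Q * v)
    a = np.asarray(amps) / np.asarray(site_corr)   # remove site effects
    best, best_misfit = None, np.inf
    for g in np.asarray(grid, float):
        r = np.sqrt(((stations - g) ** 2).sum(axis=1))
        r = np.maximum(r, 1.0)                     # guard against r -> 0
        pred = np.exp(-B * r) / np.sqrt(r)
        A0 = np.dot(a, pred) / np.dot(pred, pred)  # least-squares amplitude
        misfit = np.sum((a - A0 * pred) ** 2)
        if misfit < best_misfit:
            best, best_misfit = g, misfit
    return best, best_misfit
```

The sensitivity tests in the abstract amount to repeating such a search while perturbing v, Q, and the site corrections and tracking how the best-fit node moves.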
Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette
Huang, Wenzhu; Zhang, Wentao; Li, Fang
2013-01-01
This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis. In this, autocorrelation is used to extract the location coefficient from periodic AE signals, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to account for two different types of AE source in localization. PMID:24141266
Code of Federal Regulations, 2011 CFR
2011-07-01
... Ignition Stationary RICE Located at a Major Source of HAP Emissions and Existing Spark Ignition Stationary RICE ≤ 500 HP Located at a Major Source of HAP Emissions 2c Table 2c to Subpart ZZZZ of Part 63... Stationary RICE Located at a Major Source of HAP Emissions and Existing Spark Ignition Stationary RICE ≤ 500...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franco, Manuel
The objective of this work was to characterize the neutron irradiation system consisting of americium-241 beryllium (241AmBe) neutron sources placed in a polyethylene shielding for use at Sandia National Laboratories (SNL) Low Dose Rate Irradiation Facility (LDRIF). With a total activity of 0.3 TBq (9 Ci), the source consisted of three recycled 241AmBe sources of different activities that had been combined into a single source. The source in its polyethylene shielding will be used in neutron irradiation testing of components. The characterization of the source-shielding system was necessary to evaluate the radiation environment for future experiments. Characterization of the source was also necessary because the documentation for the three component sources and their relative alignment within the Special Form Capsule (SFC) was inadequate. The system consisting of the source and shielding was modeled using the Monte Carlo N-Particle transport code (MCNP). The model was validated by benchmarking it against measurements using multiple techniques. To characterize the radiation fields over the full spatial geometry of the irradiation system, it was necessary to use a number of instruments of varying sensitivities. First, computed photon radiography assisted in determining the orientation of the component sources. With the capsule properly oriented inside the shielding, the neutron spectra were measured using a variety of techniques. An N-probe Microspec and a neutron Bubble Dosimeter Spectrometer (BDS) set were used to characterize the neutron spectra/field in several locations. In the third technique, neutron foil activation was used to ascertain the neutron spectra. A high purity germanium (HPGe) detector was used to characterize the photon spectrum. The experimentally measured spectra and the MCNP results compared well.
Once the MCNP model was validated to an adequate level of confidence, parametric analyses were performed on the model to optimize potential experimental configurations and neutron spectra for component irradiation. The final product of this work is an MCNP model validated by measurements, an overall understanding of the neutron irradiation system including photon/neutron transport and effective dose rates throughout the system, and possible experimental configurations for future irradiation of components.
Time-of-flight mass measurements for nuclear processes in neutron star crusts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrade, Alfredo; Matos, M.; Schatz, Hendrik
2011-01-01
The location of electron capture heat sources in the crust of accreting neutron stars depends on the masses of extremely neutron-rich nuclei. We present first results from a new implementation of the time-of-flight technique to measure nuclear masses of rare isotopes at the National Superconducting Cyclotron Laboratory. The masses of 16 neutron-rich nuclei in the Sc–Ni element range were determined simultaneously, improving the accuracy compared to previous data in 12 cases. The masses of 61V, 63Cr, 66Mn, and 74Ni were measured for the first time with mass excesses of 30.510(890) MeV, 35.280(650) MeV, 36.900(790) MeV, and 49.210(990) MeV, respectively. With the measurement of the 66Mn mass, the location of the two dominant heat sources in the outer crust of accreting neutron stars, which exhibit so-called superbursts, is now experimentally constrained. We find that the 66Fe → 66Mn electron capture transition occurs significantly closer to the surface than previously assumed because our new experimental Q-value is 2.1 MeV smaller than predicted by the FRDM mass model. The results also provide new insights into the structure of neutron-rich nuclei around N = 40.
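The time-of-flight technique behind such measurements relates magnetic rigidity, flight-path length and flight time to the ion mass. A toy sketch of the kinematics follows; the path length, rigidity and charge state are invented for illustration, not taken from the experiment, and the mass value is only roughly that of 66Mn:

```python
import numpy as np

c = 299_792_458.0            # speed of light (m/s)

# Illustrative spectrometer settings (not the actual NSCL values):
L = 60.0                     # flight-path length (m)
q = 25                       # charge state (fully stripped Mn)
brho = 4.0                   # magnetic rigidity (T m)
pc = 299.792458 * brho * q   # momentum * c in MeV, from p = q * Brho

def tof(mc2):
    """Forward model: flight time for a rest mass mc2 (MeV)."""
    beta = pc / np.hypot(pc, mc2)      # beta = pc / E, with E = sqrt(pc^2 + mc2^2)
    return L / (beta * c)

mc2_true = 61515.5           # roughly 66 u + mass excess of 66Mn; illustrative
t_meas = tof(mc2_true)

# Invert the measured time of flight: beta from L and t, then mc2 = pc/(beta*gamma)
beta = L / (c * t_meas)
gamma = 1.0 / np.sqrt(1.0 - beta**2)
mc2_est = pc / (beta * gamma)
print(mc2_est)
```

In practice the rigidity and timing are calibrated against reference nuclides with well-known masses, which is how the quoted uncertainties of several hundred keV arise.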
Rodrigues, G; Baskaran, R; Kukrety, S; Mathur, Y; Kumar, Sarvesh; Mandal, A; Kanjilal, D; Roy, A
2012-03-01
Plasma potentials for various heavy ions have been measured using the retarding field technique in the 18 GHz high temperature superconducting ECR ion source, PKDELIS [C. Bieth, S. Kantas, P. Sortais, D. Kanjilal, G. Rodrigues, S. Milward, S. Harrison, and R. McMahon, Nucl. Instrum. Methods B 235, 498 (2005); D. Kanjilal, G. Rodrigues, P. Kumar, A. Mandal, A. Roy, C. Bieth, S. Kantas, and P. Sortais, Rev. Sci. Instrum. 77, 03A317 (2006)]. The ion beam extracted from the source is decelerated close to the location of a mesh which is polarized to the source potential, and beams having different plasma potentials are measured on a Faraday cup located downstream of the mesh. The influence of various source parameters, viz., RF power, gas pressure, magnetic field, negative dc bias, and gas mixing, on the plasma potential is studied. The study helped to find an upper limit of the energy spread of the heavy ions, which can influence the design of the longitudinal optics of the high current injector being developed at the Inter University Accelerator Centre. It is observed that the plasma potential decreases with increasing charge state, and a mass effect is clearly observed for ions at similar operating gas pressures. In the case of gas mixing, it is observed that the plasma potential reaches a minimum at an optimum mixing-gas pressure, at which the mean charge state is maximized. Details of the measurements carried out as a function of various source parameters and their impact on the longitudinal optics are presented.
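The retarding-field idea can be illustrated with a synthetic, noise-free retarding curve: ions leave the source with energy q(V_source + V_plasma), so as the mesh offset is swept past the source potential, the collected current falls off, and the negative derivative of the curve is the ion energy distribution, whose peak estimates the plasma potential. All numbers below are invented, not measured PKDELIS values:

```python
import numpy as np
from math import erf

# Invented numbers for illustration only.
v_plasma_true = 12.0                    # plasma potential (V)
spread = 3.0                            # ion energy spread (V)
offset = np.linspace(-10, 40, 501)      # mesh potential minus source potential (V)

# Noise-free retarding curve: current falls off once the mesh offset
# exceeds the plasma potential, broadened by the energy spread.
curve = np.array([0.5 * (1.0 - erf((v - v_plasma_true) / (spread * np.sqrt(2.0))))
                  for v in offset])

# The negative derivative of the retarding curve is the ion energy
# distribution; its peak estimates the plasma potential and its
# width the energy spread that matters for the longitudinal optics.
didv = -np.gradient(curve, offset)
v_est = offset[np.argmax(didv)]
print(v_est)
```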
The Star Blended with the MOA-2008-BLG-310 Source Is Not the Exoplanet Host Star
NASA Astrophysics Data System (ADS)
Bhattacharya, A.; Bennett, D. P.; Anderson, J.; Bond, I. A.; Gould, A.; Batista, V.; Beaulieu, J. P.; Fouqué, P.; Marquette, J. B.; Pogge, R.
2017-08-01
High-resolution Hubble Space Telescope (HST) image analysis of the MOA-2008-BLG-310 microlens system indicates that the excess flux at the location of the source found in the discovery paper cannot primarily be due to the lens star because it does not match the lens-source relative proper motion, μ_rel, predicted by the microlens models. This excess flux is most likely to be due to an unrelated star that happens to be located in close proximity to the source star. Two epochs of HST observations indicate proper motion for this blend star that is typical of a random bulge star but is not consistent with a companion to the source or lens stars if the flux is dominated by only one star, aside from the lens. We consider models in which the excess flux is due to a combination of an unrelated star and the lens star, and this yields a 95% confidence level upper limit on the lens star brightness of I_L > 22.44 and V_L > 23.62. A Bayesian analysis using a standard Galactic model and these magnitude limits yields a host star mass of M_h = 0.21^{+0.21}_{-0.09} M_⊙ and a planet mass of m_p = 23.4^{+23.9}_{-9.9} M_⊕ at a projected separation of a_⊥ = 1.12^{+0.16}_{-0.17} au. This result illustrates that excess flux in a high-resolution image of a microlens-source system need not be due to the lens. It is important to check that the lens-source relative proper motion is consistent with the microlensing prediction. The high-resolution image analysis techniques developed in this paper can be used to verify the WFIRST exoplanet microlensing survey mass measurements.
Vertical Cable Seismic Survey for Hydrothermal Deposit
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Sekino, Y.; Okamoto, T.; Ishikawa, K.; Tsukahara, H.; Shimura, T.
2012-04-01
The vertical cable seismic is one of the reflection seismic methods. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean bottom sources. Analyzing the reflections from the sub-seabed, we can look into the subsurface structure. This type of survey is generally called VCS (Vertical Cable Seismic). Because VCS is an efficient high-resolution 3D seismic survey method for a spatially-bounded area, we proposed the method for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. Our first VCS survey was carried out in Lake Biwa, Japan in November 2009 as a feasibility study. Prestack depth migration was applied to the 3D VCS data to obtain a high quality 3D depth volume. Based on the results from the feasibility study, we developed two autonomous recording VCS systems. After a trial experiment in the open ocean at a water depth of about 400 m, we carried out the second VCS survey at Iheya Knoll with a deep-towed source. In this survey, we established the procedures for the deployment/recovery of the system and examined the locations and the fluctuations of the vertical cables at a water depth of around 1000 m. The acquired VCS data clearly show the reflections from the sub-seafloor. Through the experiment, we confirmed that our VCS system works well even in the severe conditions around seafloor hydrothermal deposits. We also confirmed, however, that uncertainty in the locations of the source and the hydrophones can lower the quality of the subsurface image. It is therefore necessary to develop a total survey system that assures accurate positioning and reliable deployment techniques.
We have carried out two field surveys in FY2011: one is a 3D survey with a boomer as a high-resolution surface source, and the other is an actual field survey in the Izena Cauldron, an active hydrothermal area in the Okinawa Trough. Through these surveys, VCS will become a practical tool for the exploration of seafloor hydrothermal deposits.
Surface Location In Scene Content Analysis
NASA Astrophysics Data System (ADS)
Hall, E. L.; Tio, J. B. K.; McPherson, C. A.; Hwang, J. J.
1981-12-01
The purpose of this paper is to describe techniques and algorithms for the location in three dimensions of planar and curved object surfaces using a computer vision approach. Stereo imaging techniques are demonstrated for planar object surface location using automatic segmentation, vertex location and relational table matching. For curved surfaces, locating corresponding points is very difficult. However, an example using a grid projection technique for the location of the surface of a curved cup is presented to illustrate a solution. This method consists of first obtaining the perspective transformation matrices from the images, then using these matrices to compute the three-dimensional locations of the grid points on the surface. These techniques may be used in object location for such applications as missile guidance, robotics, and medical diagnosis and treatment.
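Recovering 3D point locations from perspective transformation matrices, as described above, is commonly done by linear (DLT) triangulation: each view contributes two homogeneous equations, and the point is the null vector of the stacked system. A sketch with made-up camera matrices (not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two calibrated views."""
    # Each view contributes two rows: u*(row3 . X) - (row1 . X) = 0, etc.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]        # null vector of A
    return X[:3] / X[3]                # homogeneous -> Euclidean

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and one shifted along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

X_true = np.array([0.3, -0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)
```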
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akahori, Takuya; Gaensler, B. M.; Ryu, Dongsu, E-mail: akahori@physics.usyd.edu.au, E-mail: bryan.gaensler@sydney.edu.au, E-mail: ryu@sirius.unist.ac.kr
2014-08-01
Rotation measure (RM) grids of extragalactic radio sources have been widely used for studying cosmic magnetism. However, their potential for exploring the intergalactic magnetic field (IGMF) in filaments of galaxies is unclear, since other Faraday-rotation media such as the radio source itself, intervening galaxies, and the interstellar medium of our Galaxy are all significant contributors. We study statistical techniques for discriminating the Faraday rotation of filaments from other sources of Faraday rotation in future large-scale surveys of radio polarization. We consider a 30° × 30° field of view toward the south Galactic pole, while varying the number of sources detected in both present and future observations. We select sources located at high redshifts and toward which depolarization and optical absorption systems are not observed so as to reduce the RM contributions from the sources and intervening galaxies. It is found that a high-pass filter can satisfactorily reduce the RM contribution from the Galaxy since the angular scale of this component toward high Galactic latitudes would be much larger than that expected for the IGMF. Present observations do not yet provide a sufficient source density to be able to estimate the RM of filaments. However, from the proposed approach with forthcoming surveys, we predict significant residuals of RM that should be ascribable to filaments. The predicted structure of the IGMF down to scales of 0.1° should be observable with data from the Square Kilometre Array, if we achieve selections of sources toward which sightlines do not contain intervening galaxies and RM errors are less than a few rad m^-2.
Optimal Use of TDOA Geo-Location Techniques Within the Mountainous Terrain of Turkey
2012-09-01
[Front-matter residue from extraction; recoverable entries: "Cross-Correlation TDOA Estimation Technique"; "Standard Deviation"; "Figure 32. The Effect of Noise on Accuracy"; "Figure 33. The Effect of Noise to..."] ...finding techniques. In contrast, people have been using active location finding techniques, such as radar, for decades. When active location finding
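The TDOA geolocation problem studied in this thesis solves for an emitter position from range differences to receiver pairs; a Gauss-Newton sketch on synthetic, noise-free data (the receiver layout and emitter position are invented for illustration):

```python
import numpy as np

c = 3.0e8                                        # propagation speed (m/s)
# Invented receiver layout and emitter position, noise-free for clarity.
rx = np.array([[0, 0], [10000, 0], [0, 10000], [10000, 10000]], float)
emitter = np.array([6200.0, 3100.0])

t = np.linalg.norm(rx - emitter, axis=1) / c     # arrival times
tdoa = t[1:] - t[0]                              # differences w.r.t. receiver 0

# Gauss-Newton on the range-difference residuals
p = np.array([5000.0, 5000.0])                   # initial guess
for _ in range(50):
    d = np.linalg.norm(rx - p, axis=1)
    u = (p - rx) / d[:, None]                    # unit vectors receiver -> estimate
    J = u[1:] - u[0]                             # Jacobian of range differences
    r = (d[1:] - d[0]) - c * tdoa                # residuals (m)
    p = p - np.linalg.lstsq(J, r, rcond=None)[0]
print(p)
```

In mountainous terrain, the geometry term J is exactly what degrades: poorly placed receivers make it ill-conditioned, which is the accuracy question the thesis examines.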
Source Repeatability of Time-Lapse Offset VSP Surveys for Monitoring CO2 Injection
NASA Astrophysics Data System (ADS)
Zhang, Z.; Huang, L.; Rutledge, J. T.; Denli, H.; Zhang, H.; McPherson, B. J.; Grigg, R.
2009-12-01
Time-lapse vertical seismic profiling (VSP) surveys have the potential to remotely track the migration of injected CO2 within a geologic formation. To accurately detect small changes due to CO2 injection, the sources of time-lapse VSP surveys must be located at exactly the same positions. However, in practice, the source locations can vary from one survey to another. Our numerical simulations demonstrate that a variation of a few meters in the VSP source locations can result in significant changes in time-lapse seismograms. To address the source repeatability issue, we apply double-difference tomography to downgoing waves of time-lapse offset VSP data to invert for the source locations and the velocity structures simultaneously. In collaboration with Resolute Natural Resources, Navajo National Oil and Gas Company, and the Southwest Regional Partnership on Carbon Sequestration under the support of the U.S. Department of Energy’s National Energy Technology Laboratory, one baseline and two repeat offset VSP datasets were acquired in 2007-2009 for monitoring CO2 injection at the Aneth oil field in Utah. A cemented geophone string was used to acquire the data for one zero-offset and seven offset source locations. During the data acquisition, there was some uncertainty in the repeatability of the source locations relative to the baseline survey. Our double-difference tomography results for the Aneth time-lapse VSP data show that the source locations for different surveys are separated by up to a few meters. Accounting for these source location variations during VSP data analysis will improve the reliability of time-lapse VSP monitoring.
NASA Astrophysics Data System (ADS)
Han, Young-Ji; Holsen, Thomas M.; Hopke, Philip K.
Ambient total gaseous mercury (TGM) concentrations were measured at three locations in New York State (Potsdam, Stockton, and Sterling) from May 2000 to March 2005. Using these data, three hybrid receptor models incorporating backward trajectories were used to identify source areas for TGM. The models used were the potential source contribution function (PSCF), residence time weighted concentration (RTWC), and simplified quantitative transport bias analysis (SQTBA). Each model was applied using multi-site measurements to resolve the locations of important mercury sources for New York State. PSCF results showed that southeastern New York, Ohio, Indiana, Tennessee, Louisiana, and Virginia were important TGM source areas for these sites. RTWC identified Canadian sources including the metal production facilities in Ontario and Quebec, but US regional sources including the Ohio River Valley were also resolved. Sources in southeastern New York, Massachusetts, western Pennsylvania, Indiana, and northern Illinois were identified as significant by SQTBA. The three modeling results were combined to identify the most probable source locations: Ohio, Indiana, Illinois, and Wisconsin. The Atlantic Ocean was suggested to be a possible source as well.
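Of the three hybrid receptor models, PSCF is the simplest to sketch: each grid cell's score is the fraction of back-trajectory endpoints in that cell associated with above-threshold concentrations at the receptor. The grid, endpoints and concentrations below are random placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic back-trajectory endpoints, each tagged with the concentration
# measured on arrival at the receptor (all values are placeholders).
n = 500
endpoints = rng.uniform([38.0, -90.0], [46.0, -72.0], size=(n, 2))  # (lat, lon)
conc = rng.gamma(2.0, 0.8, size=n)                                  # ng/m^3

threshold = np.percentile(conc, 75)              # "polluted" criterion
lat_edges = np.arange(38.0, 46.1, 1.0)           # 1-degree grid
lon_edges = np.arange(-90.0, -71.9, 1.0)

n_ij = np.histogram2d(endpoints[:, 0], endpoints[:, 1],
                      bins=[lat_edges, lon_edges])[0]
m_ij = np.histogram2d(endpoints[conc > threshold, 0],
                      endpoints[conc > threshold, 1],
                      bins=[lat_edges, lon_edges])[0]

pscf = m_ij / np.maximum(n_ij, 1)                # fraction of "polluted" endpoints
# High-PSCF cells are candidate source areas; in practice a weighting
# function down-weights cells visited by few trajectories.
print(pscf.shape, float(pscf.max()))
```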
Microseismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-07-01
At the heart of microseismic event measurement is the task of estimating the locations of the microseismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source location methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
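The convolution trick underlying source-independent objectives of this kind can be verified numerically: convolving each observed trace with a modelled reference trace, and each modelled trace with the observed reference, yields identical signals whatever the wavelets are, because convolution commutes. A toy demonstration (the geometry and wavelets are invented, and the "Green's functions" are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

def conv(a, b):
    """Causal convolution truncated to the record length."""
    return np.convolve(a, b)[:n]

# Earth responses (Green's functions) at two receivers; arbitrary here.
g1, g2 = rng.standard_normal(n), rng.standard_normal(n)

w_true = np.exp(-((np.arange(n) - 20.0) / 5.0) ** 2)   # unknown true wavelet
w_guess = rng.standard_normal(n)                        # wrong modelling wavelet

d_obs = [conv(w_true, g) for g in (g1, g2)]             # "observed" traces
d_syn = [conv(w_guess, g) for g in (g1, g2)]            # modelled traces

# Convolved-trace objective with receiver 0 as the reference trace:
lhs = conv(d_obs[1], d_syn[0])     # observed trace (*) modelled reference
rhs = conv(d_syn[1], d_obs[0])     # modelled trace (*) observed reference
print(np.max(np.abs(lhs - rhs)))
```

Because lhs and rhs agree whenever the velocity model (and hence the Green's functions) is correct, a misfit built from their difference no longer depends on the unknown source function or ignition time.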
NASA Astrophysics Data System (ADS)
Sambell, K.; Evers, L. G.; Snellen, M.
2017-12-01
Deriving the deep-ocean temperature is a challenge: in-situ and satellite observations are of limited use at depth. However, knowledge about changes in the deep-ocean temperature is important in relation to climate change. Oceans are filled with low-frequency sound waves created by sources such as underwater volcanoes, earthquakes and seismic surveys. The propagation of these sound waves is temperature dependent and therefore carries valuable information that can be used for temperature monitoring. This phenomenon is investigated by applying interferometry to hydroacoustic data measured in the South Pacific Ocean. The data are recorded at hydrophone station H03, which is part of the International Monitoring System (IMS). This network consists of several stations around the world and is in place for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The station consists of two arrays located north and south of Robinson Crusoe Island, separated by 50 km. Both arrays consist of three hydrophones with an intersensor distance of 2 km, located at a depth of 1200 m, within the SOFAR channel. Hydroacoustic data measured at the south station are cross-correlated for the time period 2014-2017. The results are improved by applying one-bit normalization as a preprocessing step. Furthermore, beamforming is applied to the hydroacoustic data in order to characterize ambient noise sources around the array. This shows the presence of a continuous source at a backazimuth between 180 and 200 degrees throughout the whole time period, in agreement with the results obtained by cross-correlation. Studies of the source strength show a seasonal dependence, indicating that the sound is related to acoustic activity in Antarctica; this interpretation is supported by acoustic propagation modeling. The normal mode technique is used to study the sound propagation from possible source locations towards station H03.
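The one-bit normalization and cross-correlation steps used above can be sketched on synthetic data; the sample rate, delay and noise levels are invented, and the point is only that the correlation peak recovers the inter-sensor travel-time difference:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 100.0, 20000                 # sample rate (Hz), record length

# Synthetic ambient noise: a common signal arrives at hydrophone B
# 0.5 s after hydrophone A, buried in independent local noise.
delay = int(0.5 * fs)
src = rng.standard_normal(n + delay)
a = src[delay:] + 0.5 * rng.standard_normal(n)   # A hears the signal first
b = src[:n] + 0.5 * rng.standard_normal(n)       # B hears it `delay` samples later

# One-bit normalization: keep only the sign, suppressing transient
# high-amplitude events so the coherent background dominates.
a1, b1 = np.sign(a), np.sign(b)

xc = np.correlate(a1, b1, mode="full")
lag = np.argmax(xc) - (n - 1)        # peak lag in samples
print(abs(lag) / fs)                 # travel-time difference estimate (s)
```

Tracking small changes in this travel time over years is what carries the temperature signal, since sound speed in seawater depends on temperature.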
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi
2014-01-01
We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s after the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is feasible and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible.
The RMT has operated offline (2010-2011) and online (since January 2012 to present) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
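The speed of a grid-based CMT scheme like the one above comes from the fact that, once Green's functions are precomputed for every grid node, the data are linear in the moment tensor, so each node costs a single least-squares solve. A toy version with random stand-in "Green's functions" (the node count, trace length and tensor values are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_samp = 50, 300

# Stand-in Green's function database: for every grid node, six elementary
# seismograms, one per independent moment-tensor component (random here).
G = rng.standard_normal((n_nodes, n_samp, 6))

true_node = 17
true_mt = np.array([1.0, -0.4, -0.6, 0.3, 0.2, -0.1])    # moment-tensor vector
data = G[true_node] @ true_mt + 0.01 * rng.standard_normal(n_samp)

# Grid search: each node costs one least-squares solve; the best-fitting
# node gives the centroid location and the mechanism simultaneously.
best, best_res, best_mt = -1, np.inf, None
for k in range(n_nodes):
    mt = np.linalg.lstsq(G[k], data, rcond=None)[0]
    res = np.linalg.norm(G[k] @ mt - data)
    if res < best_res:
        best, best_res, best_mt = k, res, mt
print(best)
```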
LLNL Location and Detection Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, S C; Harris, D B; Anderson, M L
2003-07-16
We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) develop network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions, with known locations, to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that reference velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic.
Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.
Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography
NASA Astrophysics Data System (ADS)
Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori
2014-02-01
In electronic holography, various methods have been considered for using multiple spatial light modulators (SLMs) to increase the image size. In a previous work, we used a monochrome light source for a method that placed an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full-parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.
Multi-distance diffuse optical spectroscopy with a single optode via hypotrochoidal scanning.
Applegate, Matthew B; Roblyer, Darren
2018-02-15
Frequency-domain diffuse optical spectroscopy (FD-DOS) is an established technique capable of determining optical properties and chromophore concentrations in biological tissue. Most FD-DOS systems use either manually positioned, handheld probes or complex arrays of source and detector fibers to acquire data from many tissue locations, allowing for the generation of 2D or 3D maps of tissue. Here, we present a new method to rapidly acquire a wide range of source-detector (SD) separations by mechanically scanning a single SD pair. The source and detector fibers are mounted on a scan head that traces a hypotrochoidal pattern over the sample that, when coupled with a high-speed FD-DOS system, enables the rapid collection of dozens of SD separations for depth-resolved imaging. We demonstrate that this system has an average error of 4±2.6% in absorption and 2±1.8% in scattering across all SD separations. Additionally, by linearly translating the device, the size and location of an absorbing inhomogeneity can be determined through the generation of B-scan images in a manner conceptually analogous to ultrasound imaging. This work demonstrates the potential of single optode diffuse optical scanning for depth resolved visualization of heterogeneous biological tissues at near real-time rates.
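A hypotrochoid is the curve traced by a point offset from the centre of a circle rolling inside a larger one. The sketch below uses invented geometry (not the paper's actual scan parameters) and a simplifying assumption that the source sits at the pattern centre, to show how one mechanical sweep visits a continuous range of source-detector separations:

```python
import numpy as np

# Invented geometry, not the paper's actual scan parameters.
R, r, d = 5.0, 3.0, 5.0                   # fixed ring, rolling circle, offset (mm)
t = np.linspace(0.0, 6.0 * np.pi, 2000)   # several revolutions

# Hypotrochoid traced by the detector over the sample:
x = (R - r) * np.cos(t) + d * np.cos((R - r) / r * t)
y = (R - r) * np.sin(t) - d * np.sin((R - r) / r * t)

# Distance to an assumed centre-fixed source: the sweep covers a
# continuous band of source-detector separations.
sd = np.hypot(x, y)
print(sd.min(), sd.max())                 # spans |d-(R-r)| to d+(R-r), i.e. 3 to 7 mm
```

In the actual instrument both fibers ride the scan head, but the same geometric idea applies: the curve's changing radius is what produces the dozens of SD separations per revolution.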
Development of Techniques to Investigate Sonoluminescence as a Source of Energy Harvesting
NASA Technical Reports Server (NTRS)
Wrbanek, John D.; Fralick, Gustave C.; Wrbanek, Susan Y.
2007-01-01
Instrumentation techniques are being developed at NASA Glenn Research Center to measure the optical, radiation, and thermal properties of sonoluminescence, the light generated by acoustic cavitation. Initial efforts have been directed to the generation of the effect and the imaging of the glow in water and solvents. Several images of the effect have been produced showing its location within containers, without the addition of light enhancers to the liquid. Evidence of high-energy generation, in the form of thin-film modification by sonoluminescence in heavy water, was seen that was not seen in light water. Bright, localized sonoluminescence was generated using glycerin for possible application to energy harvesting. Issues to be resolved for an energy harvesting concept will be addressed.
Karakaş, H M; Karakaş, S; Ozkan Ceylan, A; Tali, E T
2009-08-01
Event-related potentials (ERPs) have high temporal resolution, but insufficient spatial resolution; the converse is true for the functional imaging techniques. The purpose of the study was to test the utility of a multimodal EEG/ERP-MRI technique which combines electroencephalography (EEG) and magnetic resonance imaging (MRI) for a simultaneously high temporal and spatial resolution. The sample consisted of 32 healthy young adults of both sexes. Auditory stimuli were delivered according to the active and passive oddball paradigms in the MRI environment (MRI-e) and in the standard conditions of the electrophysiology laboratory environment (Lab-e). Tasks were presented in a fixed order. Participants were exposed to the recording environments in a counterbalanced order. EEG data were preprocessed for MRI-related artifacts. Source localization was performed using a current density reconstruction technique. The ERP waveforms for the MRI-e were morphologically similar to those for the Lab-e. The effects of the recording environment, experimental paradigm and electrode location were analyzed using a 2×2×3 analysis of variance for repeated measures. The ERP components in the two environments showed parametric variations and characteristic topographical distributions. The calculated sources were in line with the related literature. The findings indicated effortful cognitive processing in MRI-e. The study provided preliminary data on the feasibility of the multimodal EEG/ERP-MRI technique. It also indicated lines of research that are to be pursued for a decisive testing of this technique and its implementation in clinical practice.
Multi-Detector Analysis System for Spent Nuclear Fuel Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reber, Edward Lawrence; Aryaeinejad, Rahmat; Cole, Jerald Donald
1999-09-01
The Spent Nuclear Fuel (SNF) Non-Destructive Analysis (NDA) program at INEEL is developing a system to characterize SNF for fissile mass, radiation source term, and fissile isotopic content. The system is based on the integration of the Fission Assay Tomography System (FATS) and the Gamma-Neutron Analysis Technique (GNAT) developed under programs supported by the DOE Office of Non-proliferation and National Security. FATS and GNAT were developed as separate systems to provide information on the location of special nuclear material in weapons configuration (FATS role) and to measure isotopic ratios of fissile material to determine if the material was from a weapon (GNAT role). FATS is capable of determining not only the presence and location of fissile material but also the quantity of fissile material present to within 50%. GNAT determines the ratios of the fissile and fissionable material by coincidence methods that allow the two promptly produced fission fragments to be identified. Therefore, from the combination of FATS and GNAT, the Multi-Detector Analysis System (MDAS) is able to measure the fissile mass, radiation source term, and fissile isotopic content.
Using x-ray mammograms to assist in microwave breast image interpretation.
Curtis, Charlotte; Frayne, Richard; Fear, Elise
2012-01-01
Current clinical breast imaging modalities include ultrasound, magnetic resonance (MR) imaging, and the ubiquitous X-ray mammography. Microwave imaging, which takes advantage of differing electromagnetic properties to obtain image contrast, shows potential as a complementary imaging technique. As an emerging modality, interpretation of 3D microwave images poses a significant challenge. MR images are often used to assist in this task, and X-ray mammograms are readily available. However, X-ray mammograms provide 2D images of a breast under compression, resulting in significant geometric distortion. This paper presents a method to estimate the 3D shape of the breast and locations of regions of interest from standard clinical mammograms. The technique was developed using MR images as the reference 3D shape with the future intention of using microwave images. Twelve breast shapes were estimated and compared to ground truth MR images, resulting in a skin surface estimation accurate to within an average Euclidean distance of 10 mm. The 3D locations of regions of interest were estimated to be within the same clinical area of the breast as corresponding regions seen on MR imaging. These results encourage investigation into the use of mammography as a source of information to assist with microwave image interpretation as well as validation of microwave imaging techniques.
Ravi, Logesh; Vairavasundaram, Subramaniyaswamy
2016-01-01
Rapid growth of the web and its applications has created a colossal importance for recommender systems. Applied in various domains, recommender systems are designed to generate suggestions, such as items or services, based on user interests. Recommender systems experience many issues that diminish their effectiveness. Integrating powerful data management techniques into recommender systems can address such issues, and the quality of recommendations can be increased significantly. Recent research on recommender systems reveals the idea of utilizing social network data to enhance traditional recommender systems with better prediction and improved accuracy. This paper expresses views on social network data based recommender systems by considering the usage of various recommendation algorithms, functionalities of systems, different types of interfaces, filtering techniques, and artificial intelligence techniques. After examining the objectives, methodologies, and data sources of the existing models, the paper helps anyone interested in the development of travel recommendation systems and facilitates future research directions. We have also proposed a location recommendation system based on the social pertinent trust walker (SPTW) and compared the results with existing baseline random walk models. Later, we enhanced the SPTW model for recommendations to groups of users. The results obtained from the experiments are presented. PMID:27069468
Fryer, Michael O.; Hills, Andrea J.; Morrison, John L.
2000-01-01
A self-calibrating method and apparatus for measuring butterfat and protein content based on measuring the microwave absorption of a sample of milk at several microwave frequencies. A microwave energy source injects microwave energy into the resonant cavity for absorption and reflection by the sample undergoing evaluation. A sample tube is centrally located in the resonant cavity, passing therethrough and exposing the sample to the microwave energy. A portion of the energy is absorbed by the sample while another portion of the microwave energy is reflected back to an evaluation device such as a network analyzer. The frequency at which the reflected radiation is at a minimum within the cavity is combined with the scatter coefficient S.sub.11 as well as a phase change to calculate the butterfat content in the sample. The protein located within the sample may also be calculated in a similar manner using the frequency, S.sub.11, and phase variables. A differential technique using a second resonant cavity containing a reference standard as a sample will normalize the measurements from the unknown sample and thus be self-calibrating. A shuttered mechanism will switch the microwave excitation between the unknown and the reference cavities. An integrated apparatus for measuring the butterfat content in milk using microwave absorption techniques is also presented.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali
2015-04-01
Fast-growing urbanization, industrialization, and military development increase risks to the human environment and ecology. This has been realized in several past incidents involving large loss of life, for instance the Chernobyl nuclear explosion (Ukraine), the Bhopal gas leak (India), and the Fukushima-Daiichi radionuclide release (Japan). To reduce the threat and exposure to hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. This requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability, and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradient and the dissipative nature of the concentration. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within the Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements.
This is based on an adjoint representation of the source-receptor relationship and the utilization of a weight function which exhibits a priori information about the unknown releases apparent to the monitoring network. The properties of the weight function provide an optimal data resolution and model resolution for the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors, and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, including the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994), and the Fusion Field Trials (2007). In the case of point release experiments, the source parameters are mostly retrieved close to the true source parameters with small error. The proposed technique overcomes two major difficulties incurred in source reconstruction: (i) the initialization of the source parameters required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge of measurement and background errors required by Bayesian inference based techniques, which must be hypothetically assumed when no prior knowledge exists.
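The source-receptor inversion described above can be illustrated with a minimal linear sketch. This is not the authors' adjoint/weight-function formulation: it assumes a hypothetical sensitivity matrix `A` relating a gridded source field to sensor concentrations, applies plain Tikhonov-regularized least squares, and identifies the point release from the maximum of the retrieved field, as the abstract suggests. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 20                                  # sensors, candidate grid cells
A = np.abs(rng.normal(size=(m, n)))            # stand-in source-receptor matrix
s_true = np.zeros(n)
s_true[7] = 4.0                                # a single point release
c_obs = A @ s_true + rng.normal(0.0, 0.01, m)  # noisy sensor concentrations

# Tikhonov-regularized least squares: min ||A s - c||^2 + lam ||s||^2
lam = 1e-3
s_est = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ c_obs)

# Point-type release identification: take the maximum of the retrieved field
cell_hat = int(np.argmax(s_est))
```

In practice the sensitivity rows would come from adjoint (retroplume) model runs, and the weight function would shape the regularization rather than the uniform penalty used here.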
Yan, Zheng-Yu; Du, Qing-Qing; Qian, Jing; Wan, Dong-Yu; Wu, Sheng-Mei
2017-01-01
In this paper, a green and efficient biosynthetic technique for preparing cadmium sulfide (CdS) quantum dots is reported, in which Escherichia coli (E. coli) was chosen as the biomatrix. Fluorescence emission spectra and fluorescence microscopy photographs revealed that the as-produced CdS quantum dots had an optimum fluorescence emission peak located at 470 nm and emitted a blue-green fluorescence under ultraviolet excitation. After extraction from the bacterial cells and localization of the nanocrystal foci in vivo, the CdS quantum dots showed a uniform size distribution under transmission electron microscopy. Through systematic investigation of the biosynthetic conditions, including culture medium replacement, the input time point of the cadmium source, the working concentrations of raw inorganic ions, and the co-culture time spans of bacteria and metal ions, the results revealed that CdS quantum dots with the strongest fluorescence emission were successfully prepared when E. coli cells were in stationary phase, with replacement of the culture medium followed by incubation with 1.0×10⁻³ mol/L cadmium source for 2 days. Results of antimicrobial susceptibility testing indicated that the sensitivity of E. coli to eight types of antibiotics was barely changed before and after CdS quantum dots were prepared in the mild temperature environment, though a slight fall in antibiotic resistance could be observed, suggesting that the proposed technique for producing quantum dots is a promising, environmentally low-risk protocol. Copyright © 2016 Elsevier Inc. All rights reserved.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
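The Monte Carlo approach to characterizing time-of-arrival (TOA) retrieval errors can be sketched generically. The code below is not the NASA-MSFC or New Mexico Tech algorithm; it is a simple Gauss-Newton TOA inversion for source position and emission time, with a 50 ns rms timing error injected in each trial. All station and source coordinates are hypothetical.

```python
import numpy as np

C = 2.998e8  # propagation speed of VHF signals (speed of light), m/s

def arrival_times(stations, src, t0):
    """TOA forward model: t_i = t0 + |r_i - src| / C."""
    return t0 + np.linalg.norm(stations - src, axis=1) / C

def locate(stations, t_obs, x0, t00, iters=10):
    """Gauss-Newton inversion for source position (x, y, z) and emission time t0."""
    x, t0 = np.array(x0, float), float(t00)
    for _ in range(iters):
        diff = x - stations                        # (n, 3)
        d = np.linalg.norm(diff, axis=1)           # source-station ranges, m
        J = np.hstack([diff / (C * d[:, None]),    # d(pred)/d(x, y, z)
                       np.ones((len(d), 1))])      # d(pred)/d(t0)
        step, *_ = np.linalg.lstsq(J, t_obs - (t0 + d / C), rcond=None)
        x, t0 = x + step[:3], t0 + step[3]
    return x, t0

rng = np.random.default_rng(0)
stations = rng.uniform(-40e3, 40e3, size=(10, 3))
stations[:, 2] = 0.0                               # sensors on the ground
src_true = np.array([5e3, -3e3, 8e3])              # VHF source at 8 km altitude

errs = []
for _ in range(200):                               # Monte Carlo trials
    t_obs = arrival_times(stations, src_true, 0.0) + rng.normal(0.0, 50e-9, 10)
    est, _ = locate(stations, t_obs, src_true + rng.normal(0.0, 1e3, 3), 0.0)
    errs.append(np.linalg.norm(est - src_true))
rms_err = float(np.sqrt(np.mean(np.square(errs))))
```

The spread of `errs` over many trials plays the role of the simulated retrieval-error distributions; the altitude component typically dominates for ground-level networks, consistent with the abstract's remark about altitude errors.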
Non-invasive Investigation of Human Hippocampal Rhythms Using Magnetoencephalography: A Review.
Pu, Yi; Cheyne, Douglas O; Cornwell, Brian R; Johnson, Blake W
2018-01-01
Hippocampal rhythms are believed to support crucial cognitive processes including memory, navigation, and language. Due to the location of the hippocampus deep in the brain, studying hippocampal rhythms using non-invasive magnetoencephalography (MEG) recordings has generally been assumed to be methodologically challenging. However, with the advent of whole-head MEG systems in the 1990s and development of advanced source localization techniques, simulation and empirical studies have provided evidence that human hippocampal signals can be sensed by MEG and reliably reconstructed by source localization algorithms. This paper systematically reviews simulation studies and empirical evidence of the current capacities and limitations of MEG "deep source imaging" of the human hippocampus. Overall, these studies confirm that MEG provides a unique avenue to investigate human hippocampal rhythms in cognition, and can bridge the gap between animal studies and human hippocampal research, as well as elucidate the functional role and the behavioral correlates of human hippocampal oscillations.
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.
2003-01-01
Noise sources in high-speed jets were identified by directly correlating flow density fluctuation (cause) with far-field sound pressure fluctuation (effect). The experimental study was performed in a nozzle facility at the NASA Glenn Research Center in support of NASA's initiative to reduce the noise emitted by commercial airplanes. Previous efforts to use this correlation method had failed because the tools for measuring jet turbulence were intrusive. In the present experiment, a molecular Rayleigh-scattering technique was used that depended on laser light scattering by gas molecules in air. The technique allowed accurate measurement of air density fluctuations at different points in the plume. The study was conducted in shock-free, unheated jets of Mach numbers 0.95, 1.4, and 1.8. The turbulent motion, as evident from the density fluctuation spectra, was remarkably similar in all three jets, whereas the noise sources were significantly different. The correlation study was conducted by keeping a microphone at a fixed location (at the peak noise emission angle of 30° to the jet axis and 50 nozzle diameters away) while moving the laser probe volume from point to point in the flow. The following figure shows maps of the nondimensional coherence value measured at different Strouhal frequencies ([frequency × diameter]/jet speed) in the supersonic Mach 1.8 and subsonic Mach 0.95 jets. The higher the coherence, the stronger the source was.
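The coherence mapping between a flow signal and a far-field microphone signal can be sketched with a segment-averaged (Welch-style) estimator. The synthetic "density" and "pressure" signals below, which share one narrow-band component buried in independent noise, are purely illustrative stand-ins; nothing here reproduces the experimental parameters.

```python
import numpy as np

def coherence(x, y, nseg=32, nfft=256):
    """Magnitude-squared coherence via segment-averaged spectra."""
    Pxx = np.zeros(nfft // 2 + 1)
    Pyy = np.zeros(nfft // 2 + 1)
    Pxy = np.zeros(nfft // 2 + 1, dtype=complex)
    w = np.hanning(nfft)
    for k in range(nseg):
        X = np.fft.rfft(x[k * nfft:(k + 1) * nfft] * w)
        Y = np.fft.rfft(y[k * nfft:(k + 1) * nfft] * w)
        Pxx += np.abs(X) ** 2
        Pyy += np.abs(Y) ** 2
        Pxy += X * np.conj(Y)
    return np.abs(Pxy) ** 2 / (Pxx * Pyy)

# Shared component at normalized frequency 0.125 plus independent noise
rng = np.random.default_rng(0)
n = 32 * 256
t = np.arange(n)
shared = np.sin(2 * np.pi * 0.125 * t)
x = shared + rng.normal(0.0, 1.0, n)          # stand-in for density fluctuation
y = 0.5 * shared + rng.normal(0.0, 1.0, n)    # stand-in for far-field pressure
coh = coherence(x, y)
f = np.fft.rfftfreq(256)
i_shared = int(np.argmin(np.abs(f - 0.125)))
```

Averaging over segments is essential: a single-segment estimate is identically 1 at every frequency, whereas the averaged estimate peaks only where the two signals are genuinely correlated, which is what the coherence maps in the study exploit.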
Poynting-vector based method for determining the bearing and location of electromagnetic sources
Simons, David J.; Carrigan, Charles R.; Harben, Philip E.; Kirkendall, Barry A.; Schultz, Craig A.
2008-10-21
A method and apparatus is utilized to determine the bearing and/or location of sources, such as, alternating current (A.C.) generators and loads, power lines, transformers and/or radio-frequency (RF) transmitters, emitting electromagnetic-wave energy for which a Poynting-Vector can be defined. When both a source and field sensors (electric and magnetic) are static, a bearing to the electromagnetic source can be obtained. If a single set of electric (E) and magnetic (B) sensors are in motion, multiple measurements permit location of the source. The method can be extended to networks of sensors allowing determination of the location of both stationary and moving sources.
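The core bearing calculation can be sketched directly from the definition of the Poynting vector, S = E × B / μ₀: the direction of S gives the direction of energy flow, and its horizontal azimuth gives a bearing to (or from) the source. The field values below are arbitrary illustrations, not measurements.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def poynting_bearing(E, B):
    """Poynting vector S = E x B / mu0 and its azimuth (degrees, from +x toward +y)."""
    S = np.cross(E, B) / MU0
    az = float(np.degrees(np.arctan2(S[1], S[0])))
    return S, az

# A vertically polarized E field with a horizontal B field along +x
# yields energy flow along +y, i.e. an azimuth of 90 degrees.
E = np.array([0.0, 0.0, 2.0])   # electric field, V/m
B = np.array([1e-8, 0.0, 0.0])  # magnetic field, T
S, az = poynting_bearing(E, B)
```

With a single static E/B sensor pair this gives only a bearing; as the abstract notes, combining bearings from a moving sensor (or a network) triangulates the source location.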
Martins, César C; Doumer, Marta E; Gallice, Wellington C; Dauner, Ana Lúcia L; Cabral, Ana Caroline; Cardoso, Fernanda D; Dolci, Natiely N; Camargo, Luana M; Ferreira, Paulo A L; Figueira, Rubens C L; Mangrich, Antonio S
2015-10-01
Spectroscopic and chromatographic techniques can be used together to evaluate hydrocarbon inputs to coastal environments such as the Paranaguá estuarine system (PES), located in the SW Atlantic, Brazil. Historical inputs of aliphatic hydrocarbons (AHs) and polycyclic aromatic hydrocarbons (PAHs) were analyzed using two sediment cores from the PES. The AHs were related to the presence of biogenic organic matter and degraded oil residues. The PAHs were associated with mixed sources. The highest hydrocarbon concentrations were related to oil spills, while relatively low levels could be attributed to the decrease in oil usage during the global oil crisis. The results of electron paramagnetic resonance were in agreement with the absolute AHs and PAHs concentrations measured by chromatographic techniques, while near-infrared spectroscopy results were consistent with unresolved complex mixture (UCM)/total n-alkanes ratios. These findings suggest that the use of a combination of techniques can increase the accuracy of assessment of contamination in sediments. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kang, Sung-Ju; Kerton, C. R.
2014-01-01
KR 120 (Sh2-187) is a small Galactic HII region located at a distance of 1.4 kpc that shows evidence of triggered star formation in the surrounding molecular cloud. We present an analysis of the young stellar object (YSO) population of the molecular cloud as determined using a variety of classification techniques. YSO candidates are selected from the WISE all-sky catalog and classified as Class I, Class II, and Flat based on 1) spectral index, 2) color-color or color-magnitude plots, and 3) spectral energy distribution (SED) fits to radiative transfer models. We examine the discrepancies in YSO classification between the various techniques and explore how these discrepancies lead to uncertainty in such scientifically interesting quantities as the ratio of Class I/Class II sources and the surface density of YSOs at various stages of evolution.
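The spectral-index route to YSO classification can be sketched as follows. The index is the slope of log(λF_λ) against log(λ), and the class boundaries used here follow a commonly used infrared classification scheme (an assumption; the paper's exact cuts are not given in the abstract). The synthetic SED is illustrative.

```python
import numpy as np

def spectral_index(wavelength_um, flux):
    """alpha = d log(lambda * F_lambda) / d log(lambda), from a log-log slope fit."""
    lam = np.asarray(wavelength_um, float)
    lamF = lam * np.asarray(flux, float)
    alpha, _ = np.polyfit(np.log10(lam), np.log10(lamF), 1)
    return float(alpha)

def classify(alpha):
    """Assumed class boundaries on the infrared spectral index."""
    if alpha >= 0.3:
        return "Class I"
    if alpha >= -0.3:
        return "Flat"
    if alpha >= -1.6:
        return "Class II"
    return "Class III"

lam = np.array([3.4, 4.6, 12.0, 22.0])  # WISE band centres, microns
flux = np.ones_like(lam)                # constant F_lambda => lambda*F_lambda = lambda, alpha = 1
alpha = spectral_index(lam, flux)
yso_class = classify(alpha)
```

Disagreements between this slope-based label and the color-color or SED-fit labels are exactly the classification discrepancies the abstract sets out to quantify.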
An overview of remote sensing and geodesy for epidemiology and public health application.
Hay, S I
2000-01-01
The techniques of remote sensing (RS) and geodesy have the potential to revolutionize the discipline of epidemiology and its application in human health. As a new departure from conventional epidemiological methods, these techniques require some detailed explanation. This review provides the theoretical background to RS including (i) its physical basis, (ii) an explanation of the orbital characteristics and specifications of common satellite sensor systems, (iii) details of image acquisition and procedures adopted to overcome inherent sources of data degradation, and (iv) a background to geophysical data preparation. This information allows RS applications in epidemiology to be readily interpreted. Some of the techniques used in geodesy, to locate features precisely on Earth so that they can be registered to satellite sensor-derived images, are also included. While the basic principles relevant to public health are presented here, inevitably many of the details must be left to specialist texts.
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a non-target computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments, listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about the auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone. It is a multisystem process.
Torres Astorga, Romina; de Los Santos Villalobos, Sergio; Velasco, Hugo; Domínguez-Quintero, Olgioly; Pereira Cardoso, Renan; Meigikos Dos Anjos, Roberto; Diawara, Yacouba; Dercon, Gerd; Mabit, Lionel
2018-05-15
Identification of hot spots of land degradation is strongly related to the selection of soil tracers for sediment pathways. This research proposes the complementary and integrated application of two analytical techniques to select the most suitable fingerprint tracers for identifying the main sources of sediments in an agricultural catchment located in Central Argentina with erosive loess soils. Diffuse reflectance Fourier transform mid-infrared (DRIFT-MIR) spectroscopy and energy-dispersive X-ray fluorescence (EDXRF) were used for a suitable fingerprint selection. To use DRIFT-MIR spectroscopy as a fingerprinting technique, calibration through quantitative parameters is needed to link and correlate DRIFT-MIR spectra with soil tracers. EDXRF was used in this context to determine the concentrations of geochemical elements in soil samples. The selected tracers were confirmed using two artificial mixtures composed of known proportions of soil collected at different sites with distinctive soil uses. These fingerprint elements were used as parameters to build a predictive model with the whole set of DRIFT-MIR spectra. Fingerprint elements such as phosphorus, iron, calcium, barium, and titanium were identified as providing a suitable reconstruction of the source proportions in the artificial mixtures. Mid-infrared spectra produced successful prediction models (R² = 0.91) for Fe content and moderately useful predictions (R² = 0.72) for Ti content. For Ca, P, and Ba, the R² values were 0.44, 0.58, and 0.59, respectively.
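The artificial-mixture check can be sketched as a linear unmixing problem: the mixture's tracer concentrations are a proportion-weighted sum of the source signatures. The tracer values below are invented for illustration, and the non-negativity and sum-to-one constraints used in practice are noted but not enforced, since an exact mixture is recovered exactly by unconstrained least squares.

```python
import numpy as np

# Hypothetical source signatures: rows = tracers (P, Fe, Ca, Ba, Ti),
# columns = three land-use sources (concentrations in arbitrary units).
S = np.array([[820.0, 450.0, 610.0],
              [31000.0, 28000.0, 35000.0],
              [5200.0, 7800.0, 4100.0],
              [480.0, 390.0, 520.0],
              [3600.0, 4400.0, 3900.0]])
w_true = np.array([0.5, 0.3, 0.2])   # known mixing proportions
mix = S @ w_true                     # artificial mixture concentrations

# Unconstrained least squares; real fingerprinting adds w >= 0 and sum(w) = 1.
w_est, *_ = np.linalg.lstsq(S, mix, rcond=None)
```

With noisy field mixtures the constrained fit matters, and the quality of the selected tracers (here P, Fe, Ca, Ba, Ti, as in the abstract) controls how well the proportions can be reconstructed.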
Strengths and limitations of molecular subtyping in a community outbreak of Legionnaires' disease.
Kool, J L; Buchholz, U; Peterson, C; Brown, E W; Benson, R F; Pruckler, J M; Fields, B S; Sturgeon, J; Lehnkering, E; Cordova, R; Mascola, L M; Butler, J C
2000-12-01
An epidemiological and microbiological investigation of a cluster of eight cases of Legionnaires' disease in Los Angeles County in November 1997 yielded conflicting results. The epidemiological part of the investigation implicated one of several mobile cooling towers used by a film studio in the centre of the outbreak area. However, water sampled from these cooling towers contained L. pneumophila serogroup 1 of a subtype different from the strain that was recovered from case-patients in the outbreak. Samples from two cooling towers located downwind from all of the case-patients contained a Legionella strain that was indistinguishable from the outbreak strain by four subtyping techniques (AP-PCR, PFGE, MAb, and MLEE). It is unlikely that these cooling towers were the source of infection for all the case-patients, and they were not associated with risk of disease in the case-control study. The outbreak strain was also not distinguishable, by three subtyping techniques (AP-PCR, PFGE, and MAb), from a L. pneumophila strain that had caused an outbreak in Providence, RI, in 1993. Laboratory cross-contamination was unlikely because the initial subtyping was done in different laboratories. In this investigation, microbiology was helpful for distinguishing the outbreak cluster from unrelated cases of Legionnaires' disease occurring elsewhere. However, multiple subtyping techniques failed to distinguish environmental sources that were probably not associated with the outbreak. Persons investigating Legionnaires' disease outbreaks should be aware that microbiological subtyping does not always identify a source with absolute certainty.
[Groundwater organic pollution source identification technology system research and application].
Wang, Xiao-Hong; Wei, Jia-Hua; Cheng, Zhi-Neng; Liu, Pei-Bin; Ji, Yi-Qun; Zhang, Gan
2013-02-01
Groundwater organic pollution has been found at a large number of locations, and such pollution spreads widely once it begins, making it hard to identify and control. The key to controlling and remediating groundwater pollution is to control the sources of pollution and reduce the danger to groundwater. This paper takes typical contaminated sites as examples, carries out source identification studies, establishes a groundwater organic pollution source identification system, and finally applies the system to the identification of typical contaminated sites. First, the geological and hydrogeological conditions of the contaminated sites were established, and the characteristic pollutant of the sites, carbon tetrachloride, was determined from a large body of groundwater analysis and test data; then a solute transport model of the contaminated sites was developed and compound-specific isotope techniques were applied. Finally, through the groundwater solute transport model and compound-specific isotope technology, the distribution and status of the organic pollution sources at the typical site were determined; identified potential pollution sources were investigated and soil was sampled for analysis. The results for the two identified historical pollution sources and the pollutant concentration distribution proved reliable, providing a basis for the treatment of groundwater pollution.
Micro-seismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurement is the task of estimating the locations of micro-seismic sources as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Conventional micro-seismic source-location methods often require manual picking of traveltime arrivals, which not only demands effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces strong nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function, and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function, and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet, and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time, and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
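The convolution-based, source-independent misfit can be sketched in one dimension: convolving each observed trace with a modeled reference trace, and each modeled trace with the observed reference, cancels the unknown source wavelet whenever the Green's functions are correct. The impulse Green's functions and wavelets below are illustrative, not the paper's examples.

```python
import numpy as np

def trace(wavelet, green):
    """A recorded trace is the source wavelet convolved with the Green's function."""
    return np.convolve(wavelet, green)

def conv_misfit(d_obs, d_syn, ref=0):
    """Sum of squared differences between cross-convolved trace pairs:
    d_obs[i] * d_syn[ref] vs d_syn[i] * d_obs[ref] (wavelet-independent)."""
    err = 0.0
    for i in range(len(d_obs)):
        a = np.convolve(d_obs[i], d_syn[ref])
        b = np.convolve(d_syn[i], d_obs[ref])
        err += float(np.sum((a - b) ** 2))
    return err

# True Green's functions: impulses at different delays; the true wavelet is
# never given to the "inversion", and the trial wavelet is arbitrary.
g_true = [np.eye(64)[5], np.eye(64)[12]]
w_true = np.array([0.2, 1.0, -0.6, 0.1])
w_trial = np.array([1.0, -0.3])

d_obs = [trace(w_true, gi) for gi in g_true]
d_syn_good = [trace(w_trial, gi) for gi in g_true]        # correct Green's functions
g_bad = [np.eye(64)[9], np.eye(64)[12]]                   # wrong delay on trace 0
d_syn_bad = [trace(w_trial, gi) for gi in g_bad]

m_good = conv_misfit(d_obs, d_syn_good)
m_bad = conv_misfit(d_obs, d_syn_bad)
```

Because convolution commutes, the cross-convolved pairs agree for any trial wavelet when the modeled Green's functions match the true ones (`m_good` is numerically zero), while a mislocated source leaves a large residual (`m_bad`), which is the property the FWI gradient exploits.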
A computer model for the 30S ribosome subunit.
Kuntz, I D; Crippen, G M
1980-01-01
We describe a computer-generated model for the locations of the 21 proteins of the 30S subunit of the E. coli ribosome. The model uses a new method of incorporating experimental measurements based on a mathematical technique called distance geometry. In this paper, we use data from two sources: immunoelectron microscopy and neutron-scattering studies. The data are generally self-consistent and lead to a set of relatively well-defined structures in which individual protein coordinates differ by approximately 20 Å from one structure to another. Two important features of this calculation are the use of extended proteins rather than just the centers of mass, and the ability to confine the protein locations within an arbitrary boundary surface so that only solutions with an approximate 30S "shape" are permitted. PMID:7020786
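The distance-geometry idea can be sketched with classical multidimensional scaling, which recovers coordinates (up to rotation and reflection) from an exact distance matrix. This is a simplified stand-in: the full distance-geometry method handles distance bounds, extended proteins, and boundary-surface constraints, none of which appear here.

```python
import numpy as np

def embed_from_distances(D, dim=3):
    """Classical MDS: 3D coordinates from a matrix of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # double-centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centred coordinates
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]      # top `dim` eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# 21 synthetic "protein centres" (the 30S subunit has 21 proteins), arbitrary units
rng = np.random.default_rng(3)
X = rng.normal(size=(21, 3)) * 30.0
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

X_rec = embed_from_distances(D)
D_rec = np.linalg.norm(X_rec[:, None, :] - X_rec[None, :, :], axis=-1)
```

With exact distances the embedding reproduces every pairwise distance; with the noisy, partial measurements of the actual study, the spread among admissible embeddings is what produces the ~20 Å coordinate variation quoted in the abstract.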
Ferrario, Damien; Grychtol, Bartłomiej; Adler, Andy; Solà, Josep; Böhm, Stephan H; Bodenstein, Marc
2012-11-01
Lung and cardiovascular monitoring applications of electrical impedance tomography (EIT) require localization of relevant functional structures or organs of interest within the reconstructed images. We describe an algorithm for automatic detection of heart and lung regions in a time series of EIT images. Using EIT reconstruction based on anatomical models, candidate regions are identified in the frequency domain and image-based classification techniques applied. The algorithm was validated on a set of simultaneously recorded EIT and CT data in pigs. In all cases, identified regions in EIT images corresponded to those manually segmented in the matched CT image. Results demonstrate the ability of EIT technology to reconstruct relevant impedance changes at their anatomical locations, provided that information about the thoracic boundary shape (and electrode positions) is used for reconstruction.
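The frequency-domain candidate detection step can be sketched as follows (a toy version; the band limits and the pixel-wise classification rule are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def classify_pixels(img_series, fs, resp_band=(0.1, 0.5), card_band=(0.8, 3.0)):
    """Label each pixel time series 'heart' or 'lung' by where its
    spectral power concentrates. The band limits are assumptions."""
    n_t = img_series.shape[0]
    freqs = np.fft.rfftfreq(n_t, d=1.0 / fs)
    spec = np.abs(np.fft.rfft(img_series, axis=0)) ** 2
    def band_power(lo, hi):
        return spec[(freqs >= lo) & (freqs < hi)].sum(axis=0)
    return np.where(band_power(*card_band) > band_power(*resp_band),
                    "heart", "lung")

# Two synthetic pixels: one respiration-dominated, one cardiac-dominated.
fs = 25.0
t = np.arange(0, 20, 1 / fs)
lung_px = np.sin(2 * np.pi * 0.25 * t)          # ~15 breaths per minute
heart_px = 0.3 * np.sin(2 * np.pi * 1.2 * t)    # ~72 beats per minute
labels = classify_pixels(np.column_stack([lung_px, heart_px]), fs)
print(labels)
```

In real EIT data the cardiac signal is much weaker than the ventilation signal, which is why a spectral ratio rather than raw amplitude is the natural discriminator.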
NASA Astrophysics Data System (ADS)
Nooshiri, Nima; Heimann, Sebastian; Saul, Joachim; Tilmann, Frederik; Dahm, Torsten
2015-04-01
Automatic earthquake locations are sometimes associated with very large residuals, up to 10 s even for clear arrivals, especially at regional stations in subduction zones because of their strongly heterogeneous velocity structure. Although these residuals are most likely related not to measurement errors but to unmodelled velocity heterogeneity, such stations are usually removed from, or down-weighted in, the location procedure. While this is possible for large events, it may not be viable if the earthquake is weak. In this case, implementing travel-time station corrections may significantly improve the automatic locations. Here, the shrinking-box source-specific station term (SSST) method [Lin and Shearer, 2005] has been applied to improve the relative location accuracy of 1678 events that occurred in the Tonga subduction zone between 2010 and mid-2014. Picks were obtained from the GEOFON earthquake bulletin for all available station networks. We calculated a set of timing corrections for each station that vary as a function of source position. A separate time correction was computed for each source-receiver path at a given station by smoothing the residual field over nearby events. We began with a very large smoothing radius, essentially encompassing the whole event set, and iterated while progressively shrinking the smoothing radius. In this way, we attempted to correct for the systematic errors that inaccuracies in the assumed velocity structure introduce into the locations, without solving for a new velocity model itself. One advantage of the SSST technique is that the event location part of the calculation is separate from the station term calculation and can be performed using any single-event location method. In this study, we applied a non-linear, probabilistic, global-search earthquake location method using the software package NonLinLoc [Lomax et al., 2000].
The non-linear location algorithm implemented in NonLinLoc is less sensitive to the problem of local misfit minima in the model space. Moreover, the spatial errors estimated by NonLinLoc are much more reliable than those derived by linearized algorithms. With the new procedure, the root-mean-square (RMS) residual decreased from 1.37 s for the original GEOFON catalog (using a global 1-D velocity model without station-specific corrections) to 0.90 s for our SSST catalog. Our results show a 45-70% reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations. Additionally, our locations exhibit less scatter in depth and a sharper image of the seismicity associated with the subducting slab compared to the initial locations.
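The core of the shrinking-radius SSST correction, for a single station, can be sketched as follows (a heavily simplified illustration: in the real method events are relocated after each pass, which is why the shrinking schedule matters there):

```python
import numpy as np

def ssst_terms(ev_xy, residuals, radii=(100.0, 50.0, 25.0, 10.0)):
    """Source-specific station terms for one station: each event's
    correction is the mean residual of events within a radius that
    shrinks per pass. In this sketch only the final radius determines
    the answer; the full method relocates events between passes."""
    d = np.linalg.norm(ev_xy[:, None] - ev_xy[None, :], axis=-1)
    corr = np.zeros_like(residuals)
    for r in radii:
        for i in range(len(residuals)):
            corr[i] = residuals[d[i] <= r].mean()
    return corr, residuals - corr

# Two event clusters with different systematic path anomalies plus noise.
rng = np.random.default_rng(2)
xy = np.vstack([rng.normal(0, 3, (30, 2)), rng.normal(60, 3, (30, 2))])
true_term = np.r_[np.full(30, 1.5), np.full(30, -0.8)]
obs = true_term + rng.normal(0, 0.1, 60)
corr, res = ssst_terms(xy, obs)
print(np.std(obs), np.std(res))    # the corrected scatter is much smaller
```

The spatially varying correction absorbs the cluster-dependent path anomaly, leaving only the pick-noise level in the residuals.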
Acoustic localization of triggered lightning
NASA Astrophysics Data System (ADS)
Arechiga, Rene O.; Johnson, Jeffrey B.; Edens, Harald E.; Thomas, Ronald J.; Rison, William
2011-05-01
We use acoustic (3.3-500 Hz) arrays to locate local (<20 km) thunder produced by triggered lightning in the Magdalena Mountains of central New Mexico. The locations of the thunder sources are determined by the array back azimuth and the elapsed time since discharge of the lightning flash. We compare the acoustic source locations with those obtained by the Lightning Mapping Array (LMA) from Langmuir Laboratory, which is capable of accurately locating the lightning channels. To estimate the location accuracy of the acoustic array we performed Monte Carlo simulations and measured the distance (nearest neighbors) between acoustic and LMA sources. For close sources (<5 km) the mean nearest-neighbors distance was 185 m compared to 100 m predicted by the Monte Carlo analysis. For far distances (>6 km) the error increases to 800 m for the nearest neighbors and 650 m for the Monte Carlo analysis. This work shows that thunder sources can be accurately located using acoustic signals.
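The basic geometry of locating a thunder source from a back azimuth and the flash-to-thunder delay reduces to a polar-to-Cartesian conversion (a flat-geometry sketch with an assumed sound speed):

```python
import math

def thunder_source(array_xy, back_az_deg, elapsed_s, c=343.0):
    """Locate a thunder source from the array back azimuth and the time
    elapsed since the (effectively instantaneous) light of the flash.
    c is an assumed speed of sound in m/s; it varies with temperature
    and altitude in practice."""
    r = c * elapsed_s                       # acoustic propagation distance
    az = math.radians(back_az_deg)          # clockwise from north
    x = array_xy[0] + r * math.sin(az)
    y = array_xy[1] + r * math.cos(az)
    return x, y

# A source 3430 m due east of the array arrives 10 s after the flash.
x, y = thunder_source((0.0, 0.0), 90.0, 10.0)
print(round(x), round(y))
```

Range error grows linearly with timing and sound-speed error, consistent with the paper's observation that location accuracy degrades with distance.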
Development of Vertical Cable Seismic System (2)
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Tsukahara, H.; Ishikawa, K.
2012-12-01
The vertical cable seismic is one of the reflection seismic methods. It uses hydrophone arrays moored vertically from the seafloor to record acoustic waves generated by surface, deep-towed or ocean-bottom sources. By analyzing the reflections from the sub-seabed, we can image the subsurface structure. This type of survey is generally called VCS (Vertical Cable Seismic). Because VCS is an efficient high-resolution 3D seismic survey method for a spatially bounded area, we proposed it for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. Our first VCS survey was carried out in Lake Biwa, Japan, in November 2009 as a feasibility study. Prestack depth migration was applied to the 3D VCS data to obtain a high-quality 3D depth volume. Based on the results of the feasibility study, we developed two autonomous recording VCS systems. After a trial experiment in the open ocean at a water depth of about 400 m, we carried out a second VCS survey at Iheya Knoll with a deep-towed source. In this survey, we established procedures for the deployment and recovery of the system and examined the locations and fluctuations of the vertical cables at a water depth of around 1000 m. The acquired VCS data clearly show reflections from the sub-seafloor. Through this experiment, we confirmed that our VCS system works well even in the severe conditions around seafloor hydrothermal deposits. We carried out two further field surveys in 2011: a 3D survey with a boomer as a high-resolution surface source, and an actual field survey in the Izena Cauldron, an active hydrothermal area in the Okinawa Trough.
Through these surveys, we have confirmed that uncertainty in the locations of the source and of the hydrophones in the water can lower the quality of the subsurface image. It is therefore essential to develop a total survey system that assures accurate positioning and deployment. For shooting at the sea surface, GPS navigation systems are available, but for a deep-towed or ocean-bottom source, the accuracy of shot positions obtained with SSBL/USBL is not sufficient for the very high-resolution imaging required for SMS surveys. We will incorporate an accurate LBL navigation system with VCS. The LBL navigation system has been developed by IIS of the University of Tokyo; its error is estimated to be less than 10 cm at a water depth of 3000 m. Another approach is to calculate the shot points from the first breaks of the VCS data, after the VCS locations have been estimated by slant-ranging from the sea surface. Our VCS system has been designed as a survey tool for hydrothermal deposits, but it will also be applicable to deep-water site surveys and geohazard assessments such as active-fault studies.
NASA Astrophysics Data System (ADS)
Kelly, C. L.; Lawrence, J. F.
2014-12-01
During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50 m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200 m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120 s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions, which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and to determine variations in source depth and distribution in the conduit and larger geyser field over many eruption cycles.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of area source pollutant strength is a relevant issue for the atmospheric environment, and it constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the squared difference between the measured pollutant concentrations and the model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with the neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
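The delta rule update mentioned above can be shown for a single linear unit (the paper trains a multi-layer perceptron; this sketch only illustrates the basic learning step, on an invented toy inverse problem):

```python
import numpy as np

def delta_rule_fit(X, y, lr=0.05, epochs=200):
    """Delta-rule (Widrow-Hoff) update for a single linear unit:
    w += lr * (target - output) * input, applied sample by sample."""
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w += lr * (yi - xi @ w) * xi
    return w

# Toy inverse problem: recover three source-strength weights from
# noiseless synthetic "concentration" data.
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 3))
w_true = np.array([2.0, -1.0, 0.5])
w = delta_rule_fit(X, X @ w_true)
print(np.round(w, 3))
```

For a multi-layer perceptron the same error-correction idea is propagated backwards through the hidden layers (backpropagation).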
An integrated approach to evaluate the Aji-Chai potash resources in Iran using potential field data
NASA Astrophysics Data System (ADS)
Abedi, Maysam
2018-03-01
This work presents an integrated application of potential field data to discover potash-bearing evaporite sources in the Aji-Chai salt deposit, located in East Azerbaijan province, northwest Iran. The low density and diamagnetic effect of salt minerals, i.e. potash, give rise to promising potential field anomalies that help localize the sought blind targets. The halokinetic-type potash-bearing salts in the prospect zone have flowed upward and intruded into the surrounding sedimentary sequences, dominated mainly by marl, gypsum and alluvium terraces. Processed gravity and magnetic data delineated a main potash source with negative gravity and magnetic amplitude responses. To better visualize these evaporite deposits, 3D models of density contrast and magnetic susceptibility were constructed through constrained inversion of the potential field data. A mixed-norm regularization technique was employed to generate sharp and compact geophysical models. Since tectonic pressure causes vertical movement of the potash in the studied region, a simple vertical cylinder is an appropriate geometry to simulate these geological targets. Therefore, the structural index (i.e. the decay rate of the potential field amplitude with distance) of the assumed source was embedded in the inversion program as a geometrical constraint to image these geologically plausible sources. In addition, the depths to the top of the main and adjacent sources were estimated at 39 m and 22 m, respectively, via a combination of the analytic signal and Euler deconvolution techniques. Drilling results also indicated that the main potash source starts at a depth of 38 m. The 3D models of density contrast and magnetic susceptibility (assuming a superficial sedimentary cover as a hard constraint in the inversion algorithm) demonstrated that the potash source extends to a depth of less than 150 m.
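Euler deconvolution, used here for the depth-to-top estimates, amounts to a small least-squares solve built from the field and its gradients. A 2-D sketch with a synthetic 1/r field (the structural index, geometry and values are illustrative, not the paper's data):

```python
import numpy as np

def euler_depth(x, z, f, fx, fz, N):
    """2-D Euler deconvolution: least-squares solve of
    (x - x0)*fx + (z - z0)*fz = -N*f  for the source position (x0, z0),
    where N is the structural index of the assumed source geometry."""
    A = np.column_stack([fx, fz])
    b = x * fx + z * fz + N * f
    (x0, z0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, z0

# Synthetic profile at z = 0 over a 1/r source at (10, 39), index N = 1
# (z positive downward; 39 m echoes the paper's depth-to-top estimate).
x = np.linspace(-50.0, 70.0, 121)
x0_true, z0_true = 10.0, 39.0
r2 = (x - x0_true) ** 2 + z0_true ** 2
f = 1.0 / np.sqrt(r2)
fx = -(x - x0_true) / r2 ** 1.5     # analytic d f / d x
fz = z0_true / r2 ** 1.5            # analytic d f / d z at z = 0
est = euler_depth(x, np.zeros_like(x), f, fx, fz, N=1)
print(np.round(est, 2))
```

With noise-free data and the correct structural index the recovery is exact; in practice N must match the assumed source shape (the vertical cylinder here), which is why the paper embeds it as a constraint.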
Fiber Bragg grating based arterial localization device
NASA Astrophysics Data System (ADS)
Ho, Siu Chun Michael; Li, Weijie; Razavi, Mehdi; Song, Gangbing
2017-06-01
A critical first step in many surgical procedures is locating and gaining access to a patient's vascular system. Vascular access allows the deployment of other surgical instruments and also the monitoring of many physiological parameters. Current methods to locate blood vessels are predominantly based on the landmark technique coupled with ultrasound, fluoroscopy, or Doppler. However, even with experience and technological assistance, locating the required blood vessel is not always an easy task, especially with patients who present atypical anatomy or suffer from conditions, such as weak pulsation or obesity, that make vascular localization difficult. With recent advances in fiber optic sensors, there is an opportunity to develop a new tool that can make vascular localization safer and easier. In this work, the authors present a new fiber Bragg grating (FBG) based vascular access device that specializes in arterial localization. The device estimates the direction towards a local artery based on the bending of a needle inserted in the tissue surrounding the artery. Experimental results obtained from an artificial circulatory loop and a mock artery show the device works best for lower angles of needle insertion and can provide an approximately 40° range of estimation towards the location of a pulsating source (e.g. an artery).
Controlled-source seismic interferometry with one-way wave fields
NASA Astrophysics Data System (ADS)
van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.
2008-12-01
In Seismic Interferometry we generally cross-correlate registrations at two receiver locations and sum over an array of sources to retrieve a Green's function as if one of the receiver locations hosts a (virtual) source and the other receiver location hosts an actual receiver. One application of this concept is to redatum an area of surface sources to a downhole receiver location, without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden, which is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source Seismic Interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves with receivers sensing only down- or upgoing P- or S-waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. Required is the deployment of multi-component sources at the surface and multi-component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both source and receiver side.
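The redatuming kernel, cross-correlating the two receiver recordings and stacking over sources, can be sketched as follows (a scalar toy with impulsive arrivals; it ignores the elastic wavefield decomposition that is central to the paper):

```python
import numpy as np

def virtual_source(rec_a, rec_b):
    """Interferometry kernel: cross-correlate receiver B with receiver A
    for every shot and stack. The stack approximates the response at B
    to a virtual source at A; lag index (n - 1) is zero lag."""
    n = rec_a.shape[1]
    stack = np.zeros(2 * n - 1)
    for a, b in zip(rec_a, rec_b):
        stack += np.correlate(b, a, mode="full")
    return stack

# Toy: every shot reaches B exactly 7 samples after it reaches A.
n_src, n = 50, 200
rng = np.random.default_rng(4)
rec_a = np.zeros((n_src, n))
rec_b = np.zeros((n_src, n))
for k, t in enumerate(rng.integers(20, 150, n_src)):
    rec_a[k, t] = 1.0
    rec_b[k, t + 7] = 1.0
stack = virtual_source(rec_a, rec_b)
print(np.argmax(stack) - (n - 1))   # recovered A-to-B traveltime in samples
```

The shot-dependent absolute arrival times cancel in the correlation, leaving only the inter-receiver traveltime, which is exactly why no velocity model of the overburden is needed.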
NASA Astrophysics Data System (ADS)
Meng, L.; Ampuero, J. P.; Rendon, H.
2010-12-01
Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an East-West oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using the teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification method (MUSIC), a high-resolution array processing technique based on the orthogonality of the signal and noise subspaces of the data covariance matrix, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming.
We also study the inherent bias due to interference of coherent Green's functions, which points towards a quantification of the bias-related uncertainty of back projection. Preliminary results from the Venezuela data set show East-to-West rupture propagation along the fault at sub-Rayleigh rupture speed, consistent with a compact source with two significant asperities, which are confirmed by the source time function obtained from Green's function deconvolution and by other source inversion results. These efforts could lead the Venezuela National Seismic Network to play a prominent role in the timely characterization of the rupture process of large earthquakes in the Caribbean, including future ruptures along the yet unbroken segments of the Enriquillo fault system.
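A minimal narrowband MUSIC sketch for a uniform line array shows how two sources closer than the classical beamwidth are still resolved (the array geometry, SNR and angles are invented for illustration):

```python
import numpy as np

def music_spectrum(R, n_src, A):
    """MUSIC pseudospectrum: 1 / ||En^H a(theta)||^2, where En spans the
    noise subspace (eigenvectors of R beyond the n_src largest)."""
    w, V = np.linalg.eigh(R)                 # ascending eigenvalues
    En = V[:, : R.shape[0] - n_src]          # noise-subspace eigenvectors
    return 1.0 / (np.abs(En.conj().T @ A) ** 2).sum(axis=0)

# 8-sensor half-wavelength line array, two sources only 6 degrees apart
# (well inside the conventional beamwidth).
M, d = 8, 0.5                                # spacing in wavelengths

def steer(deg):
    k = 2 * np.pi * d * np.sin(np.deg2rad(deg))
    return np.exp(1j * k * np.arange(M))

rng = np.random.default_rng(5)
S = rng.standard_normal((2, 400)) + 1j * rng.standard_normal((2, 400))
X = np.outer(steer(10.0), S[0]) + np.outer(steer(16.0), S[1])
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))
R = X @ X.conj().T / X.shape[1]
grid = np.arange(0.0, 30.0, 0.5)
P = music_spectrum(R, 2, np.column_stack([steer(g) for g in grid]))
mask = grid < 13.0
lo, hi = grid[mask][np.argmax(P[mask])], grid[~mask][np.argmax(P[~mask])]
print(lo, hi)
```

Plain beamforming with the same 8 elements would merge the two peaks; the subspace projection is what gives MUSIC its super-resolution, at the cost of needing the number of sources in advance.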
Using rare earth elements to trace wind-driven dispersion of sediments from a point source
NASA Astrophysics Data System (ADS)
Van Pelt, R. Scott; Barnes, Melanie C. W.; Strack, John E.
2018-06-01
The entrainment and movement of aeolian sediments is determined by the direction and intensity of erosive winds. Although erosive winds may blow from all directions, in most regions there is a predominant direction. Dust emission preferentially removes soil nutrients and contaminants, which may be transported tens to even thousands of kilometers from the source and deposited into other ecosystems. It would be beneficial to understand, spatially and temporally, how the soil source may be degraded and the depositional zones enriched. A stable chemical tracer not found in the soil but applied to the surface of all particles in the surface soil would facilitate this endeavor. This study examined whether solution-applied rare earth elements (REEs) could be used to trace aeolian sediment movement from a point source through space and time at the field scale. We applied erbium nitrate solution to a 5 m2 area in the center of a 100 m diameter (7854 m2) field on the Southern High Plains of Texas. The solution application resulted in a soil-borne concentration three orders of magnitude greater than that natively found in the field soil. We installed BSNE sampler masts in circular configurations and collected the trapped sediment weekly. We found that REE-tagged sediment was blown into every sampler mast during the course of the study, but that there was a predominant direction of transport during the spring. This preliminary investigation suggests that REEs provide a viable and incisive technique for studying the spatial and temporal variation of aeolian sediment movement from specific sources to identifiable locations of deposition, or to locations through which the sediments were transported as horizontal mass flux, and for quantifying the relative contribution of a specific source to the total mass flux.
Waves on Thin Plates: A New (Energy Based) Method on Localization
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut
2016-04-01
Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method applicable to thin plates which is based on energy amplitude attenuation and comparison of inverted source amplitudes. This inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival-time localization, time-reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass/plexiglass plate of dimensions 80 cm x 40 cm x 1 cm equipped with four accelerometers and an acquisition card. Signals are generated by a quasi-perpendicular hit on the plate with a steel, glass or polyamide ball (of various sizes) dropped from a height of 2-3 cm, and are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio and computational time. We show that this new, very versatile technique works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz. It is possible to achieve decent resolution (3 cm mean error) using very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by placing image sources outside the plate boundaries. The proposed method can easily be extended to three-dimensional environments, to monitor industrial activities (e.g. borehole drilling/production activities) or natural brittle systems (e.g. earthquakes, volcanoes, avalanches).
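The energy-based inversion idea, back-projecting each sensor's energy to an implied source amplitude and searching for the point where the implied amplitudes agree, can be sketched as a grid search (the 1/r attenuation law and noise-free data are simplifying assumptions, not the paper's Lamb-wave model):

```python
import numpy as np

def locate_by_energy(sensors, energies, grid, gamma=1.0):
    """Grid search: at each trial point, back-project each sensor's
    energy to an implied source energy E0 = E * r**gamma, then pick the
    point where the implied E0 values agree best across sensors
    (smallest relative spread). gamma = 1 assumes geometric spreading."""
    best, best_cost = None, np.inf
    for p in grid:
        r = np.linalg.norm(sensors - p, axis=1)
        e0 = energies * r ** gamma
        cost = np.std(e0) / np.mean(e0)
        if cost < best_cost:
            best, best_cost = p, cost
    return best

# Four corner sensors on an 80 x 40 plate, source at (50, 25).
sensors = np.array([[0.0, 0.0], [80.0, 0.0], [0.0, 40.0], [80.0, 40.0]])
src = np.array([50.0, 25.0])
energies = 100.0 / np.linalg.norm(sensors - src, axis=1)  # 1/r decay
grid = np.array([[x, y] for x in np.arange(0, 81, 1.0)
                        for y in np.arange(0, 41, 1.0)])
print(locate_by_energy(sensors, energies, grid))
```

Because only amplitude ratios are used, the unknown source strength drops out of the problem, which is what makes the approach robust to impacts of different sizes and materials.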
NASA Astrophysics Data System (ADS)
Kim, G.; Che, I. Y.
2017-12-01
We evaluated relationships among the source parameters of the underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the event locations are tiny on a regional scale. These tiny location differences validate a linear model assumption. We estimated source spectral ratios by excluding path effects based on spectral ratios of the observed seismograms, and we derived empirical relationships among the depths of burial and the yields based on theoretical source models.
Simple equations guide high-frequency surface-wave investigation techniques
Xia, J.; Xu, Y.; Chen, C.; Kaufmann, R.D.; Luo, Y.
2006-01-01
We discuss five useful equations related to high-frequency surface-wave techniques and their implications in practice. These equations are theoretical results from the published literature regarding source selection, data-acquisition parameters, resolution of a dispersion-curve image in the frequency-velocity domain, and the cut-off frequency of higher modes. The first equation suggests that Rayleigh waves appear at the shortest offset when a source is located on the ground surface, which supports our observations that surface impact sources are the best sources for surface-wave techniques. The second and third equations, based on the layered earth model, reveal a relationship between the optimal nearest offset in Rayleigh-wave data acquisition and the seismic setting: the observed maximum and minimum phase velocities and the maximum wavelength. Comparison among data acquired with different offsets at one test site confirms that better data were acquired with the suggested optimal nearest offset. The fourth equation shows that the resolution of a dispersion-curve image at a given frequency is directly proportional to the product of the length of the geophone array and the frequency. We used real-world data to verify the fourth equation. The last equation shows that the cut-off frequency of higher modes of Love waves for a two-layer model is determined by the shear-wave velocities and the thickness of the top layer. We applied this equation to Rayleigh waves and multi-layer models with the average velocity and obtained encouraging results. This equation not only provides a criterion to distinguish higher modes from numerical artifacts but also a straightforward means to resolve the depth to the half-space of a layered earth model. © 2005 Elsevier Ltd. All rights reserved.
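The last equation referred to above has the standard layer-over-half-space form; a worked example follows (the velocities and thickness are invented, and the formula should be checked against the model actually used):

```python
import math

def love_cutoff(n, h, b1, b2):
    """Cut-off frequency of the nth higher Love-wave mode for a single
    layer (thickness h, shear velocity b1) over a half-space (b2 > b1):
    f_n = n / (2 * h * sqrt(1/b1**2 - 1/b2**2)).
    Standard layer-over-half-space result; units: h in m, b in m/s."""
    return n / (2.0 * h * math.sqrt(1.0 / b1 ** 2 - 1.0 / b2 ** 2))

# 10 m of 200 m/s soil over a 600 m/s half-space:
f1 = love_cutoff(1, 10.0, 200.0, 600.0)
print(round(f1, 2))   # first higher mode appears only above ~10.6 Hz
```

Below the cut-off the higher mode cannot propagate, which is what makes the formula useful for separating true higher modes from numerical artifacts in a dispersion image.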
Applications of Advanced, Waveform Based AE Techniques for Testing Composite Materials
NASA Technical Reports Server (NTRS)
Prosser, William H.
1996-01-01
Advanced, waveform based acoustic emission (AE) techniques have been previously used to evaluate damage progression in laboratory tests of composite coupons. In these tests, broad band, high fidelity acoustic sensors were used to detect signals which were then digitized and stored for analysis. Analysis techniques were based on plate mode wave propagation characteristics. This approach, more recently referred to as Modal AE, provides an enhanced capability to discriminate and eliminate noise signals from those generated by damage mechanisms. This technique also allows much more precise source location than conventional, threshold crossing arrival time determination techniques. To apply Modal AE concepts to the interpretation of AE on larger composite structures, the effects of wave propagation over larger distances and through structural complexities must be well characterized and understood. In this research, measurements were made of the attenuation of the extensional and flexural plate mode components of broad band simulated AE signals in large composite panels. As these materials have applications in a cryogenic environment, the effects of cryogenic insulation on the attenuation of plate mode AE signals were also documented.
Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, S.T.C.; Knowlton, R.; Hoo, K.S.
1995-12-31
Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, in order to improve clinical decision-making processes, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the non-invasive presurgical evaluation of epilepsy patients. These techniques include on-line access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantification of the structural and functional information contained in the registered images. For illustration, the authors describe the use of these techniques in a patient case of non-lesional neocortical epilepsy. They also present future work based on preliminary studies.
Code of Federal Regulations, 2011 CFR
2011-07-01
... RICE Located at Area Sources of HAP Emissions 2d Table 2d to Subpart ZZZZ of Part 63 Protection of... 2d Table 2d to Subpart ZZZZ of Part 63—Requirements for Existing Stationary RICE Located at Area... requirements for existing stationary RICE located at area sources of HAP emissions: For each . . . You must...
An inexpensive active optical remote sensing instrument for assessing aerosol distributions.
Barnes, John E; Sharma, Nimmi C P
2012-02-01
Air quality studies on a broad variety of topics, from health impacts to source/sink analyses, require information on the distributions of atmospheric aerosols over both altitude and time. An inexpensive, simple to implement, ground-based optical remote sensing technique has been developed to assess aerosol distributions. The technique, called CLidar (Charge Coupled Device Camera Light Detection and Ranging), provides aerosol altitude profiles over time. In the CLidar technique a relatively low-power laser transmits light vertically into the atmosphere. The transmitted laser light scatters off of air molecules, clouds, and aerosols. The entire beam from ground to zenith is imaged using a CCD camera and wide-angle (100 degree) optics located a few hundred meters from the laser. The CLidar technique is optimized for low-altitude (boundary layer and lower troposphere) measurements, where most aerosols are found and where many other profiling techniques face difficulties. Currently the technique is limited to nighttime measurements. Using the CLidar technique, aerosols may be mapped over both altitude and time. The instrumentation required is portable and can easily be moved to locations of interest (e.g. downwind from factories or power plants, near highways). This paper describes the CLidar technique, its implementation and data analysis, and offers specifics for users wishing to apply the technique for aerosol profiles.
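The side-scatter geometry that gives CLidar its altitude profile is simple triangulation (a flat-geometry sketch that ignores atmospheric refraction and the camera's own height):

```python
import math

def clidar_altitude(ground_dist_m, elev_angle_deg):
    """CLidar geometry: a camera a known horizontal distance D from the
    vertical laser beam sees scattering from altitude z at elevation
    angle theta, with z = D * tan(theta). Simplified sketch only."""
    return ground_dist_m * math.tan(math.radians(elev_angle_deg))

# Camera 300 m from the beam: a 45 degree pixel maps to 300 m altitude.
print(clidar_altitude(300.0, 45.0))
```

Each camera pixel corresponds to one elevation angle, so a single exposure captures the whole profile at once; near the zenith the tangent diverges, which is why the geometry favors the low altitudes the abstract emphasizes.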
Backscatter X-Ray Development for Space Vehicle Thermal Protection Systems
NASA Astrophysics Data System (ADS)
Bartha, Bence B.; Hope, Dale; Vona, Paul; Born, Martin; Corak, Tony
2011-06-01
The Backscatter X-Ray (BSX) imaging technique is used for various single-sided inspection purposes. Previously developed BSX techniques for spray-on foam insulation (SOFI) have been used for detecting defects in Space Shuttle External Tank foam insulation. The developed BSX hardware and techniques are currently being enhanced to advance Non-Destructive Evaluation (NDE) methods for future space vehicle applications. Various Thermal Protection System (TPS) materials were inspected using the enhanced BSX imaging techniques, investigating the capability of the method to detect voids and other discontinuities at various locations within each material. Calibration standards were developed for the TPS materials in order to characterize and develop enhanced BSX inspection capabilities. The ability of the BSX technique to detect both manufactured and natural defects was also studied and compared to through-transmission x-ray techniques. The x-ray energy, source-to-object distance, x-ray angle, focal spot size, and x-ray detector configuration were parameters playing a significant role in the sensitivity of the BSX technique when imaging various materials and defects. Image processing of the results also yielded a significant increase in the sensitivity of the technique. The experimental results showed BSX to be a viable inspection technique for space vehicle thermal protection systems.
NASA Astrophysics Data System (ADS)
Nealy, J. L.; Benz, H.; Hayes, G. P.; Bergman, E.; Barnhart, W. D.
2016-12-01
On February 21, 2008 at 14:16:02 (UTC), Wells, Nevada experienced a Mw 6.0 earthquake, the largest earthquake in the state within the past 50 years. Here, we re-analyze in detail the spatiotemporal variations of the foreshock and aftershock sequence and compare the distribution of seismicity to a recent slip model based on inversion of InSAR observations. A catalog of earthquakes for the time period of February 1, 2008 through August 31, 2008 was derived by combining arrival time picks from a kurtosis detector (primarily P arrival times) and a subspace detector (primarily S arrival times), associating the combined pick dataset, and applying multiple-event relocation techniques using the 19 closest USArray Transportable Array stations, permanent regional seismic monitoring stations in Nevada and Utah, and temporary stations deployed for an aftershock study. We were able to detect several thousand earthquakes in the months following the mainshock as well as several foreshocks in the days leading up to the event. We reviewed the picks for the largest 986 earthquakes and relocated them using the Hypocentroidal Decomposition (HD) method. The HD technique provides both relative locations for the individual earthquakes and an absolute location for the earthquake cluster, resulting in absolute locations of the events in the cluster having minimal bias from unknown Earth structure. A subset of these "calibrated" earthquake locations that spanned the duration of the sequence and had small location uncertainties were used as prior constraints within a second relocation effort using the entire dataset and the Bayesloc approach. Accurate locations (to within 2 km) were obtained using Bayesloc for 1,952 of the 2,157 events associated over the seven-month period of the study. The final catalog of earthquake hypocenters indicates that the aftershocks extend for about 20 km along the strike of the ruptured fault.
The aftershocks occur primarily updip and along the southwestern edge of the zone of maximum slip as modeled by seismic waveform inversion (Dreger et al., 2011) and by InSAR. The aftershock locations illuminate areas of post-mainshock strain increase and their depths are consistent with InSAR imaging, which showed that the Wells earthquake was a buried source with no observable near-surface offset.
Automatic localization of backscattering events due to particulate in urban areas
NASA Astrophysics Data System (ADS)
Gaudio, P.; Gelfusa, M.; Malizia, Andrea; Parracino, Stefano; Richetta, M.; Murari, A.; Vega, J.
2014-10-01
Particulate matter (PM), emitted by vehicles in urban traffic, can greatly affect environmental air quality and has direct implications for both human health and infrastructure integrity. The consequences for society are relevant and can also affect national health systems. Limits and thresholds of pollutants emitted by vehicles are typically regulated by government agencies. In the last few years, the interest in PM emissions has grown substantially due to both air quality issues and global warming. Lidar-Dial techniques are widely recognized as a cost-effective alternative to monitor large regions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable, automatic monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi-Event Locator (UMEL), is applied to the problem of automatically identifying the time location of peaks in Lidar measurements for the detection of particulate matter emitted by anthropogenic sources such as vehicles. The method developed is based on Support Vector Regression and presents various advantages with respect to more traditional techniques. In particular, UMEL is based on the morphological properties of the signals and therefore the method is insensitive to the details of the noise present in the detection system. The approach is also fully general, purely software-based, and can therefore be applied to a large variety of problems without any additional cost. The potential of the proposed technique is exemplified with the help of data acquired during an experimental campaign in the field in Rome.
Atom probe field ion microscopy and related topics: A bibliography 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godfrey, R.D.; Miller, M.K.; Russell, K.F.
1994-10-01
This bibliography, covering the period 1993, includes references related to the following topics: atom probe field ion microscopy (APFIM), field emission (FE), and field ion microscopy (FIM). Technique-oriented studies and applications are included. The references contained in this document were compiled from a variety of sources including computer searches and personal lists of publications. To reduce the length of this document, the references have been reduced to the minimum necessary to locate the articles. The references are listed alphabetically by author. An addendum of references missed in previous bibliographies is included.
Solar rejection for an orbiting telescope
NASA Technical Reports Server (NTRS)
Rehnberg, J. D.
1975-01-01
The present work discusses some of the constraints that the optical designer must deal with in optimizing spaceborne sensors that must look at or near the sun. Analytical techniques are described for predicting the effects of stray radiation from sources such as mirror scatter, baffle scatter, diffraction, and ghost images. In addition, the paper describes a sensor design that has been flown on the Apollo Telescope Mount (Skylab) to aid astronauts in locating solar flares. In addition to keeping stray radiation to a minimum, the design had to be nondegradable by the direct solar heat load.
NASA Technical Reports Server (NTRS)
Loh, Yin C.; Boster, John; Hwu, Shian; Watson, John C.; deSilva, Kanishka; Piatek, Irene (Technical Monitor)
1999-01-01
The Wireless Video System (WVS) provides real-time video coverage of astronaut extravehicular activities during International Space Station (ISS) assembly. The ISS wireless environment is unique due to the nature of the ISS structure and multiple RF interference sources. This paper describes how the system was developed to combat multipath, blockage, and interference using an automatic antenna switching system. Critical to system performance is the selection of receiver antenna installation locations, determined using Uniform Geometrical Theory of Diffraction (GTD) techniques.
The Effect of Center of Gravity and Anthropometrics on Human Performance in Simulated Lunar Gravity
NASA Technical Reports Server (NTRS)
Mulugeta, Lealem; Chappell, Steven P.; Skytland, Nicholas G.
2009-01-01
The NASA EVA Physiology, Systems and Performance (EPSP) Project at JSC has been investigating the effects of center of gravity and other factors on astronaut performance in reduced gravity. A subset of the studies has been performed with the water immersion technique. Study results show a correlation between center of gravity location and performance. However, data variability was observed between subjects for prescribed center of gravity configurations. The hypothesis is that anthropometric differences between test subjects could be a source of the performance variability.
A martial arts exploration of elbow anatomy: Ikkyo (Aikido's first teaching).
Seitz, F C; Olson, G D; Stenzel, T E
1991-12-01
The Martial Art of Aikido, based on several effective anatomical principles, is used to subdue a training partner. One of these methods is Ikkyo (First Teaching). According to Saotome, the original intent of Ikkyo was to "break the elbow joint" of an enemy. Nowadays the intent is to secure or pin a training partner to the mat. This investigation focused on examining Ikkyo with the purpose of describing the nerves, bones, and muscles involved in receiving this technique. Particular focus was placed on the locations and sources of the reported pain.
Remote sensing impact on corridor selection and placement
NASA Technical Reports Server (NTRS)
Thomson, F. J.; Sellman, A. N.
1975-01-01
Computer-aided corridor selection techniques, utilizing digitized data bases of socio-economic, census, and cadastral data, and developed for highway corridor routing are considered. Land resource data generated from various remote sensing data sources were successfully merged with the ancillary data files of a corridor selection model and prototype highway corridors were designed using the combined data set. Remote sensing derived information considered useful for highway corridor location, special considerations in geometric correction of remote sensing data to facilitate merging it with ancillary data files, and special interface requirements are briefly discussed.
Simulation of oxygen saturation measurement in a single blood vein.
Duadi, Hamootal; Nitzan, Meir; Fixler, Dror
2016-09-15
The value of oxygen saturation in venous blood, SvO2, has important clinical significance since it is related to the tissue oxygen utilization, which is related to the blood flow to the tissue and to its metabolism rate. However, existing pulse oximetry techniques are not suitable for blood in veins. In the current study we examine the feasibility of difference oximetry to assess SvO2 by using two near-infrared wavelengths and collecting the backscattered light from two photodetectors located at different distances from the light source.
Connecting kinematic and dynamic reference frames by D-VLBI
NASA Astrophysics Data System (ADS)
Schuh, Harald; Plank, Lucia; Madzak, Matthias; Böhm, Johannes
2012-08-01
In geodetic and astrometric practice, terrestrial station coordinates are usually provided in the kinematic International Terrestrial Reference Frame (ITRF) and radio source coordinates in the International Celestial Reference Frame (ICRF), whereas measurements of space probes such as satellites and spacecraft, or planetary ephemerides, rest upon dynamical theories. To avoid inconsistencies and errors during measurement and calculation procedures, exact frame ties between quasi-inertial, kinematic, and dynamic reference frames have to be secured. While the Earth Orientation Parameters (EOP), e.g. measured by VLBI, link the ITRF to the ICRF, the ties with the dynamic frames can be established with the differential Very Long Baseline Interferometry (D-VLBI) method. By observing space probes alternately with radio sources, the relative position of the targets to each other on the sky can be determined with high accuracy. While D-VLBI is a common technique in astrophysics (source imaging) and deep space navigation, just recently there have been several efforts to use it for geodetic purposes. We present investigations concerning possible VLBI observations to satellites. This includes the potential usage of available GNSS satellites as well as specifically designed missions, e.g. the GRASP mission proposed by JPL/NASA and an international consortium, where the aspect of co-location in space of various techniques (VLBI, SLR, GNSS, DORIS) is the main focus.
Material and physical model for evaluation of deep brain activity contribution to EEG recordings
NASA Astrophysics Data System (ADS)
Ye, Yan; Li, Xiaoping; Wu, Tiecheng; Li, Zhe; Xie, Wenwen
2015-12-01
Deep brain activity is conventionally recorded with surgical implantation of electrodes. During the neurosurgery, brain tissue damage and the consequent side effects to patients are inevitably incurred. In order to eliminate undesired risks, we propose that deep brain activity should be measured using the noninvasive scalp electroencephalography (EEG) technique. However, the deeper the neuronal activity is located, the noisier the corresponding scalp EEG signals are. Thus, the present study aims to evaluate whether deep brain activity could be observed from EEG recordings. In the experiment, a three-layer cylindrical head model was constructed to mimic a human head. A single dipole source (sine wave, 10 Hz, varying amplitudes) was embedded inside the model to simulate neuronal activity. When the dipole source was activated, surface potential was measured via electrodes attached on the top surface of the model and raw data were recorded for signal analysis. Results show that the dipole source activity positioned at 66 mm depth in the model, equivalent to the depth of deep brain structures, is clearly observed from surface potential recordings. Therefore, it is highly possible that deep brain activity could be observed from EEG recordings and deep brain activity could be measured using the noninvasive scalp EEG technique.
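The depth-attenuation effect described above can be illustrated with the textbook forward model for a current dipole in an infinite homogeneous conductor. This is a deliberate simplification of the paper's three-layer model (layered conductivities are ignored), and the dipole moment, depths, and conductivity value are illustrative assumptions:

```python
import numpy as np

def dipole_potential(p, r0, r, sigma=0.33):
    """Electric potential of a current dipole in an infinite homogeneous
    conductor, a first-order stand-in for the layered head model:
        V(r) = p . (r - r0) / (4 pi sigma |r - r0|^3)
    p: dipole moment (A*m), r0: source location (m), r: field point (m),
    sigma: conductivity (S/m).
    """
    d = np.asarray(r, float) - np.asarray(r0, float)
    return float(np.dot(p, d) / (4.0 * np.pi * sigma * np.dot(d, d) ** 1.5))

# A deeper source produces a weaker potential at the same surface electrode
electrode = [0.0, 0.0, 0.09]               # 9 cm above model center
v_shallow = dipole_potential([0, 0, 1e-8], [0, 0, 0.05], electrode)
v_deep    = dipole_potential([0, 0, 1e-8], [0, 0, 0.02], electrode)
```

The 1/r^2 falloff of the dipole potential is exactly why deeper sources yield noisier scalp recordings, and why demonstrating detectability at 66 mm depth is the crux of the experiment.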
NASA Astrophysics Data System (ADS)
Horowitz, F. G.; Gaede, O.
2014-12-01
Wavelet multiscale edge analysis of potential fields (a.k.a. "worms") has been known since Moreau et al. (1997) and was independently derived by Hornby et al. (1999). The technique is useful for producing a scale-explicit overview of the structures beneath a gravity or magnetic survey, including establishing the location and estimating the attitude of surface features, as well as incorporating information about the geometric class (point, line, surface, volume, fractal) of the underlying sources — in a fashion much like traditional structural indices from Euler solutions albeit with better areal coverage. Hornby et al. (2002) show that worms form the locally highest concentration of horizontal edges of a given strike — which in conjunction with the results from Mallat and Zhong (1992) induces a (non-unique!) inversion where the worms are physically interpretable as lateral boundaries in a source distribution that produces a close approximation of the observed potential field. The technique has enjoyed widespread adoption and success in the Australian mineral exploration community — including "ground truth" via successfully drilling structures indicated by the worms. Unfortunately, to our knowledge, all implementations of the code to calculate the worms/multiscale edges (including Horowitz' original research code) are either part of commercial software packages, or have copyright restrictions that impede the use of the technique by the wider community. The technique is completely described mathematically in Hornby et al. (1999) along with some later publications. This enables us to re-implement from scratch the code required to calculate and visualize the worms. We are freely releasing the results under an (open source) BSD two-clause software license. A git repository is available at
Behaviour and fluxes of natural radionuclides in the production process of a phosphoric acid plant.
Bolívar, J P; Martín, J E; García-Tenorio, R; Pérez-Moreno, J P; Mas, J L
2009-02-01
In recent years there has been an increasing awareness of the occupational and public hazards of the radiological impact of non-nuclear industries which process materials containing naturally occurring radionuclides. These include the industries devoted to the production of phosphoric acid by treating sedimentary phosphate rocks enriched in radionuclides from the uranium series. With the aim of evaluating the radiological impact of a phosphoric acid factory located in south-western Spain, the distribution and levels of radionuclides in the materials involved in its production process have been analysed. In this way, it is possible to assess the flows of radionuclides at each step and to locate those points where a possible radionuclide accumulation could be produced. A set of samples collected along the whole production process were analysed to determine their radionuclide content by both alpha-particle and gamma spectrometry techniques. The radionuclide fractionation steps and enrichment sources have been located, allowing the establishment of their mass (activity) balances per year.
40 CFR Table 3 to Subpart Zzzz of... - Subsequent Performance Tests
Code of Federal Regulations, 2011 CFR
2011-07-01
... reconstructed 2SLB stationary RICE with a brake horsepower > 500 located at major sources; new or reconstructed 4SLB stationary RICE with a brake horsepower ≥ 250 located at major sources; and new or reconstructed CI stationary RICE with a brake horsepower > 500 located at major sources Reduce CO emissions and not...
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to get the final estimated image of the source location. We test the proposed method on complex synthetic and field datasets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location, and a more robust estimation with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
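Step (1) of the workflow above can be sketched as a frequency-domain trace-by-trace deconvolution. The water-level regularization used here is a common stabilization choice and an assumption on our part (the paper does not specify one); the toy two-trace gather simply shows the unknown excitation time dropping out, leaving only the relative delay:

```python
import numpy as np

def deconvolve_gather(gather, master_idx=0, eps=1e-3):
    """Build a virtual gather by deconvolving the master trace from every
    trace, removing the unknown source excitation time.
    gather: (n_traces, n_samples) array.
    eps: water-level fraction stabilizing the spectral division (assumed).
    """
    G = np.fft.rfft(gather, axis=1)
    M = G[master_idx]
    denom = np.abs(M) ** 2 + eps * np.max(np.abs(M)) ** 2
    virtual = G * np.conj(M) / denom      # U_i * conj(U_m) / |U_m|^2
    return np.fft.irfft(virtual, n=gather.shape[1], axis=1)

# Toy check: two traces that are time-shifted copies of one wavelet
n = 256
t = np.arange(n)
wavelet = np.exp(-0.5 * ((t - 40) / 3.0) ** 2)
g = np.vstack([np.roll(wavelet, 0), np.roll(wavelet, 25)])
virt = deconvolve_gather(g)
lag = int(np.argmax(virt[1]))   # peak lag equals the 25-sample relative delay
```

Dividing by the master-trace spectrum (rather than multiplying by its conjugate, as in cross-correlation) is what removes the squared wavelet term and sharpens the migrated image.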
Brady's Geothermal Field - March 2016 Vibroseis SEG-Y Files and UTM Locations
Kurt Feigl
2016-03-31
PoroTomo March 2016 (Task 6.4) Updated vibroseis source locations with UTM locations. Supersedes gdr.openei.org/submissions/824. Updated vibroseis source location data for Stages 1-4, PoroTomo March 2016. This revision includes source point locations in UTM format (meters) for all four Stages of active source acquisition. Vibroseis sweep data were collected on a Signature Recorder unit (mfr Seismic Source) mounted in the vibroseis cab during the March 2016 PoroTomo active seismic survey Stages 1 to 4. Each sweep generated a GPS timed SEG-Y file with 4 input channels and a 20 second record length. Ch1 = pilot sweep, Ch2 = accelerometer output from the vibe's mass, Ch3 = accel output from the baseplate, and Ch4 = weighted sum of the accelerometer outputs. SEG-Y files are available via the links below.
Noise Source Identification in a Reverberant Field Using Spherical Beamforming
NASA Astrophysics Data System (ADS)
Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang
Identification of noise sources, their locations and strengths, has attracted great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of that coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to the exterior sound field, reflections are added to the sound field. Therefore, the source location estimated by conventional methods may exhibit unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.
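The conventional free-field approach that the paper says breaks down in reverberant rooms can be sketched as a delay-and-sum beamformer: for each candidate source position, each microphone signal is advanced by its free-field propagation delay and the coherent output power is measured. The 1-D array geometry, sample rate, and impulse source are illustrative assumptions:

```python
import numpy as np

def delay_and_sum_power(signals, fs, mic_x, candidates_x, c=343.0):
    """Free-field delay-and-sum beamformer.
    signals: (n_mics, n_samples); mic_x, candidates_x: 1-D positions (m);
    fs: sample rate (Hz); c: speed of sound (m/s).
    Returns the beamformer output power at each candidate position.
    """
    powers = []
    for xs in candidates_x:
        delays = np.abs(np.asarray(mic_x) - xs) / c    # seconds
        shifts = np.round(delays * fs).astype(int)     # samples
        aligned = [np.roll(s, -k) for s, k in zip(signals, shifts)]
        powers.append(np.sum(np.mean(aligned, axis=0) ** 2))
    return np.array(powers)

# Toy test: impulse source at x = 2 m, three mics on a line, no reflections
fs, c = 48000, 343.0
mic_x = [0.0, 1.0, 3.0]
src_x = 2.0
n = 1024
signals = np.zeros((3, n))
for i, mx in enumerate(mic_x):
    signals[i, int(round(abs(mx - src_x) / c * fs))] = 1.0
cand = np.linspace(0.0, 4.0, 81)
best = cand[int(np.argmax(delay_and_sum_power(signals, fs, mic_x, cand)))]
```

In a reverberant room each wall reflection adds a coherent image source, so the power map grows spurious peaks; that failure mode is what motivates the paper's reverberation-aware method.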
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, T.R.; Lauren, D.J.; Dimitry, J.A.
1995-12-31
A bioaccumulation study was conducted following a release of Fuel Oil #2 into Sugarland Run, a small northern Virginia stream. Caged clams (Corbicula sp.) were placed in 3 downstream locations and 2 upstream reference areas for an exposure period of approximately 28 days. In addition, resident clams from the Potomac River were sampled at the start of the study and at 4 and 8 weeks. Chemical fingerprinting techniques were employed to identify spill-related polycyclic aromatic hydrocarbons (PAHs) and to differentiate these compounds from background sources of contamination. The greatest concentrations of spill-related PAHs (2- and 3-ring compounds) were measured in clams placed immediately downstream of the spill site, and tissue concentrations systematically decreased with distance from the spill site. PAHs that were not related to Fuel Oil #2 were found in all clams and accounted for up to 90% of the total body burden at downstream locations. Furthermore, the highest concentrations of 4-, 5-, and 6-ring PAHs were found at the upstream reference location, indicating an important source of PAHs into the environment. Body burdens measured in this study were compared to ambient concentrations reported for bivalves from a variety of environments. Tissue concentrations were also compared to concentrations that have been reported to cause adverse biological effects.
NASA Astrophysics Data System (ADS)
Clark, D.
2012-12-01
In the future, acquisition of magnetic gradient tensor data is likely to become routine. New methods developed for analysis of magnetic gradient tensor data can also be applied to high quality conventional TMI surveys that have been processed using Fourier filtering techniques, or otherwise, to calculate magnetic vector and tensor components. This approach is, in fact, the only practical way at present to analyze vector component data, as measurements of vector components are seriously afflicted by motion noise, which is not as serious a problem for gradient components. In many circumstances, an optimal approach to extracting maximum information from magnetic surveys would be to combine analysis of measured gradient tensor data with vector components calculated from TMI measurements. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for a number of elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, horizontal line current and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalized source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets and contacts, and is independent of magnetization direction for these sources (and only very weakly dependent on magnetization direction in general). In combination the NSS and its vector gradient enable estimation of the Euler structural index, thereby constraining source geometry, and determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. 
Once source locations are determined, information on source magnetizations can be obtained by simple linear inversion of measured or calculated vector and/or tensor data. Inversions based on the vector gradient of the NSS over the Tallawang magnetite deposit in central New South Wales obtained good agreement between the inferred geometry of the tabular magnetite skarn body and drill hole intersections. Inverted magnetizations are consistent with magnetic property measurements on drill core samples from this deposit. Similarly, inversions of calculated tensor data over the Mount Leyshon gold-mineralized porphyry system in Queensland yield good estimates of the centroid location, total magnetic moment and magnetization direction of the magnetite-bearing potassic alteration zone that are consistent with geological and petrophysical information.
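The NSS calculation described above can be sketched numerically. We assume the eigenvalue form commonly used in the gradient-tensor literature, mu = sqrt(-lambda2^2 - lambda1*lambda3) for eigenvalues lambda1 >= lambda2 >= lambda3, since the abstract does not spell the formula out; the dipole depth and oblique magnetization are illustrative. The sketch checks the key claim that the NSS peaks directly over a compact source regardless of magnetization direction:

```python
import numpy as np

MU0_4PI = 1e-7  # mu0 / (4 pi)

def dipole_B(r, m):
    """Magnetic field of a point dipole with moment m at the origin."""
    r = np.asarray(r, float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_4PI * (3.0 * rhat * np.dot(m, rhat) - m) / rn ** 3

def gradient_tensor(r, m, h=0.01):
    """Magnetic gradient tensor dB_i/dx_j by central finite differences."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3); dr[j] = h
        G[:, j] = (dipole_B(r + dr, m) - dipole_B(r - dr, m)) / (2 * h)
    return G

def nss(G):
    """Normalized source strength from the ordered tensor eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(0.5 * (G + G.T)))[::-1]
    return np.sqrt(max(0.0, -lam[1] ** 2 - lam[0] * lam[2]))

# Dipole buried 50 m below the origin with an oblique magnetization;
# the NSS profile along a surface line should still peak at x = 0.
src = np.array([0.0, 0.0, -50.0])
m = np.array([3.0, 1.0, 2.0]) * 1e6
xs = np.linspace(-100.0, 100.0, 41)
profile = [nss(gradient_tensor(np.array([x, 0.0, 0.0]) - src, m))
           for x in xs]
peak_x = xs[int(np.argmax(profile))]
```

The peak landing over the source despite the oblique moment is the rotational-invariance property that makes the NSS attractive for locating remanently magnetized bodies.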
Global Disease Monitoring and Forecasting with Wikipedia
Generous, Nicholas; Fairchild, Geoffrey; Deshpande, Alina; Del Valle, Sara Y.; Priedhorsky, Reid
2014-01-01
Infectious disease is a leading threat to public health, economic stability, and other key social structures. Efforts to mitigate these impacts depend on accurate and timely monitoring to measure the risk and progress of disease. Traditional, biologically-focused monitoring techniques are accurate but costly and slow; in response, new techniques based on social internet data, such as social media and search queries, are emerging. These efforts are promising, but important challenges in the areas of scientific peer review, breadth of diseases and countries, and forecasting hamper their operational usefulness. We examine a freely available, open data source for this use: access logs from the online encyclopedia Wikipedia. Using linear models, language as a proxy for location, and a systematic yet simple article selection procedure, we tested 14 location-disease combinations and demonstrate that these data feasibly support an approach that overcomes these challenges. Specifically, our proof-of-concept yields models with r2 up to 0.92, forecasting value up to the 28 days tested, and several pairs of models similar enough to suggest that transferring models from one location to another without re-training is feasible. Based on these preliminary results, we close with a research agenda designed to overcome these challenges and produce a disease monitoring and forecasting system that is significantly more effective, robust, and globally comprehensive than the current state of the art. PMID:25392913
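The "linear models" step above can be sketched as ordinary least squares regressing official case counts on article access counts, with r^2 computed on the fit. The synthetic page-view data, function name, and parameter choices below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def fit_proxy_model(page_views, case_counts):
    """Fit a linear proxy model: case counts regressed on Wikipedia
    article access counts (with an intercept).
    page_views: (n_weeks, n_articles); case_counts: (n_weeks,).
    Returns (coefficients, r2).
    """
    X = np.column_stack([np.ones(len(case_counts)), page_views])
    coef, *_ = np.linalg.lstsq(X, case_counts, rcond=None)
    pred = X @ coef
    ss_res = np.sum((case_counts - pred) ** 2)
    ss_tot = np.sum((case_counts - case_counts.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Synthetic illustration: one article whose views track incidence,
# one irrelevant article; the relevant signal drives a high r^2.
rng = np.random.default_rng(0)
weeks = 60
incidence = 100 + 50 * np.sin(np.linspace(0, 4 * np.pi, weeks))
views = np.column_stack([2.0 * incidence + rng.normal(0, 5, weeks),
                         rng.normal(500, 20, weeks)])
_, r2 = fit_proxy_model(views, incidence)
```

Shifting the view matrix forward in time relative to the case counts turns the same regression into the forecasting variant evaluated in the paper.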
Global disease monitoring and forecasting with Wikipedia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Generous, Nicholas; Fairchild, Geoffrey; Deshpande, Alina
Infectious disease is a leading threat to public health, economic stability, and other key social structures. Efforts to mitigate these impacts depend on accurate and timely monitoring to measure the risk and progress of disease. Traditional, biologically-focused monitoring techniques are accurate but costly and slow; in response, new techniques based on social internet data, such as social media and search queries, are emerging. These efforts are promising, but important challenges in the areas of scientific peer review, breadth of diseases and countries, and forecasting hamper their operational usefulness. We examine a freely available, open data source for this use: access logs from the online encyclopedia Wikipedia. Using linear models, language as a proxy for location, and a systematic yet simple article selection procedure, we tested 14 location-disease combinations and demonstrate that these data feasibly support an approach that overcomes these challenges. Specifically, our proof-of-concept yields models with r2 up to 0.92, forecasting value up to the 28 days tested, and several pairs of models similar enough to suggest that transferring models from one location to another without re-training is feasible. Based on these preliminary results, we close with a research agenda designed to overcome these challenges and produce a disease monitoring and forecasting system that is significantly more effective, robust, and globally comprehensive than the current state of the art.
Global disease monitoring and forecasting with Wikipedia.
Generous, Nicholas; Fairchild, Geoffrey; Deshpande, Alina; Del Valle, Sara Y; Priedhorsky, Reid
2014-11-01
Infectious disease is a leading threat to public health, economic stability, and other key social structures. Efforts to mitigate these impacts depend on accurate and timely monitoring to measure the risk and progress of disease. Traditional, biologically-focused monitoring techniques are accurate but costly and slow; in response, new techniques based on social internet data, such as social media and search queries, are emerging. These efforts are promising, but important challenges in the areas of scientific peer review, breadth of diseases and countries, and forecasting hamper their operational usefulness. We examine a freely available, open data source for this use: access logs from the online encyclopedia Wikipedia. Using linear models, language as a proxy for location, and a systematic yet simple article selection procedure, we tested 14 location-disease combinations and demonstrate that these data feasibly support an approach that overcomes these challenges. Specifically, our proof-of-concept yields models with r2 up to 0.92, forecasting value up to the 28 days tested, and several pairs of models similar enough to suggest that transferring models from one location to another without re-training is feasible. Based on these preliminary results, we close with a research agenda designed to overcome these challenges and produce a disease monitoring and forecasting system that is significantly more effective, robust, and globally comprehensive than the current state of the art.
Global disease monitoring and forecasting with Wikipedia
Generous, Nicholas; Fairchild, Geoffrey; Deshpande, Alina; ...
2014-11-13
Infectious disease is a leading threat to public health, economic stability, and other key social structures. Efforts to mitigate these impacts depend on accurate and timely monitoring to measure the risk and progress of disease. Traditional, biologically-focused monitoring techniques are accurate but costly and slow; in response, new techniques based on social internet data, such as social media and search queries, are emerging. These efforts are promising, but important challenges in the areas of scientific peer review, breadth of diseases and countries, and forecasting hamper their operational usefulness. We examine a freely available, open data source for this use: access logs from the online encyclopedia Wikipedia. Using linear models, language as a proxy for location, and a systematic yet simple article selection procedure, we tested 14 location-disease combinations and demonstrate that these data feasibly support an approach that overcomes these challenges. Specifically, our proof-of-concept yields models with r2 up to 0.92, forecasting value up to the 28 days tested, and several pairs of models similar enough to suggest that transferring models from one location to another without re-training is feasible. Based on these preliminary results, we close with a research agenda designed to overcome these challenges and produce a disease monitoring and forecasting system that is significantly more effective, robust, and globally comprehensive than the current state of the art.
Lin, Risa J; Jaeger, Dieter
2011-05-01
In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs control the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white current noise to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in the spike pattern generation between focal or distributed input in this cell type even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types.
The rush to drill for natural gas: a public health cautionary tale.
Finkel, Madelon L; Law, Adam
2011-05-01
Efforts to identify alternative sources of energy have focused on extracting natural gas from vast shale deposits. The Marcellus Shale, located in western New York, Pennsylvania, and Ohio, is estimated to contain enough natural gas to supply the United States for the next 45 years. New drilling technology, namely horizontal drilling with high-volume hydraulic fracturing of shale (fracking), has made gas extraction much more economically feasible. However, this technique poses a threat to the environment and to the public's health. There is evidence that many of the chemicals used in fracking can damage the lungs, liver, kidneys, blood, and brain. We discuss the controversial technique of fracking and raise the issue of how to balance the need for energy with the protection of the public's health.
[Infrared spectroscopy based on quantum cascade lasers].
Wen, Zhong-Quan; Chen, Gang; Peng, Chen; Yuan, Wei-Qing
2013-04-01
Quantum cascade lasers (QCLs) are promising infrared coherent sources. Thanks to quantum theory and band-gap engineering, QCLs can access wavelengths in the range from 3 to 100 μm. Since the fingerprint spectra of most gases are located in the mid-infrared range, mid-infrared quantum cascade laser based gas sensing has become a research focus worldwide because of its high power, narrow linewidth and fast scanning. Recent progress in QCL technology has led to a great improvement in laser output power and efficiency, which has stimulated fast development in infrared laser spectroscopy. The present paper gives a broad review of QCL-based spectroscopy techniques according to their working principles. A discussion of their applications in gas sensing and explosive detection is also given at the end of the paper.
NASA Astrophysics Data System (ADS)
Torres Astorga, Romina; Velasco, Hugo; Dercon, Gerd; Mabit, Lionel
2017-04-01
Soil erosion and associated sediment transportation and deposition processes are key environmental problems in Central Argentinian watersheds. Several land use practices - such as intensive grazing and crop cultivation - are considered likely to significantly increase land degradation and soil/sediment erosion processes. Characterized by highly erodible soils, the sub-catchment Estancia Grande (12.3 km2), located 23 km north-east of San Luis, has been investigated using sediment source fingerprinting techniques to identify critical hot spots of land degradation. The authors created 4 artificial mixtures using known quantities of the most representative sediment sources of the studied catchment. The first mixture was made using four rotation crop soil sources. The second and the third mixtures were created using different proportions of 4 different soil sources, including soils from a feedlot, a rotation crop, a walnut forest and a grazing soil. The last tested mixture contained the same sources as the third mixture but with the addition of a fifth soil source (i.e. a native bank soil). The Energy Dispersive X-Ray Fluorescence (EDXRF) analytical technique was used to reconstruct the source proportions of the original mixtures. Besides using traditional methods of fingerprint selection, such as the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA), the authors used the actual source proportions in the mixtures and selected, from the subset of tracers that passed the statistical tests, specific elemental tracers that were in agreement with the expected mixture contents. The selection process ended with testing in a mixing model all possible combinations of the reduced number of tracers obtained. Alkaline earth metals, especially strontium (Sr) and barium (Ba), were identified as the most effective fingerprints and provided a reduced Mean Absolute Error (MAE) of approximately 2% when reconstructing the 4 artificial mixtures.
This study demonstrates that the EDXRF fingerprinting approach performed very well in reconstructing our original mixtures especially in identifying and quantifying the contribution of the 4 rotation crop soil sources in the first mixture.
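The mixing-model step described above can be illustrated with a toy example. The tracer concentrations (Sr, Ba per source) and the true proportions below are invented for the sketch, not the study's measurements; the sum-to-one constraint is imposed by appending an extra equation to the least-squares system.

```python
import numpy as np

# Toy sediment-unmixing sketch: a mixture's tracer concentrations are a
# proportion-weighted average of the source concentrations; proportions
# are recovered by least squares with a sum-to-one constraint.
sources = np.array([[120., 300.],   # [Sr, Ba] for source A (invented values)
                    [200., 150.],   # source B
                    [ 80., 500.]])  # source C
p_true = np.array([0.5, 0.3, 0.2])
mixture = p_true @ sources          # what EDXRF would measure on the mixture

A = np.vstack([sources.T, np.ones(3)])   # tracer equations + sum-to-one row
b = np.append(mixture, 1.0)
p_est, *_ = np.linalg.lstsq(A, b, rcond=None)
mae = np.abs(p_est - p_true).mean()
print(p_est, mae)
```

With more tracers than sources the system is overdetermined and the MAE quantifies reconstruction quality, as reported in the study.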
NASA Astrophysics Data System (ADS)
Magiera, Tadeusz; Szuszkiewisz, Marcin; Szuszkiewicz, Maria; Żogała, Bogdan
2017-04-01
The primary goal of this work was to distinguish between soil pollution from long-range and local transport of atmospheric pollutants using soil magnetometry in combination with geochemical analyses and precise delineation of polluted soil layers by using integrated magnetic (surface susceptibility, gradiometric measurement) and other geophysical techniques (conductivity and electrical resistivity tomography). The study area was located in the Izery region of Poland (within the "Black Triangle" region, which is the nickname for one of Europe's most polluted areas, where Germany, Poland and the Czech Republic meet). The study site was located in the Forest Glade, where the historical local pollution source (a glass factory) was active from the end of the 18th century until the end of the 19th century. The magnetic signal here was the combination of long-range transport of magnetic particles, local deposition, and anthropogenic layers containing ashes and slags and partly comprising the subsoil of the modern soil. Application of the set of different geophysical techniques enabled the precise location of these layers. The effect of long-range pollution transport was observed on a neighboring hill (Granicznik), of which the western, northwestern and southwestern parts of the slope were exposed to the transport of atmospheric pollutants from the Czech Republic, Germany and Poland. Using soil magnetometry, it was possible to discriminate between long-range transport of atmospheric pollutants and anthropogenic pollution related to the former glassworks located in the Forest Glade. The magnetic susceptibility values (κ) as well as the number of "hot-spots" of volume magnetic susceptibility are significantly larger in the Forest Glade than on Granicznik Hill, where κ is < 20 × 10-5 SI units. Generally, the western part of Granicznik Hill is characterized by about two times higher κ values than the southeastern part.
This trend is attributed to the fact that the western part was subjected mostly to the long-range pollution originating from lignite power plants along the Polish border, while the southeastern part of the hill was shielded by a crag-and-tail formation. Also, the set of chemical elements connected with magnetic particles from long-range transport observed on the western slope and the top of Granicznik Hill (As, Cd, Hg, In, Mo, Sb, Se and U) differs from that observed in the Forest Glade, which is connected with the local pollution source (Cu, Nb, Ni, Pb, Sn and Zn).
Synchronization Tomography: Modeling and Exploring Complex Brain Dynamics
NASA Astrophysics Data System (ADS)
Fieseler, Thomas
2002-03-01
Phase synchronization (PS) plays an important role both under physiological and pathological conditions. With standard averaging techniques of MEG data, it is difficult to reliably detect cortico-cortical and cortico-muscular PS processes that are not time-locked to an external stimulus. For this reason, novel synchronization analysis techniques were developed and directly applied to MEG signals. Of course, due to the lack of an inverse modeling (i.e. source localization), the spatial resolution of this approach was limited. To detect and localize cerebral PS, we here present the synchronization tomography (ST): For this, we first estimate the cerebral current source density by means of the magnetic field tomography (MFT). We then apply the single-run PS analysis to the current source density in each voxel of the reconstruction space. In this way we study simulated PS, voxel by voxel in order to determine the spatio-temporal resolution of the ST. To this end different generators of ongoing rhythmic cerebral activity are simulated by current dipoles at different locations and directions, which are modeled by slightly detuned chaotic oscillators. MEG signals for these generators are simulated for a spherical head model and a whole-head MEG system. MFT current density solutions are calculated from these simulated signals within a hemispherical source space. We compare the spatial resolution of the ST with that of the MFT. Our results show that adjacent sources which are indistinguishable for the MFT, can nevertheless be separated with the ST, provided they are not strongly phase synchronized. This clearly demonstrates the potential of combining spatial information (i.e. source localization) with temporal information for the anatomical localization of phase synchronization in the human brain.
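The phase-synchronization measure underlying the analysis above can be sketched with synthetic phase time series (these are stand-ins, not MEG data): the phase-locking value (PLV) is the modulus of the time-averaged complex phase difference, near 1 for locked oscillators and near 0 for independent ones.

```python
import numpy as np

# PLV sketch on synthetic instantaneous phases (in a real analysis the
# phases would come from e.g. a Hilbert transform of band-passed signals).
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 5000)
phi1 = 2 * np.pi * 10 * t + np.cumsum(rng.normal(0, 0.01, t.size))
phi2_locked = phi1 + 0.3 + rng.normal(0, 0.1, t.size)   # synchronized, jittered
phi2_free = 2 * np.pi * 11.3 * t + np.cumsum(rng.normal(0, 0.01, t.size))

def plv(pa, pb):
    """Phase-locking value: |<exp(i*(pa - pb))>| over time."""
    return np.abs(np.mean(np.exp(1j * (pa - pb))))

print(plv(phi1, phi2_locked))  # close to 1
print(plv(phi1, phi2_free))    # close to 0
```

In the synchronization tomography, this quantity is evaluated voxel by voxel on the reconstructed current source density.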
CMP reflection imaging via interferometry of distributed subsurface sources
NASA Astrophysics Data System (ADS)
Kim, D.; Brown, L. D.; Quiros, D. A.
2015-12-01
The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of subsurface sources can be processed to produce virtual shot gathers, which in turn yield CMP gathers that can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved to be problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.
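The interferometric principle exploited above can be shown in a toy 1-D geometry (not the authors' processing; velocity, spacing and source positions are invented): cross-correlating two receivers' recordings of many sources with unknown locations and times, then stacking, yields a virtual trace that peaks at the inter-receiver travel time.

```python
import numpy as np

# Toy interferometry: impulsive arrivals from 100 sources on one side of
# an inline receiver pair; the stacked cross-correlation peaks at the
# travel time between the receivers, with no source timing needed.
v, dt, n = 2.0, 0.02, 1024              # km/s, s, samples
xa, xb = 5.0, 8.0                       # receiver positions (km)
rng = np.random.default_rng(1)
stack = np.zeros(2 * n - 1)
for xs in rng.uniform(-20.0, 0.0, 100): # sources on one side of both receivers
    ra, rb = np.zeros(n), np.zeros(n)
    ra[round(abs(xa - xs) / v / dt)] = 1.0   # arrival at receiver A
    rb[round(abs(xb - xs) / v / dt)] = 1.0   # arrival at receiver B
    stack += np.correlate(rb, ra, "full")
lag = (np.argmax(stack) - (n - 1)) * dt
print(lag)   # ~ (xb - xa) / v = 1.5 s
```

Repeating this for all receiver pairs produces the virtual shot gathers that are then sorted into CMP gathers.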
A double-correlation tremor-location method
NASA Astrophysics Data System (ADS)
Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur
2017-02-01
A double-correlation method is introduced to locate tremor sources, based on stacks of complex, doubly-correlated tremor records of multiple triplets of seismographs back projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of the moduli is a robust measure of energy radiated from a point source or point sources, even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single-correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations, which are controlled by the station geometry, the finite frequency of the signal, the quality of the velocity information used, and the noise level. Both random noise and signal or noise correlated at time shifts that are inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can also, in principle, be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
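A minimal sketch of the building block of this approach, plain single-correlation back projection, which the paper's double-correlation stack improves on (geometry, velocity and data below are synthetic assumptions): for each grid point, pairwise correlations are evaluated at the differential travel times that point predicts, and the stack peaks at the true source.

```python
import numpy as np

# Single-correlation back projection of a sustained tremor signal
# recorded at 3 stations onto a 2-D search grid.
rng = np.random.default_rng(2)
v, dt, n = 3.0, 0.01, 4000                        # km/s, s, samples
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 9.0]])
src = np.array([4.0, 3.0])                        # true tremor source (km)
sig = rng.normal(size=n)                          # sustained tremor waveform
traces = []
for st in stations:
    delay = int(np.linalg.norm(st - src) / v / dt)
    tr = np.zeros(n)
    tr[delay:] = sig[:n - delay]                  # delayed copy at each station
    traces.append(tr)

best, best_val = None, -np.inf
for x in np.arange(0.0, 10.5, 0.5):
    for y in np.arange(0.0, 10.5, 0.5):
        tt = [np.linalg.norm(st - np.array([x, y])) / v for st in stations]
        val = 0.0
        for i in range(3):
            for j in range(i + 1, 3):
                k = int(round((tt[j] - tt[i]) / dt))   # predicted lag (samples)
                if k >= 0:
                    val += np.dot(traces[i][:n - k], traces[j][k:])
                else:
                    val += np.dot(traces[i][-k:], traces[j][:n + k])
        if val > best_val:
            best, best_val = (x, y), val
print(best)   # grid point nearest the true source
```

The paper's method additionally correlates the correlations over station triplets and stacks complex moduli, which sharpens the peak relative to this single-correlation map.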
NASA Astrophysics Data System (ADS)
Dimakis, Nikolaos; Soldatos, John; Polymenakos, Lazaros; Sturm, Janienke; Neumann, Joachim; Casas, Josep R.
The CHIL Memory Jog service focuses on facilitating the collaboration of participants in meetings, lectures, presentations, and other human interactive events occurring in indoor CHIL spaces. It exploits the whole set of perceptual components that have been developed by the CHIL Consortium partners (e.g., person tracking, face identification, audio source localization, etc.) along with a wide range of actuating devices such as projectors, displays, targeted audio devices, speakers, etc. The underlying set of perceptual components provides a constant flow of elementary contextual information, such as “person at location x0,y0” or “speech at location x0,y0”, information that alone is not of significant use. However, the CHIL Memory Jog service is accompanied by powerful situation identification techniques that fuse all the incoming information and create complex states that drive the actuating logic.
Understanding human infectious Cryptosporidium risk in drinking water supply catchments.
Swaffer, Brooke; Abbott, Hayley; King, Brendon; van der Linden, Leon; Monis, Paul
2018-07-01
Treating drinking water appropriately depends, in part, on the robustness of source water quality risk assessments; however, quantifying the proportion of infectious, human-pathogenic Cryptosporidium oocysts remains a significant challenge. We analysed 962 source water samples across nine locations to profile the occurrence, rate and timing of infectious, human-pathogenic Cryptosporidium in surface waters entering drinking water reservoirs during rainfall-runoff conditions. At the catchment level, average infectivity over the four-year study period reached 18%; however, most locations averaged <5%. The maximum recorded infectivity fraction within a single rainfall-runoff event was 65.4%, and was dominated by C. parvum. Twenty-two Cryptosporidium species and genotypes were identified using PCR-based molecular techniques, the most common being C. parvum, detected in 23% of water samples. Associations of land-use and livestock stocking characteristics with Cryptosporidium were determined using a linear mixed-effects model. The concentration of pathogens in water was significantly influenced by flow and by the dominance of land use by commercial grazing properties (as opposed to lifestyle properties) in the catchment (p < 0.01). Inclusion of measured infectivity and human pathogenicity data into a quantitative microbial risk assessment (QMRA) could reduce the source water treatment requirements by up to 2.67 log removal values, depending on the catchment, and demonstrated the potential benefit of collating such data for QMRAs.
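The arithmetic behind the log-removal-value (LRV) credit can be made explicit with a back-of-envelope sketch (not the paper's QMRA; the fractions below are illustrative): if only a fraction f of detected oocysts are infectious and human-pathogenic, the concentration entering the risk calculation scales by f, so the required treatment reduces by -log10(f) LRV.

```python
import math

# LRV credit from counting only infectious, human-pathogenic oocysts.
def lrv_credit(infectious_fraction):
    """LRV reduction when concentration scales by infectious_fraction."""
    return -math.log10(infectious_fraction)

print(round(lrv_credit(0.05), 2))     # 5% infectious -> ~1.3 LRV credit
print(round(lrv_credit(0.00214), 2))  # illustrative fraction -> ~2.67 LRV credit
```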
Stochastic analysis of concentration field in a wake region.
Yassin, Mohamed F; Elmi, Abdirashid A
2011-02-01
Identifying the geographic locations in urban areas from which air pollutants enter the atmosphere provides some of the most important information needed to develop effective mitigation strategies for pollution control. Stochastic analysis is a powerful tool that can be used for estimating concentration fluctuations in plume dispersion in the wake region around buildings. Only a few studies have been devoted to evaluating applications of stochastic analysis to pollutant dispersion in urban areas. This study was designed to investigate the concentration fields in the wake region using an obstacle model, namely an isolated building. We measured concentration fluctuations at the plume centerline at various downwind distances from the source and at different heights, sampling at a frequency of 1 kHz. Concentration fields were analyzed stochastically using probability density functions (pdf). Stochastic analysis was performed on the concentration fluctuations and on the pdf of mean concentration, fluctuation intensity, and crosswind mean-plume dispersion. The pdfs of the concentration fluctuation data show significant non-Gaussian behavior. The lognormal distribution appeared to be the best fit to the shape of the concentration distribution measured in the boundary layer. We observed that the plume dispersion pdf near the source was shorter than that far from the source. Our findings suggest that the use of stochastic techniques in complex building environments can be a powerful tool to help understand the distribution and location of air pollutants.
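The lognormal pdf check described above can be sketched on synthetic samples (the parameters are assumptions, not the study's measurements): if concentration fluctuations are lognormal, the log of the samples is Gaussian, and the fitted (mu, sigma) reproduce the observed moments and fluctuation intensity.

```python
import numpy as np

# Fit a lognormal to synthetic plume concentration samples by fitting a
# Gaussian to their logarithms, then check the implied moments.
rng = np.random.default_rng(3)
c = rng.lognormal(mean=-1.0, sigma=0.6, size=50_000)  # assumed samples

mu, sigma = np.log(c).mean(), np.log(c).std()         # lognormal fit
mean_fit = np.exp(mu + sigma**2 / 2)                  # implied mean concentration
intensity = np.sqrt(np.exp(sigma**2) - 1)             # fluctuation intensity (std/mean)
print(mu, sigma, mean_fit, intensity)
```

Comparing `mean_fit` against the sample mean is a quick goodness-of-fit sanity check of the lognormal assumption.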
Glick, S J; Hawkins, W G; King, M A; Penney, B C; Soares, E J; Byrne, C L
1992-01-01
The application of stationary restoration techniques to SPECT images assumes that the modulation transfer function (MTF) of the imaging system is shift invariant. It was hypothesized that using intrinsic attenuation correction (i.e., methods which explicitly invert the exponential Radon transform) would yield a three-dimensional (3-D) MTF which varies less with position within the transverse slices than the combined conjugate view two-dimensional (2-D) MTF varies with depth. Thus the assumption of shift invariance would become less of an approximation for 3-D post- than for 2-D pre-reconstruction restoration filtering. SPECT acquisitions were obtained from point sources located at various positions in three differently shaped, water-filled phantoms. The data were reconstructed with intrinsic attenuation correction, and 3-D MTFs were calculated. Four different intrinsic attenuation correction methods were compared: (1) exponentially weighted backprojection, (2) a modified exponentially weighted backprojection as described by Tanaka et al. [Phys. Med. Biol. 29, 1489-1500 (1984)], (3) a Fourier domain technique as described by Bellini et al. [IEEE Trans. ASSP 27, 213-218 (1979)], and (4) the circular harmonic transform (CHT) method as described by Hawkins et al. [IEEE Trans. Med. Imag. 7, 135-148 (1988)]. The dependence of the 3-D MTF obtained with these methods on point source location within an attenuator, and on the shape of the attenuator, was studied. These 3-D MTFs were compared to: (1) those MTFs obtained with no attenuation correction, and (2) the depth dependence of the arithmetic mean combined conjugate view 2-D MTFs. (ABSTRACT TRUNCATED AT 250 WORDS)
Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions
NASA Astrophysics Data System (ADS)
Gutiérrez, David; Nehorai, Arye; Preissl, Hubert
2005-05-01
Fetal magnetoencephalography (fMEG) is a non-invasive technique where measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus' neural activity. There are a number of aspects related to fMEG modelling that must be addressed, such as the conductor volume, fetal position and orientation, gestation period, etc. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relationship with the localization and position of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem through decoupling of the dipole localization parameters from the source signals. Then, we use this model and the maximum likelihood technique to solve the inverse problem assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxes lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.
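The decoupling idea described above, estimating source signals linearly once a candidate dipole location fixes the kernel matrix, can be sketched with random stand-in lead fields (these replace the ellipsoidal forward model and the SARA sensor geometry, which are not reproduced here):

```python
import numpy as np

# For each candidate dipole location, the kernel (lead-field) matrix L
# maps the 3-component dipole moment time course to the sensors; the
# signal estimate is the pseudoinverse solution, and the location is
# chosen by residual fit, as in a decoupled maximum-likelihood scan.
rng = np.random.default_rng(6)
n_sensors, n_times, n_locs = 151, 200, 40
kernels = rng.normal(size=(n_locs, n_sensors, 3))   # stand-in lead fields
true_loc = 17
s_true = rng.normal(size=(3, n_times))              # dipole moment time courses
b = kernels[true_loc] @ s_true + 0.05 * rng.normal(size=(n_sensors, n_times))

resid = []
for L in kernels:
    s_hat = np.linalg.pinv(L) @ b                   # decoupled signal estimate
    resid.append(np.linalg.norm(b - L @ s_hat))
print(int(np.argmin(resid)))                        # index of best-fitting location
```

The scan is cheap because only the small pseudoinverse depends on location, while the signal estimate falls out linearly.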
NASA Astrophysics Data System (ADS)
Adam, J. M.-C.; Romanowicz, B.
2015-08-01
We have collected a global dataset of several thousand high quality records of PKPdf, PKPbc, PKPbc-diff and PKPab phase arrivals in the distance range [149-178°]. Within this collection, we have identified an energy packet that arrives 5-20 s after the PKPbc (or PKPbc-diff) and represents a phase that is not predicted by 1D reference seismic models. We use array analysis techniques to enhance the signal of these scattered phases and show that they originate along the great-circle path in a consistent range of arrival times and a narrow range of ray parameters. We therefore refer to this scattered energy as the "M" phase. Using the cross-correlation technique to detect and measure the scattered energy arrival times, we compiled a dataset of 1116 records of this M phase. There are no obvious variations with source or station location, nor with the depth of the source. After exploring possible locations for this M phase, we show that its origin is most likely in the vicinity of the inner-core boundary. A tentative model is found that predicts an M-like phase and produces good fits to its travel times as well as those of the main core phases. In this model, the P velocity profile with depth exhibits an increased gradient from about 400 km to 50 km above the ICB (i.e. slightly faster velocities than in AK135 or PREM), and a ∼ 50 km thick lower velocity layer right above the ICB.
Acoustic Manifestations of Natural versus Triggered Lightning
NASA Astrophysics Data System (ADS)
Arechiga, R. O.; Johnson, J. B.; Edens, H. E.; Rison, W.; Thomas, R. J.; Eack, K.; Eastvedt, E. M.; Aulich, G. D.; Trueblood, J.
2010-12-01
Positive leaders are rarely detected by VHF lightning detection systems; positive leader channels are usually outlined only by recoil events. Positive cloud-to-ground (CG) channels are usually not mapped. The goal of this work is to study the types of thunder produced by natural versus triggered lightning and to assess which types of thunder signals have electromagnetic activity detected by the lightning mapping array (LMA). Towards this end we are investigating the lightning detection capabilities of acoustic techniques, and comparing them with the LMA. In a previous study we used array beam forming and time of flight information to locate acoustic sources associated with lightning. Even though there was some mismatch, generally LMA and acoustic techniques saw the same phenomena. To increase the database of acoustic data from lightning, we deployed a network of three infrasound arrays (30 m aperture) during the summer of 2010 (August 3 to present) in the Magdalena mountains of New Mexico, to monitor infrasound (below 20 Hz) and audio range sources due to natural and triggered lightning. The arrays were located at a range of distances (60 to 1400 m) surrounding the triggering site, called the Kiva, used by Langmuir Laboratory to launch rockets. We have continuous acoustic measurements of lightning data from July 20 to September 18 of 2009, and from August 3 to September 1 of 2010. So far, lightning activity around the Kiva was higher during the summer of 2009. We will present acoustic data from several interesting lightning flashes including a comparison between a natural and a triggered one.
NASA Astrophysics Data System (ADS)
Koiter, A. J.; Owens, P. N.; Petticrew, E. L.; Lobb, D. A.
2013-10-01
Sediment fingerprinting is a technique that is increasingly being used to improve the understanding of sediment dynamics within river basins. At present, one of the main limitations of the technique is the ability to link sediment back to its sources, due to the non-conservative nature of many sediment properties. The processes that occur between the sediment source locations and the point of collection downstream are not well understood or quantified, and currently represent a black box in the sediment fingerprinting approach. The literature on sediment fingerprinting tends to assume that there is a direct connection between sources and sinks, while much of the broader environmental sedimentology literature identifies that numerous chemical, biological and physical transformations and alterations can occur as sediment moves through the landscape. The focus of this paper is on the processes that drive particle size and organic matter selectivity and biological, geochemical and physical transformations, and on how understanding these processes can be used to guide sampling protocols, fingerprint selection and data interpretation. The application of statistical approaches without consideration of how unique sediment fingerprints have developed and how robust they are within the environment is a major limitation of many recent studies. This review summarises the current information, identifies areas that need further investigation and provides recommendations for sediment fingerprinting that should be considered for adoption in future studies if the full potential and utility of the approach are to be realised.
NASA Astrophysics Data System (ADS)
Nowak-Lovato, K.
2014-12-01
Seepage from enhanced oil recovery, carbon storage, and natural gas sites can emit trace gases such as carbon dioxide, methane, and hydrogen sulfide. Trace gas emissions at these locations demonstrate unique light stable isotope signatures that provide information to enable source identification of the material. Light stable isotope detection through surface monitoring offers the ability to distinguish between trace gases emitted from sources such as biological systems (fertilizers and wastes), minerals (coal or seams), or liquid organic systems (oil and gas reservoirs). To make light stable isotope measurements, we employ the ultra-sensitive technique of frequency modulation spectroscopy (FMS). FMS is an absorption technique with sensitivity enhancements of approximately 100-1000x over standard absorption spectroscopy, with the advantage of providing stable isotope signature information. We have developed an integrated in situ (point source) system that measures carbon dioxide, methane and hydrogen sulfide with isotopic resolution and enhanced sensitivity. The in situ instrument involves the continuous collection of air and records the stable isotope ratio for the gas being detected. We have included in-line flask collection points to obtain gas samples for validation of isotopic concentrations using our in-house isotope ratio mass spectrometry (IRMS) facility. We present calibration curves for each species addressed above to demonstrate the sensitivity and accuracy of the system. We also show field deployment data demonstrating the capabilities of the system in making live dynamic measurements from an active source.
Limited angle C-arm tomosynthesis reconstruction algorithms
NASA Astrophysics Data System (ADS)
Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying
2015-03-01
In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices passing through it, based on a series of angular projection views. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images over a rotation (±20° angular range) of both the X-ray source and the detector. In this paper, four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. With the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable and can avoid moving patients, it has been investigated for different clinical applications ranging from tumor surgery to interventional radiology. It is therefore very important to evaluate C-arm tomosynthesis for these valuable applications.
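The simplest of the four algorithms, point-by-point back projection, can be sketched in a shift-and-add toy geometry (a parallel-beam simplification with invented dimensions, not the paper's C-arm model): a point at height z shifts laterally by z*tan(theta) in each projection, and backprojecting with the shifts of the correct plane brings it into focus.

```python
import numpy as np

# Shift-and-add back projection (BP) over a limited ±20° angular range.
nx, n_views = 64, 25
angles = np.deg2rad(np.linspace(-20, 20, n_views))
x0, z0 = 32, 10.0                          # point object: lateral index, height
projections = []
for th in angles:
    p = np.zeros(nx)
    p[int(round(x0 + z0 * np.tan(th)))] = 1.0   # laterally shifted projection
    projections.append(p)

def bp_plane(z):
    """Back project all views onto the plane at height z."""
    plane = np.zeros(nx)
    for th, p in zip(angles, projections):
        shift = int(round(z * np.tan(th)))
        plane += np.roll(p, -shift)        # undo that plane's lateral shift
    return plane / n_views

in_focus = bp_plane(z0)                    # peaks sharply at x0
off_focus = bp_plane(0.0)                  # the same point smeared out
print(in_focus.max(), off_focus.max())
```

The residual blur on out-of-focus planes is what FBP, SART and MLEM work to suppress.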
NASA Astrophysics Data System (ADS)
Manousakas, M.; Diapouli, E.; Papaefthymiou, H.; Migliori, A.; Karydas, A. G.; Padilla-Alvarez, R.; Bogovac, M.; Kaiser, R. B.; Jaksic, M.; Bogdanovic-Radovic, I.; Eleftheriadis, K.
2015-04-01
Particulate matter (PM) is an important constituent of atmospheric pollution, especially in areas under the influence of industrial emissions. Megalopolis is a small city of 10,000 inhabitants located in central Peloponnese in close proximity to three coal opencast mines and two lignite-fired power plants. 50 PM10 samples were collected in Megalopolis during the years 2009-11 for elemental and multivariate analysis. For the elemental analysis, PIXE was used as one of the most effective techniques in APM analytical characterization. Altogether, the concentrations of 22 elements (Z = 11-33) were determined, while Black Carbon was also determined for each sample using a reflectometer. Positive Matrix Factorization software (EPA PMF 3.0) was used for source apportionment analysis. The analysis revealed that the major emission sources were soil dust 33% (7.94 ± 0.27 μg/m3), biomass burning 19% (4.43 ± 0.27 μg/m3), road dust 15% (3.63 ± 0.37 μg/m3), power plant emissions 13% (3.01 ± 0.44 μg/m3), traffic 12% (2.82 ± 0.37 μg/m3), and sea spray 8% (1.99 ± 0.41 μg/m3). Wind trajectories suggested that metals associated with emissions from the power plants came mainly from the west and were connected with the locations of the lignite mines in this area. Soil resuspension, road dust and power plant emissions increased during the warm season of the year, while traffic/secondary, sea spray and biomass burning became dominant during the cold season.
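The factorization idea behind this source apportionment can be illustrated with plain non-negative matrix factorization (a simplified stand-in: EPA PMF additionally weights each data value by its measurement uncertainty; the data below are synthetic):

```python
import numpy as np

# NMF with Lee-Seung multiplicative updates: X ~ G @ F, where rows of F
# are source chemical profiles and rows of G are per-sample contributions.
rng = np.random.default_rng(4)
n_samples, n_species, n_sources = 50, 22, 3
G_true = rng.uniform(0, 1, (n_samples, n_sources))    # source contributions
F_true = rng.uniform(0, 1, (n_sources, n_species))    # source profiles
X = G_true @ F_true                                   # observed concentrations

G = rng.uniform(0.1, 1, (n_samples, n_sources))
F = rng.uniform(0.1, 1, (n_sources, n_species))
for _ in range(2000):                                 # multiplicative updates
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(rel_err)   # small residual: X is well explained by 3 sources
```

In practice the recovered profiles are matched to known source signatures (soil, traffic, biomass burning, etc.) to name the factors.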
Measuring and monitoring KIPT Neutron Source Facility Reactivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry; Zhong, Zhaopeng
2015-08-01
Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on developing and constructing a neutron source facility at Kharkov, Ukraine. The facility consists of an accelerator-driven subcritical system. The accelerator has a 100 kW electron beam using 100 MeV electrons. The subcritical assembly has keff less than 0.98. To ensure the safe operation of this neutron source facility, the reactivity of the subcritical core has to be accurately determined and continuously monitored. A technique which combines the area-ratio method and the flux-to-current ratio method is proposed to determine the reactivity of the KIPT subcritical assembly at various conditions. In particular, the area-ratio method can determine the absolute reactivity of the subcritical assembly in units of dollars by performing pulsed-neutron experiments. It provides reference reactivities for the flux-to-current ratio method to track and monitor the reactivity deviations from the reference state while the facility is in other operation modes. Monte Carlo simulations are performed to simulate both methods using the numerical model of the KIPT subcritical assembly. It is found that the reactivities obtained from both the area-ratio method and the flux-to-current ratio method depend spatially on the neutron detector locations and types. Numerical simulations also suggest optimal neutron detector locations to minimize the spatial effects in the flux-to-current ratio method. The spatial correction factors are calculated using Monte Carlo methods for both measuring methods at the selected neutron detector locations. Monte Carlo simulations are also performed to verify the accuracy of the flux-to-current ratio method in monitoring the reactivity swing during a fuel burnup cycle.
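The area-ratio (Sjostrand) method mentioned above infers reactivity in dollars from a pulsed-neutron die-away histogram as rho($) = -A_prompt / A_delayed. A minimal sketch, assuming the delayed-neutron contribution can be treated as a flat background estimated from the tail of the histogram:

```python
import numpy as np

def area_ratio_reactivity(counts, dt, delayed_level=None):
    """Sjostrand area-ratio estimate of subcritical reactivity in dollars.

    counts: pulsed-neutron die-away histogram (counts per time bin of
    width dt).  The delayed-neutron contribution is approximated as a
    flat background (estimated from the last tenth of the histogram if
    not supplied); the prompt area is what remains above it.
    rho($) = -A_prompt / A_delayed.
    """
    counts = np.asarray(counts, dtype=float)
    if delayed_level is None:
        delayed_level = counts[-max(1, len(counts) // 10):].mean()
    a_delayed = delayed_level * len(counts) * dt
    a_prompt = np.clip(counts - delayed_level, 0.0, None).sum() * dt
    return -a_prompt / a_delayed
```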
Acoustical Emission Source Location in Thin Rods Through Wavelet Detail Crosscorrelation
1998-03-01
Naval Postgraduate School, Monterey, California. Thesis by Jerauld, Joseph G.: Acoustical Emission Source Location in Thin Rods Through Wavelet Detail Crosscorrelation. [Abstract fragment:] ...frequency characteristics of Wavelet Analysis. Software implementation now enables the exploration of the Wavelet Transform to identify the time of
Tholkappian, M; Ravisankar, R; Chandrasekaran, A; Jebakumar, J Prince Prakash; Kanagasabapathy, K V; Prasad, M V R; Satapathy, K K
2018-01-01
The concentration of some heavy metals: Al, Ca, K, Fe, Ti, Mg, Mn, V, Cr, Zn, Ni and Co in sediments from Pulicat Lake to Vadanemmeli along the Chennai Coast, Tamil Nadu, has been determined using the EDXRF technique. The mean concentrations of Mg, Al, K, Ca, Ti, Fe, V, Cr, Mn, Co, Ni, and Zn were found to be 1918, 25436, 9832, 9859, 2109, 8209, 41.58, 34.14, 160.80, 2.85, 18.79 and 29.12 mg kg-1, respectively. These mean concentrations do not exceed the world crustal average. The level of pollution attributed to heavy metals was evaluated using several pollution indicators in order to determine anthropogenically derived contamination. The Enrichment Factor (EF), Geoaccumulation Index (Igeo), Contamination Factor (CF) and Pollution Load Index (PLI) were used in evaluating the contamination status of the sediments. Enrichment Factors reveal anthropogenic sources of V, Cr, Ni and Zn. Geoaccumulation Index results reveal that the study area is not contaminated by the heavy metals. Similar results were also obtained using the Pollution Load Index. The results of the pollution indices indicate that most of the locations are not polluted by heavy metals. Multivariate statistical analysis using principal component and clustering techniques was performed to identify the sources of the heavy metals. The results of the statistical procedures indicate that heavy metals in the sediments are mainly of natural origin. This study provides a relatively novel technique for identifying and mapping the distribution of metal pollutants and their sources in sediment.
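The pollution indicators used here have simple closed forms: the Enrichment Factor normalizes an element to a crustal reference element (Fe is a common, assumed choice), and the Geoaccumulation Index compares a measured concentration with 1.5 times its background. A sketch with illustrative background values (not the paper's exact reference data):

```python
import math

# Illustrative crustal background values (mg/kg); assumptions, not the
# reference data used in the study.
CRUST = {"Fe": 47200.0, "Zn": 70.0, "Cr": 100.0, "Ni": 75.0}

def enrichment_factor(sample, element, ref="Fe"):
    """EF = (C_el / C_ref)_sample / (C_el / C_ref)_crust.
    EF near 1 suggests a crustal origin; EF >> 1 suggests an
    anthropogenic contribution on top of the background."""
    return (sample[element] / sample[ref]) / (CRUST[element] / CRUST[ref])

def geoaccumulation_index(c_sample, c_background):
    """Igeo = log2(C_sample / (1.5 * background)); Igeo <= 0 is
    classed as practically uncontaminated."""
    return math.log2(c_sample / (1.5 * c_background))
```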
NASA Astrophysics Data System (ADS)
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and a fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, which is especially important when a seizure occurs. Studies that formalize the relationship between electromagnetic activity in the head and the recorded external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the electrodes, is more difficult because it may not have a unique solution, or the search for a solution may be hampered by a spatial resolution too low to distinguish between sources close to each other. Sources of interest may thus be obscured or go undetected, and standard source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power as a function of location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed as an improvement to the MUSIC method. For this purpose, a priori information about the sparsity of the signal must be specified. The problem is formulated and solved using a regularization method such as Tikhonov's, which computes the best compromise between two cost functions: one related to fitting the data, and another enforcing the sparsity of the signal. The method is first tested on simulated EEG signals obtained by solving the forward problem.
For the head and brain source models considered, the results show a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. The spatial sparsity constraint on the field signal concentrates power in the directions of active sources, making it possible to calculate the positions of the sources within the considered volume conductor. The method is then tested on real EEG data. The result agrees with the clinical report, although improvements are needed for more accurate estimates of the source positions.
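Tikhonov regularization, as invoked above, balances data fit against a penalty on the solution. In its basic L2 form the minimizer has a closed form; the sparsity-promoting variant used for EEG would modify the penalty term, but the sketch below shows the standard computation:

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Tikhonov-regularized solution of A x = b:
        x = argmin ||A x - b||^2 + lam * ||x||^2
          = (A^T A + lam I)^{-1} A^T b   (closed form).
    A sparsity-enforcing variant would replace the L2 penalty with an
    L1-type term and lose this closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```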
NASA Astrophysics Data System (ADS)
Tomlinson, Michael S.; De Carlo, Eric Heinen
2016-06-01
The Department of Defense disposed of conventional and chemical munitions as well as bulk containers of chemical agents in US coastal waters including those surrounding the State of Hawai'i. The Hawai'i Undersea Military Munitions Assessment has been collecting biota, water, and sediment samples from two disposal areas south of the island of O'ahu in waters 500 to 600 m deep known to have received both conventional munitions and chemical agents (specifically sulfur mustard). Unlike a number of other sea-disposed munitions investigations which used grabs or corers lowered from surface vessels, we used manned submersibles to collect the samples. Using this approach, we were able to visually identify the munitions and precisely locate our samples in relation to the munitions on the seafloor. This paper focuses on the occurrence and possible sources of arsenic found in the sediments surrounding the disposed military munitions and chemical agents. Using nonparametric multivariate statistical techniques, we looked for patterns in the chemical data obtained from these sediment samples in order to determine the possible sources of the arsenic found in these sediments. The results of the ordination technique nonmetric multidimensional scaling indicate that the arsenic is associated with terrestrial sources and not munitions. This was not altogether surprising given that: (1) the chemical agents disposed of in this area supposedly did not contain arsenic, and (2) the disposal areas studied were under terrestrial influence or served as dredge spoil disposal sites. The sediment arsenic concentrations during this investigation ranged from <1.3 to 40 mg/kg-dry weight with the lower concentrations typically found around control sites and munitions (not located in dredge disposal areas) and the higher values found at dredge disposal sites (with or without munitions). 
During the course of our investigation we did, however, discover that mercury appears to be loosely associated with munitions. Given that mercury contamination has been seen in about 20% of the munitions and ton containers of sulfur mustard, the association of mercury with chemical agents is not totally unexpected.
NASA Astrophysics Data System (ADS)
Lonzaga, Joel Barci
Both modulated ultrasonic radiation pressure and oscillating Maxwell stress from a voltage-modulated ring electrode are employed to excite low-frequency capillary modes of a weakly tapered liquid jet issuing from a nozzle. The capillary modes are waves formed at the surface of the liquid jet. The ultrasound is internally applied to the liquid jet waveguide and is cut off at a location resulting in a significantly enhanced oscillating radiation stress near the cutoff location. Alternatively, the thin electrode can generate a highly localized oscillating Maxwell stress on the jet surface. Experimental evidence shows that a spatially unstable mode with positive group velocity (propagating downstream from the excitation source) and a neutral mode with negative group velocity are both excited. Reflection at the nozzle boundary converts the neutral mode into an unstable one that interferes with the original unstable mode. The interference effect is observed downstream from the source using a laser-based optical extinction technique that detects the surface waves while the modulation frequency is scanned. This technique is very sensitive to small-amplitude disturbances. Existing linear, convective stability analyses on liquid jets accounting for the gravitational effect (i.e. varying radius and velocity) appear not to be applicable to the non-slender, slow liquid jets considered here, where the gravitational effect is found to be substantial at low flow rates. The multiple-scales method, asymptotic expansion and the WKB approximation are used to derive a dispersion relation for the capillary wave similar to that obtained by Rayleigh but accounting for the gravitational effect. These mathematical tools, aided by Langer's transformation, are also used to derive a uniformly valid approximation for the acoustic wave propagation in a tapered cylindrical waveguide. The acoustic analytical approximation is validated by finite-element calculations. 
The jet response is modeled using a hybrid of Fourier analysis and the WKB-type analysis as proposed by Lighthill. The former derives the mode response to a highly localized source while the latter governs the mode propagation in a weakly inhomogeneous jet away from the source.
Earthquake sources near Uturuncu Volcano
NASA Astrophysics Data System (ADS)
Keyson, L.; West, M. E.
2013-12-01
Uturuncu, located in southern Bolivia near the Chile and Argentina border, is a dacitic volcano that was last active 270 ka. It is part of the Altiplano-Puna Volcanic Complex, which spans 50,000 km2 and comprises a series of ignimbrite flare-ups since ~23 Ma. Two sets of evidence suggest that the region is underlain by a significant magma body. First, seismic velocities show a low-velocity layer consistent with a magmatic sill below depths of 15-20 km. This inference is corroborated by high electrical conductivity between 10 km and 30 km. This magma body, the so-called Altiplano-Puna Magma Body (APMB), is the likely source of volcanic activity in the region. InSAR studies show that during the 1990s, the volcano experienced an average uplift of about 1 to 2 cm per year. The deformation is consistent with an expanding source at depth. Though the Uturuncu region exhibits high rates of crustal seismicity, any connection between the inflation and the seismicity is unclear. We investigate the root causes of these earthquakes using a temporary network of 33 seismic stations - part of the PLUTONS project. Our primary approach is based on hypocenter locations and magnitudes paired with correlation-based relative relocation techniques. We find a strong tendency toward earthquake swarms that cluster in space and time. These swarms often last a few days and consist of numerous earthquakes with similar source mechanisms. Most seismicity occurs in the top 10 kilometers of the crust and is characterized by well-defined phase arrivals and significant high-frequency content. The frequency-magnitude relationship of this seismicity demonstrates b-values consistent with tectonic sources. There is a strong clustering of earthquakes around the Uturuncu edifice. Earthquakes elsewhere in the region align in bands striking northwest-southeast, consistent with regional stresses.
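The b-values described as "consistent with tectonic sources" are typically near 1 and can be estimated with Aki's maximum-likelihood formula; a sketch, assuming a known catalog completeness magnitude:

```python
import math

def aki_b_value(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value for events at or above the
    completeness magnitude m_c:
        b = log10(e) / (mean(M) - m_c).
    Tectonic seismicity typically yields b near 1; volcanic swarms
    often yield higher values."""
    above = [m for m in magnitudes if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_c)
```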
Joint body- and surface-wave tomography of Yucca Flat, Nevada
NASA Astrophysics Data System (ADS)
Toney, L. D.; Abbott, R. E.; Preston, L. A.
2017-12-01
In 2015, Sandia National Laboratories conducted an active-source seismic survey of Yucca Flat (YF), Nevada, on the Nevada National Security Site. YF hosted over 650 underground nuclear tests (UGTs) between 1957 and 1992. Data from this survey will help characterize the geologic structure and bulk properties of the region, informing models for the next phase of the Source Physics Experiments. The survey source was a 13,000-kg weight drop at 91 locations along a 19-km N-S transect and 56 locations along an 11-km E-W transect. Over 350 three-component 2-Hz geophones were variably spaced at 10, 20, and 100 m along each line; we used a roll-along survey geometry to ensure 10-m receiver spacing within 2 km of the source. We applied the multiple filter technique to the dataset using a comb of 30 narrow bandpass filters with center frequencies ranging from 1 to 50 Hz. After manually windowing out the fundamental Rayleigh-wave arrival, we picked group-velocity dispersion curves for 50,000 source-receiver pairs. We performed a joint inversion of group-velocity dispersion and existing body-wave travel-time picks for the shear- and compressional-wave velocity structure of YF. Our final models reveal significant Vp / Vs anomalies in the vicinities of legacy UGT sites. The velocity structures corroborate existing seismo-stratigraphic models of YF derived from borehole and gravity data. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
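The multiple filter technique applies a comb of narrow band-pass filters and reads a group arrival time off each filtered envelope; group velocity is then distance over that time. A sketch using Gaussian filters and a frequency-domain Hilbert transform (the filter parameters here are illustrative, not the survey's 30-filter comb):

```python
import numpy as np

def _envelope(x):
    """Envelope via the analytic signal (frequency-domain Hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

def mft_group_velocities(trace, fs, distance, center_freqs, rel_bw=0.1):
    """Multiple filter technique sketch: Gaussian band-pass at each
    centre frequency, then convert the envelope-peak arrival time into
    a group velocity (distance / time)."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(trace)
    vels = []
    for fc in center_freqs:
        gauss = np.exp(-0.5 * ((freqs - fc) / (rel_bw * fc)) ** 2)
        band = np.fft.irfft(spec * gauss, n)
        t_peak = np.argmax(_envelope(band)) / fs
        vels.append(distance / t_peak)
    return vels
```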
Morrison, Deborah; Lin, Qing; Wiehe, Sarah; Liu, Gilbert; Rosenman, Marc; Fuller, Trevor; Wang, Jane; Filippelli, Gabriel
2013-04-01
Urban children remain disproportionately at risk of having higher blood lead levels than their suburban counterparts. The Westside Cooperative Organization (WESCO), located in Marion County, Indianapolis, Indiana, has a history of children with high blood lead levels as well as high soil lead (Pb) values. This study aims at determining the spatial relationship between soil Pb sources and children's blood lead levels. Soils have been identified as a source of chronic Pb exposure to children, but the spatial scale of the source-recipient relationship is not well characterized. Neighborhood-wide analysis of soil Pb distribution along with a furnace filter technique for sampling interior Pb accumulation for selected homes (n = 7) in the WESCO community was performed. Blood lead levels for children aged 0-5 years during the period 1999-2008 were collected. The study population's mean blood lead levels were higher than national averages across all ages, race, and gender. Non-Hispanic blacks and those individuals in the Wishard advantage program had the highest proportion of elevated blood lead levels. The results show that while there is not a direct relationship between soil Pb and children's blood lead levels at a spatial scale of ~100 m, resuspension of locally sourced soil is occurring based on the interior Pb accumulation. County-wide, the largest predictor of elevated blood lead levels is the location within the urban core. Variation in soil Pb and blood lead levels on the community level is high and not predicted by housing stock age or income. Race is a strong predictor for blood lead levels in the WESCO community.
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
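The contrast between the TSVD used by the nulling beamformer and a smooth spectral reweighting can be sketched on a gain matrix: TSVD discards weak singular components outright, while a smooth filter only attenuates them. The smooth filter shown is a generic Tikhonov-style factor, a stand-in for the tuned reweighting of NBSS rather than its exact formula:

```python
import numpy as np

def tsvd_filter(G, k):
    """Truncated SVD: keep only the k largest singular components of G,
    discarding the weak components entirely (as in the nulling
    beamformer's TSVD step)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_f = np.where(np.arange(len(s)) < k, s, 0.0)
    return (U * s_f) @ Vt

def smooth_filter(G, lam):
    """Smooth spectral reweighting s_i -> s_i * s_i^2 / (s_i^2 + lam^2):
    weak components are shrunk rather than discarded.  Illustrative
    only; NBSS uses its own tuned reweighting."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U * (s * s**2 / (s**2 + lam**2))) @ Vt
```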
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Berry, M. L.; Grieme, M.
We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of the ROSD method are threefold: i) source location estimates based on four detectors are more accurate, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate, as opposed to the two real roots (if any) of triangulation, and obviates the need to separate real roots from phantom roots during clustering.
NASA Astrophysics Data System (ADS)
Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.
2011-12-01
A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the source of the N170 component in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
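The Kreisselmeier-Steinhauser function is a smooth envelope of a set of values; applied to the magnitudes |f_k(x)|, it descends toward a minimum wherever all functions vanish simultaneously, which is the property the root-locating algorithm exploits. A sketch:

```python
import math

def ks_function(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of a set of values:
        KS = max(v) + (1/rho) * ln(sum_i exp(rho * (v_i - max(v)))).
    A smooth, differentiable overestimate of max(v); larger rho gives
    a tighter envelope."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def ks_of_abs(funcs, x, rho=50.0):
    """KS envelope of |f_k(x)|: dips toward a minimum where all f_k
    vanish simultaneously, i.e. at a common root."""
    return ks_function([abs(f(x)) for f in funcs], rho)
```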
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1992-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
Infrasound detection of meteors
NASA Astrophysics Data System (ADS)
ElGabry, M. N.; Korrat, I. M.; Hussein, H. M.; Hamama, I. H.
2017-06-01
Meteoroids that penetrate the atmosphere generate infrasound waves of very low frequency content. These waves can be detected even at large distances. In this study, we analyzed the infrasound waves produced by three meteors: the October 7, 2008 TC3 meteor that fell over the Nubian Desert in northern Sudan, the February 15, 2013 Russian fireball, and the February 6, 2016 Atlantic meteor near the Brazilian coast. The signals of these three meteors were detected by the infrasound sensors of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The Progressive Multi-Channel Correlation technique is applied to the signals in order to locate these infrasound sources. Correlating the signals recorded at the collocated elements of each array yields the delays at the different array elements relative to a reference element, from which the azimuth and velocity of the incoming infrasound signals are estimated. The meteor infrasound signals show a sudden change of azimuth due to the variation of the meteor's track at different heights in the atmosphere: because the source is moving, the azimuth changes with time. Our deduced locations correlate well with those obtained from the catalogues of the IDC of the CTBTO.
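The core of such array processing is a plane-wave fit: inter-element delays tau_i = s . (r_i - r_ref) are solved in least squares for the slowness vector s, whose direction gives the back-azimuth and whose magnitude the trace velocity. A sketch, assuming the delays have already been measured by cross-correlation:

```python
import numpy as np

def plane_wave_fit(positions, delays_to_ref, ref=0):
    """Least-squares plane-wave fit to array delays.

    positions: sensor coordinates (x east, y north), metres.
    delays_to_ref: arrival-time delays relative to the reference sensor.
    Solves tau_i = s . (r_i - r_ref) for the slowness vector s, and
    returns (back-azimuth in degrees clockwise from north, pointing
    toward the source, and trace velocity 1/|s|)."""
    r = np.asarray(positions, float) - np.asarray(positions[ref], float)
    tau = np.asarray(delays_to_ref, float)
    s, *_ = np.linalg.lstsq(r, tau, rcond=None)
    # -s points back toward the source
    az = np.degrees(np.arctan2(-s[0], -s[1])) % 360.0
    return az, 1.0 / np.linalg.norm(s)
```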
Pulse width modulation inverter with battery charger
Slicker, James M.
1985-01-01
An inverter is connected between a source of DC power and a three-phase AC induction motor, and a microprocessor-based circuit controls the inverter using pulse width modulation techniques. In the disclosed method of pulse width modulation, both edges of each pulse of a carrier pulse train are equally modulated by a time proportional to sin θ, where θ is the angular displacement of the pulse center at the motor stator frequency from a fixed reference point on the carrier waveform. The carrier waveform frequency is a multiple of the motor stator frequency. The modulated pulse train is then applied to each of the motor phase inputs with respective phase shifts of 120° at the stator frequency. Switching control commands for electronic switches in the inverter are stored in a random access memory (RAM) and the locations of the RAM are successively read out in a cyclic manner, each bit of a given RAM location controlling a respective phase input of the motor. The DC power source preferably comprises rechargeable batteries and all but one of the electronic switches in the inverter can be disabled, the remaining electronic switch being part of a "flyback" DC-DC converter circuit for recharging the battery.
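The both-edges modulation described above can be sketched by computing, for each carrier pulse, a time shift proportional to sin θ at the pulse centre. The mod_depth parameter below is an assumed fractional modulation amplitude, not a value from the patent:

```python
import math

def modulated_edges(n_carrier, stator_freq, carrier_mult, mod_depth):
    """Sketch of both-edge sinusoidal PWM.

    For each carrier pulse, theta is the angular position of the pulse
    centre at the stator frequency; both edges are shifted by a time
    proportional to sin(theta), widening or narrowing the pulse.
    mod_depth (assumed parameter) is the shift as a fraction of the
    carrier period.  Returns (leading, trailing) edge times."""
    carrier_freq = stator_freq * carrier_mult
    period = 1.0 / carrier_freq
    edges = []
    for k in range(n_carrier):
        centre = (k + 0.5) * period
        theta = 2.0 * math.pi * stator_freq * centre
        dt = mod_depth * period * math.sin(theta)
        # leading edge moves earlier, trailing edge later: pulse widens
        edges.append((centre - period / 4.0 - dt, centre + period / 4.0 + dt))
    return edges
```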
Pulse width modulation inverter with battery charger
NASA Technical Reports Server (NTRS)
Slicker, James M. (Inventor)
1985-01-01
An inverter is connected between a source of DC power and a three-phase AC induction motor, and a microprocessor-based circuit controls the inverter using pulse width modulation techniques. In the disclosed method of pulse width modulation, both edges of each pulse of a carrier pulse train are equally modulated by a time proportional to sin θ, where θ is the angular displacement of the pulse center at the motor stator frequency from a fixed reference point on the carrier waveform. The carrier waveform frequency is a multiple of the motor stator frequency. The modulated pulse train is then applied to each of the motor phase inputs with respective phase shifts of 120° at the stator frequency. Switching control commands for electronic switches in the inverter are stored in a random access memory (RAM) and the locations of the RAM are successively read out in a cyclic manner, each bit of a given RAM location controlling a respective phase input of the motor. The DC power source preferably comprises rechargeable batteries and all but one of the electronic switches in the inverter can be disabled, the remaining electronic switch being part of a flyback DC-DC converter circuit for recharging the battery.
40 CFR 63.4081 - Am I subject to this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... source or group of stationary sources located within a contiguous area and under common control that... sources located within a contiguous area and under common control that is not a major source. (b) The..., spray guns or dip tanks; (4) Application of porcelain enamel, powder coating, and asphalt interior...
40 CFR 63.4081 - Am I subject to this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... source or group of stationary sources located within a contiguous area and under common control that... sources located within a contiguous area and under common control that is not a major source. (b) The..., spray guns or dip tanks; (4) Application of porcelain enamel, powder coating, and asphalt interior...
40 CFR 63.4081 - Am I subject to this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... source or group of stationary sources located within a contiguous area and under common control that... sources located within a contiguous area and under common control that is not a major source. (b) The..., spray guns or dip tanks; (4) Application of porcelain enamel, powder coating, and asphalt interior...
Locating sources within a dense sensor array using graph clustering
NASA Astrophysics Data System (ADS)
Gerstoft, P.; Riahi, N.
2017-12-01
We develop a model-free technique to identify weak sources within dense sensor arrays using graph clustering. No knowledge about the propagation medium is needed except that signal strengths decay to insignificant levels within a scale that is shorter than the aperture. We reinterpret the spatial coherence matrix of a wave field as a matrix whose support is the connectivity matrix of a graph with sensors as vertices. In a dense network, well-separated sources induce clusters in this graph, and the geographic spread of these clusters can serve to localize the sources. The support of the covariance matrix is estimated from limited-time data using a hypothesis test with a robust phase-only coherence test statistic combined with a physical distance criterion. The latter criterion ensures graph sparsity and thus prevents clusters from forming by chance. We verify the approach and quantify its reliability on a simulated dataset. The method is then applied to data from a dense 5200-element geophone array that blanketed the city of Long Beach, CA. The analysis exposes a helicopter traversing the array as well as oil production facilities.
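The clustering step can be sketched directly: build a graph whose edges require both high coherence and physical proximity (the sparsity criterion), then take connected components as source clusters and their centroids as source locations. The thresholds below are illustrative, not the paper's hypothesis-test values:

```python
import numpy as np

def source_clusters(coherence, positions, coh_thresh, max_dist):
    """Sensors as graph vertices; an edge requires BOTH high coherence
    and physical proximity (the distance criterion keeps the graph
    sparse so clusters cannot form by chance).  Connected components
    with more than one sensor mark source regions; each cluster's
    centroid localizes its source."""
    pos = np.asarray(positions, float)
    n = len(pos)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if (abs(coherence[i][j]) >= coh_thresh
                    and np.linalg.norm(pos[i] - pos[j]) <= max_dist):
                adj[i].append(j)
                adj[j].append(i)
    seen, centroids = set(), []
    for v in range(n):          # connected components by DFS
        if v in seen:
            continue
        stack, comp = [v], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(adj[u])
        if len(comp) > 1:
            centroids.append(pos[comp].mean(axis=0))
    return centroids
```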
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daley, Tom; Majer, Ernie
2007-04-30
Seismic stimulation is a proposed enhanced oil recovery(EOR) technique which uses seismic energy to increase oil production. Aspart of an integrated research effort (theory, lab and field studies),LBNL has been measuring the seismic amplitude of various stimulationsources in various oil fields (Majer, et al., 2006, Roberts,et al.,2001, Daley et al., 1999). The amplitude of the seismic waves generatedby a stimulation source is an important parameter for increased oilmobility in both theoretical models and laboratory core studies. Theseismic amplitude, typically in units of seismic strain, can be measuredin-situ by use of a borehole seismometer (geophone). Measuring thedistribution of amplitudes within amore » reservoir could allow improved designof stimulation source deployment. In March, 2007, we provided in-fieldmonitoring of two stimulation sources operating in Occidental (Oxy)Permian Ltd's South Wasson Clear Fork (SWCU) unit, located near DenverCity, Tx. The stimulation source is a downhole fluid pulsation devicedeveloped by Applied Seismic Research Corp. (ASR). Our monitoring used aborehole wall-locking 3-component geophone operating in two nearbywells.« less
A Bayesian approach to multi-messenger astronomy: identification of gravitational-wave host galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, XiLong; Messenger, Christopher; Heng, Ik Siong
We present a general framework for incorporating astrophysical information into the Bayesian parameter estimation techniques used in gravitational wave data analysis to facilitate multi-messenger astronomy. Since the progenitors of transient gravitational wave events, such as compact binary coalescences, are likely to be associated with a host galaxy, improvements to the source sky location estimates through the use of host galaxy information are explored. To demonstrate how host galaxy properties can be included, we simulate a population of compact binary coalescences and show that for ∼8.5% of simulations within 200 Mpc, the top 10 most likely galaxies account for ∼50% of the total probability of hosting a gravitational wave source. The true gravitational wave source host galaxy is in the top 10 galaxy candidates ∼10% of the time. Furthermore, we show that by including host galaxy information, a better estimate of the inclination angle of a compact binary gravitational wave source can be obtained. We also demonstrate the flexibility of our method by incorporating the use of either the B or K band into our analysis.
Bayesian Inference for Source Reconstruction: A Real-World Application
Yee, Eugene; Hoffman, Ian; Ungar, Kurt
2014-01-01
This paper applies a Bayesian probabilistic inferential methodology to the reconstruction of the location and emission rate of an actual contaminant source (emission from the Chalk River Laboratories medical isotope production facility) using a small number of activity concentration measurements of a noble gas (Xenon-133) obtained from three stations that form part of the International Monitoring System radionuclide network. The sampling of the resulting posterior distribution of the source parameters is undertaken using a very efficient Markov chain Monte Carlo technique that utilizes a multiple-try differential evolution adaptive Metropolis algorithm with an archive of past states. It is shown that the principal difficulty in the reconstruction lay in the correct specification of the model errors (both scale and structure) for use in the Bayesian inferential methodology. In this context, two different measurement models for incorporating the model error of the predicted concentrations are considered. The performance of these two measurement models, with respect to their accuracy and precision in recovering the source parameters, is compared and contrasted. PMID:27379292
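The paper's sampler is a multiple-try differential evolution adaptive Metropolis algorithm; a plain random-walk Metropolis sketch conveys the underlying idea of sampling a posterior over source parameters. The target below is an illustrative Gaussian, not the paper's dispersion-model likelihood:

```python
import math
import random

def metropolis(log_post, x0, step, n_samples, seed=0):
    """Minimal random-walk Metropolis sampler (a stand-in for the
    paper's multiple-try DREAM-style sampler).

    log_post: log posterior density over the parameter vector.
    Proposes Gaussian steps; accepts with probability
    min(1, exp(log_post(prop) - log_post(current)))."""
    rng = random.Random(seed)
    x = list(x0)
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain
```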
Almendros, J.; Chouet, B.; Dawson, P.; Bond, T.
2002-01-01
We analyzed 16 seismic events recorded by the Hawaiian broad-band seismic network at Kilauea Volcano during the period September 9-26, 1999. Two distinct types of event are identified based on their spectral content, very-long-period (VLP) waveform, amplitude decay pattern and particle motion. We locate the VLP signals with a method based on analyses of semblance and particle motion. Different source regions are identified for the two event types. One source region is located at depths of ~1 km beneath the northeast edge of the Halemaumau pit crater. A second region is located at depths of ~8 km below the northwest quadrant of Kilauea caldera. Our study represents the first time that such deep sources have been identified in VLP data at Kilauea. This discovery opens the possibility of obtaining a detailed image of the location and geometry of the magma plumbing system beneath this volcano based on source locations and moment tensor inversions of VLP signals recorded by a permanent, large-aperture broad-band network.
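Semblance-based location of the kind used in the abstract above can be illustrated with a grid search: for each candidate source, each station's record is delayed by the predicted travel time and the semblance (a normalised coherence measure, ~1 for identical aligned waveforms, ~1/N for incoherent ones) of the aligned traces is evaluated. The geometry, velocity model, and waveform below are toy assumptions, not the Kilauea data set:

```python
import math

def semblance(traces):
    # Semblance of time-aligned traces:
    # sum_t (sum_i u_i(t))^2 / (N * sum_t sum_i u_i(t)^2)
    n = len(traces)
    num = sum(sum(col) ** 2 for col in zip(*traces))
    den = n * sum(v * v for tr in traces for v in tr)
    return num / den if den else 0.0

def locate(stations, records, dt, speed, grid, window=50):
    # Grid search: delay each record by the candidate's travel time and
    # keep the grid point that maximises semblance of the aligned traces.
    best_s, best_pt = -1.0, None
    for gx, gy in grid:
        aligned = []
        for (sx, sy), rec in zip(stations, records):
            shift = int(round(math.hypot(gx - sx, gy - sy) / speed / dt))
            aligned.append(rec[shift:shift + window])
        s = semblance(aligned)
        if s > best_s:
            best_s, best_pt = s, (gx, gy)
    return best_pt

# Synthetic example: a Gaussian pulse radiated from (1, 1) at unit speed,
# recorded at four corner stations with a constant-velocity medium.
dt, speed = 0.1, 1.0
stations = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
pulse = lambda t: math.exp(-((t - 2.0) / 0.3) ** 2)
records = [
    [pulse(k * dt - math.hypot(1.0 - sx, 1.0 - sy) / speed) for k in range(150)]
    for sx, sy in stations
]
grid = [(i * 0.5, j * 0.5) for i in range(9) for j in range(9)]
found = locate(stations, records, dt, speed, grid)
print(found)
```

Real VLP location additionally uses particle-motion constraints and a realistic velocity structure; the sketch only shows why semblance peaks at the true source when the assumed travel times line the traces up.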
Locating multiple diffusion sources in time varying networks from sparse observations.
Hu, Zhao-Long; Shen, Zhesi; Cao, Shinan; Podobnik, Boris; Yang, Huijie; Wang, Wen-Xu; Lai, Ying-Cheng
2018-02-08
Data based source localization in complex networks has a broad range of applications. Despite recent progress, locating multiple diffusion sources in time varying networks remains an outstanding problem. Bridging structural observability and sparse signal reconstruction theories, we develop a general framework to locate diffusion sources in time varying networks based solely on sparse data from a small set of messenger nodes. A general finding is that large degree nodes produce more valuable information than small degree nodes, a result that contrasts with the case of static networks. Choosing large degree nodes as the messengers, we find that sparse observations from a few such nodes are often sufficient for any number of diffusion sources to be located for a variety of model and empirical networks. Counterintuitively, sources in more rapidly varying networks can be identified more readily with fewer required messenger nodes.
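The paper's framework (multiple sources, time-varying topology, sparse reconstruction) is well beyond a short sketch, but the underlying idea of inferring a source from sparse observer data can be shown in the much simpler single-source, static-network setting: the unknown start time is a common offset to every observer's arrival time, so the candidate whose hop distances most consistently explain the observations is the best guess. The graph and observation times below are made up for illustration:

```python
from collections import deque

def bfs_dist(adj, src):
    # Hop distances from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, observed):
    # observed: {observer_node: arrival_time}. The unknown start time is a
    # common offset, so score each candidate by the variance of
    # (arrival time - hop distance) over the observers; the true source
    # gives identical residuals (zero variance) for noise-free data.
    best, best_score = None, float("inf")
    for cand in adj:
        d = bfs_dist(adj, cand)
        resid = [t - d[o] for o, t in observed.items()]
        mean = sum(resid) / len(resid)
        score = sum((r - mean) ** 2 for r in resid)
        if score < best_score:
            best, best_score = cand, score
    return best

# Toy network: two chains meeting at node 0. Spreading starts at node 0 at
# time 7 (one hop per time unit) and is observed only at nodes 2, 3 and 5.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2], 4: [0, 5], 5: [4]}
observed = {2: 9, 3: 10, 5: 9}
src_hat = locate_source(adj, observed)
print(src_hat)
```

The paper's contribution is precisely what this sketch lacks: handling several simultaneous sources and a topology that changes over time, by recasting localization as a sparse signal reconstruction problem over messenger-node data.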
Rogers, Ian S.; Cury, Ricardo C.; Blankstein, Ron; Shapiro, Michael D.; Nieman, Koen; Hoffmann, Udo; Brady, Thomas J.; Abbara, Suhny
2010-01-01
Background Despite rapid advances in cardiac computed tomography (CT), a strategy for optimal visualization of perfusion abnormalities on CT has yet to be validated. Objective To evaluate the performance of several post-processing techniques of source data sets to detect and characterize perfusion defects in acute myocardial infarctions with cardiac CT. Methods Twenty-one subjects (18 men; 60 ± 13 years) that were successfully treated with percutaneous coronary intervention for ST-segment myocardial infarction underwent 64-slice cardiac CT and 1.5 Tesla cardiac MRI scans following revascularization. Delayed enhancement MRI images were analyzed to identify the location of infarcted myocardium. Contiguous short axis images of the left ventricular myocardium were created from the CT source images using 0.75-mm multiplanar reconstruction (MPR), 5-mm MPR, 5-mm maximal intensity projection (MIP), and 5-mm minimum intensity projection (MinIP) techniques. Segments already confirmed to contain infarction by MRI were then evaluated qualitatively and quantitatively with CT. Results Overall, 143 myocardial segments were analyzed. On qualitative analysis, the MinIP and thick MPR techniques had greater visibility and definition than the thin MPR and MIP techniques (p < 0.001). On quantitative analysis, the absolute difference in Hounsfield Unit (HU) attenuation between normal and infarcted segments was significantly greater for the MinIP (65.4 HU) and thin MPR (61.2 HU) techniques. However, the relative difference in HU attenuation was greatest for the MinIP technique alone (95%, p < 0.001). Contrast-to-noise ratio was greatest for the MinIP (4.2) and thick MPR (4.1) techniques (p < 0.001). Conclusion Our investigation found that MinIP and thick MPR detected infarcted myocardium with greater visibility and definition than MIP and thin MPR. PMID:20579617
Stratification of a closed region containing two buoyancy sources
NASA Astrophysics Data System (ADS)
Thompson, Andrew; Linden, Paul
2005-11-01
Many closed systems such as lakes, ocean basins, and rooms have inputs of buoyancy at different levels. We address the question of how the resulting stratification depends on the location of these sources. For example, a lake is heated and cooled at the surface, while in a room cool air may be supplied at the ceiling while the heat source is a person standing on the floor. We present an experimental study of convection in a finite box in which we systematically vary the vertical location of two well-separated, constant buoyancy sources. We specifically consider the case of a dense source and a light source so that there is no net buoyancy flux into the tank. We study the development of the large-time stratification in the tank, which falls between one of two limits. When the location of the dense source is significantly higher than the light source, the fluid is well mixed and the system remains largely unstratified. When the location of the light source is significantly higher than the dense source, a two-layer stratification develops. We find that the circulation pattern is dominated by counter-flowing shear layers (Wong, Griffiths & Hughes, 2001), whose number and strength are strongly influenced by the buoyancy source locations. The shear layers are the primary means of communication between the plumes and thus play a large role in the resulting stratification. We support our findings with a simple numerical model.
Code of Federal Regulations, 2010 CFR
2010-07-01
... demonstrated initial compliance if . . . 1. 2SLB and 4SLB stationary RICE >500 HP located at a major source and new or reconstructed CI stationary RICE >500 HP located at a major source a. Reduce CO emissions and... initial performance test. 2. 2SLB and 4SLB stationary RICE >500 HP located at a major source and new or...