Location of acoustic emission sources generated by air flow
Kosel; Grabec; Muzic
2000-03-01
The location of continuous acoustic emission sources is a difficult problem in non-destructive testing. This article describes one-dimensional location of continuous acoustic emission sources using an intelligent locator. The intelligent locator solves the location problem by learning from examples. To verify whether continuous acoustic emission caused by leakage air flow can be located accurately by the intelligent locator, an experiment on a thin aluminum band was performed. Results show that an accurate location can be determined by combining a cross-correlation function with an appropriate bandpass filter. With this combination, both discrete and continuous acoustic emission sources can be located after the locator has been trained on discrete sources alone.
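As a rough illustration of the cross-correlation step described in this abstract, the sketch below estimates a 1-D source position on a band from the cross-correlation lag between two end sensors. The bandpass filter and the learned locator are omitted; the function name, sensor geometry, and impulse-like signals are illustrative assumptions, not the authors' code.

```python
import numpy as np

def locate_1d(sig_a, sig_b, fs, v, L):
    """Estimate a 1-D source position on a band of length L (m) from the
    cross-correlation lag between sensors at x=0 and x=L (sketch,
    noise-free signals, known wave speed v in m/s)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # samples; lag = (t_a - t_b) * fs
    dt = lag / fs                             # arrival-time difference t_a - t_b
    # t_a = x/v and t_b = (L - x)/v, so t_a - t_b = (2x - L)/v
    return (L + v * dt) / 2.0
```

With an impulse reaching sensor A after 3 samples and sensor B after 7 samples (fs = 10 kHz, v = 1000 m/s, L = 1 m), the estimate recovers x = 0.3 m.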
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohd, Shukri; Holford, Karen M.; Pullin, Rhys
2014-02-12
Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and DeltaT location. The results of the study show that the WTML method produces more accurate location results compared with TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with the DeltaT location method but requires no initial acoustic calibration of the structure.
Acoustic localization of triggered lightning
NASA Astrophysics Data System (ADS)
Arechiga, Rene O.; Johnson, Jeffrey B.; Edens, Harald E.; Thomas, Ronald J.; Rison, William
2011-05-01
We use acoustic (3.3-500 Hz) arrays to locate local (<20 km) thunder produced by triggered lightning in the Magdalena Mountains of central New Mexico. The locations of the thunder sources are determined by the array back azimuth and the elapsed time since discharge of the lightning flash. We compare the acoustic source locations with those obtained by the Lightning Mapping Array (LMA) from Langmuir Laboratory, which is capable of accurately locating the lightning channels. To estimate the location accuracy of the acoustic array we performed Monte Carlo simulations and measured the distance (nearest neighbors) between acoustic and LMA sources. For close sources (<5 km) the mean nearest-neighbors distance was 185 m compared to 100 m predicted by the Monte Carlo analysis. For far distances (>6 km) the error increases to 800 m for the nearest neighbors and 650 m for the Monte Carlo analysis. This work shows that thunder sources can be accurately located using acoustic signals.
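The localization scheme this abstract describes (back azimuth plus elapsed time since the flash) can be sketched in a few lines. The straight-line propagation at a fixed sound speed and the flat local geometry are simplifying assumptions; the function name is illustrative.

```python
import math

def thunder_source(az_deg, elapsed_s, c=343.0):
    """Map a thunder arrival to a horizontal position (east, north) in
    metres from the array back azimuth (degrees clockwise from north)
    and the time elapsed since the lightning discharge (sketch)."""
    r = c * elapsed_s                 # range, assuming straight-line travel
    az = math.radians(az_deg)
    return r * math.sin(az), r * math.cos(az)
```

For example, thunder arriving 10 s after the flash from a back azimuth of 90° places the source about 3.43 km due east of the array.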
Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring
NASA Astrophysics Data System (ADS)
Yin, J.; Ng, R.; Nakata, N.
2017-12-01
Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is popular in Southern Oklahoma, produces earthquakes greater than magnitude 2.0. Accurate locations and mechanisms of these events provide important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source locations are also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which takes advantage of wavefield back projection to obtain accurate microseismic locations. We apply GmRTM to microseismic data collected during hydraulic fracturing to image microseismic source locations and, potentially, fractures. Assuming an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared to HypoDD or P/S travel-time based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.
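The core idea of geometric-mean back projection can be illustrated with a 1-D toy: each candidate source position is scored by sampling every station's envelope at its predicted travel time, and stations are combined with a geometric rather than arithmetic mean, which suppresses positions supported by only some stations. This is a sketch under a known constant velocity, not the GmRTM implementation.

```python
import numpy as np

def gm_backproject(traces, fs, stations, grid, v):
    """Toy geometric-mean back projection in 1-D: for each candidate
    position, sample each envelope trace at its predicted travel time
    and take the geometric mean across stations (sketch)."""
    img = np.ones(len(grid))
    for sta, tr in zip(stations, traces):
        t = np.abs(grid - sta) / v                        # predicted travel times
        idx = np.clip((t * fs).astype(int), 0, len(tr) - 1)
        img *= np.maximum(tr[idx], 1e-12)                 # amplitude at that time
    return img ** (1.0 / len(stations))                   # geometric mean
```

With synthetic Gaussian envelopes from a source at x = 50 m recorded by stations at 0 and 200 m, the image peaks at the true position.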
Padilla, Mabel; Mattson, Christine L; Scheer, Susan; Udeagu, Chi-Chi N; Buskin, Susan E; Hughes, Alison J; Jaenicke, Thomas; Wohl, Amy Rock; Prejean, Joseph; Wei, Stanley C
Human immunodeficiency virus (HIV) case surveillance and other health care databases are increasingly being used for public health action, which has the potential to optimize the health outcomes of people living with HIV (PLWH). However, often PLWH cannot be located based on the contact information available in these data sources. We assessed the accuracy of contact information for PLWH in HIV case surveillance and additional data sources and whether time since diagnosis was associated with accurate contact information in HIV case surveillance and successful contact. The Case Surveillance-Based Sampling (CSBS) project was a pilot HIV surveillance system that selected a random population-based sample of people diagnosed with HIV from HIV case surveillance registries in 5 state and metropolitan areas. From November 2012 through June 2014, CSBS staff members attempted to locate and interview 1800 sampled people and used 22 data sources to search for contact information. Among 1063 contacted PLWH, HIV case surveillance data provided accurate telephone number, address, or HIV care facility information for 239 (22%), 412 (39%), and 827 (78%) sampled people, respectively. CSBS staff members used additional data sources, such as support services and commercial people-search databases, to locate and contact PLWH with insufficient contact information in HIV case surveillance. PLWH diagnosed <1 year ago were more likely to have accurate contact information in HIV case surveillance than were PLWH diagnosed ≥1 year ago (P = .002), and the benefit from using additional data sources was greater for PLWH with more longstanding HIV infection (P < .001). When HIV case surveillance cannot provide accurate contact information, health departments can prioritize searching additional data sources, especially for people with more longstanding HIV infection.
Hydrogen atoms can be located accurately and precisely by x-ray crystallography.
Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan
2016-05-01
Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.
Deep space target location with Hubble Space Telescope (HST) and Hipparcos data
NASA Technical Reports Server (NTRS)
Null, George W.
1988-01-01
Interplanetary spacecraft navigation requires accurate a priori knowledge of target positions. A concept is presented for attaining improved target ephemeris accuracy using two future Earth-orbiting optical observatories, the European Space Agency (ESA) Hipparcos observatory and the NASA Hubble Space Telescope (HST). Assuming nominal observatory performance, the Hipparcos data reduction will provide an accurate global star catalog, and HST will provide a capability for accurate angular measurements of stars and solar system bodies. The target location concept employs HST to observe solar system bodies relative to Hipparcos catalog stars and to determine the orientation (frame tie) of these stars to compact extragalactic radio sources. The target location process is described, the major error sources are discussed, the potential target ephemeris error is predicted, and mission applications are identified. Preliminary results indicate that ephemeris accuracy comparable to the errors in individual Hipparcos catalog stars may be possible with a more extensive HST observing program. Possible future ground and space-based replacements for Hipparcos and HST astrometric capabilities are also discussed.
NASA Astrophysics Data System (ADS)
Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji
2017-12-01
Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called a mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations compared to the existing tsunami observation networks.
Martina, E.F.
1958-10-14
An improved pulsed ion source is presented, of the type in which the gas to be ionized is released within the source by momentary heating of an electrode occluded with the gas. Other details of the ion source construction include an electron-emitting filament and a positive reference grid, between which an electron discharge is set up, and electrode means for withdrawing the ions from the source. Because the gas source is located behind the electron discharge region, and the vacuum exhaust system is positioned on the opposite side of the discharge, the released gas is drawn into the electron discharge and ionized in accurately controlled amounts. Consequently, the output pulses of the ion source may be accurately controlled.
Locating the source of diffusion in complex networks by time-reversal backward spreading.
Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2016-03-01
Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
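The time-reversal idea in this abstract can be sketched directly: each observer's arrival time is "reversed" by its graph distance to a candidate node, and the true source is the candidate whose reversed times collapse to (nearly) a single value. The sketch below assumes unit propagation time per hop and noise-free arrivals on a small connected graph; it is not the authors' implementation.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop-count shortest-path distances from src on an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def locate_source(adj, arrivals):
    """Time-reversal backward spreading (sketch): pick the candidate node
    whose distance-reversed observer times have minimum variance."""
    best, best_var = None, float("inf")
    for cand in adj:
        d = bfs_dist(adj, cand)
        rev = [t - d[o] for o, t in arrivals.items()]
        mean = sum(rev) / len(rev)
        var = sum((r - mean) ** 2 for r in rev)
        if var < best_var:
            best, best_var = cand, var
    return best
```

On a five-node path graph with the source at node 2, three observed arrival times suffice to recover the source exactly.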
NASA Astrophysics Data System (ADS)
Gang, Yin; Yingtang, Zhang; Hongbo, Fan; Zhining, Li; Guoquan, Ren
2016-05-01
We have developed a method for automatic detection, localization and classification (DLC) of multiple dipole sources using magnetic gradient tensor data. First, we define modified tilt angles to estimate the approximate horizontal locations of the multiple dipole-like magnetic sources simultaneously and detect the number of magnetic sources using a fixed threshold. Secondly, based on the isotropy of the normalized source strength (NSS) response of a dipole, we obtain accurate horizontal locations of the dipoles. Then the vertical locations are calculated using magnitude magnetic transforms of magnetic gradient tensor data. Finally, we invert for the magnetic moments of the sources using the measured magnetic gradient tensor data and a forward model. Synthetic and field data sets demonstrate the effectiveness and practicality of the proposed method.
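The final step of the pipeline above (inverting for a magnetic moment once the source location is known) is linear and can be sketched with the standard point-dipole field formula. This is a generic magnetostatics sketch, not the authors' DLC code, and it uses field vectors rather than gradient tensors for brevity.

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def dipole_matrix(r_vec):
    """Linear operator A with B = A @ m for a point dipole at the origin
    observed at position r_vec (standard magnetostatic dipole field)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return MU0_4PI / r**3 * (3.0 * np.outer(rhat, rhat) - np.eye(3))

def invert_moment(obs_points, obs_fields):
    """Least-squares dipole moment from field vectors at known points,
    assuming the source location has already been fixed (sketch)."""
    A = np.vstack([dipole_matrix(p) for p in obs_points])
    b = np.concatenate(obs_fields)
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m
```

Because the field is linear in the moment, three vector measurements at generic positions are more than enough to recover it exactly in the noise-free case.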
2018-01-01
ABSTRACT Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there are very few efforts of modeling structures that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include: Census data as source data for the existing population, Twitter geo-located data, and locations of schools as ancillary data to redistribute the source data more accurately in the space, and finally gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data in smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
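Density-weighted areal interpolation of the kind described above amounts to splitting each source zone's count over its target cells in proportion to an ancillary weight. The sketch below assumes simple dictionaries for zones, cells, and weights; the real study works with spatial geometries.

```python
def dasymetric_disaggregate(zone_pop, cell_weights):
    """Density-weighted areal interpolation (sketch): distribute each
    source zone's population over its target cells in proportion to an
    ancillary weight (e.g. geo-located tweet density) per cell."""
    out = {}
    for zone, pop in zone_pop.items():
        w = cell_weights[zone]
        total = sum(w.values())
        for cell, wc in w.items():
            # fall back to an equal split if the zone has no ancillary signal
            share = wc / total if total > 0 else 1.0 / len(w)
            out[cell] = out.get(cell, 0.0) + pop * share
    return out
```

The disaggregation is mass-preserving: the cell values of each zone sum back to the zone total.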
Simultaneous head tissue conductivity and EEG source location estimation.
Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott
2016-01-01
Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality.
Jun, James Jaeyoon; Longtin, André; Maler, Leonard
2013-01-01
In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; however, estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm finds the dipole location having the closest matching normalized RSIs from the LUT, and further refines the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animals' positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
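The lookup-table scheme above can be sketched end to end with a toy 2-D dipole intensity model (|cos| directivity with inverse-square falloff). The model, grid, and function names are illustrative assumptions; the paper's fish-specific model and coarse-to-fine refinement are omitted.

```python
import numpy as np

def rsi_model(pos, theta, detectors):
    """Assumed toy RSI model for a 2-D dipole at `pos` with orientation
    `theta`: |cos(angle to dipole axis)| directivity, 1/r^2 falloff."""
    p = np.array([np.cos(theta), np.sin(theta)])
    d = detectors - pos
    r = np.linalg.norm(d, axis=1)
    return np.abs(d @ p) / r**3          # = |cos(angle)| / r^2

def build_lut(detectors, xs, ys, thetas):
    """Precompute normalized RSI patterns over candidate positions/orientations."""
    lut = []
    for x in xs:
        for y in ys:
            for th in thetas:
                v = rsi_model(np.array([x, y], dtype=float), th, detectors)
                lut.append(((x, y, th), v / np.linalg.norm(v)))
    return lut

def locate_dipole(measured, lut):
    """Return the (x, y, theta) whose normalized RSI pattern best matches."""
    m = measured / np.linalg.norm(measured)
    return min(lut, key=lambda entry: np.linalg.norm(entry[1] - m))[0]
```

Normalizing the RSI vectors makes the match insensitive to overall signal strength, which is what allows position and orientation to be recovered without per-individual calibration.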
NASA Astrophysics Data System (ADS)
Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.
2016-12-01
Scattered waves generated near the source contain energy converted from the near-field waves to the far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We investigate seismograms in the 1.0-2.0 Hz frequency band to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is found based on the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korea tests from years 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.
Site survey method and apparatus
Oldham, James G.; Spencer, Charles R.; Begley, Carl L.; Meyer, H. Robert
1991-06-18
The disclosure of the invention is directed to a site survey ground vehicle based apparatus and method for automatically detecting source materials, such as radioactivity, marking the location of the source materials, such as with paint, and mapping the location of the source materials on a site. The apparatus of the invention is also useful for collecting and analyzing samples. The apparatus includes a ground vehicle, detectors mounted at the front of the ground vehicle, and individual detector supports which follow somewhat irregular terrain to allow consistent and accurate detection, and autolocation equipment.
NASA Astrophysics Data System (ADS)
Guo, H.; Zhang, H.
2016-12-01
High-precision earthquake relocation is a central task for monitoring earthquakes and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations to reduce the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, and used it to relocate non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To utilize the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method removes the event origin time and station correction terms from the inversion system and cancels out the effects of velocity uncertainties both near and outside the source region. We tested and applied the new method to northern California earthquakes to validate its performance. In comparison, among the three DD location methods, the new double-pair DD method determines more accurate relative locations, while the station-pair DD method better improves absolute locations. We therefore propose a location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick first arrivals and derive WCC event-pair differential times, so the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered due to the low precision of the relative locations. Because double-pair data can be constructed directly from station-pair data, the double-pair DD method can also be used to improve NVT locations. We have applied the new method to the NVTs beneath the SAF near Cholame, California. Compared to previous results, the new double-pair DD tremor locations are more concentrated and show more detailed structures.
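The double-pair combination described above is easy to state in code: for events a, b and stations i, j, the datum is (t_i^a - t_i^b) - (t_j^a - t_j^b), in which any per-event origin-time error and any per-station static cancel identically. The sketch below just builds these data from raw arrival times; the DD inversion itself is not shown.

```python
def double_pair_dt(t):
    """Build double-pair differential times from arrivals t[event][station].
    Per-event origin-time errors and per-station statics cancel exactly
    in this combination (sketch of the data construction only)."""
    events = sorted(t)
    stations = sorted(t[events[0]])
    out = {}
    for a in events:
        for b in events:
            if b <= a:
                continue
            for i in stations:
                for j in stations:
                    if j <= i:
                        continue
                    out[(a, b, i, j)] = (t[a][i] - t[b][i]) - (t[a][j] - t[b][j])
    return out
```

Adding arbitrary origin-time shifts and station corrections to synthetic travel times leaves every double-pair datum unchanged, which is the cancellation property the abstract relies on.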
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
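The model-selection step described above can be sketched with the least-squares form of the AIC, n·ln(RSS/n) + 2k; the extra parameters of the double-source model must buy enough misfit reduction to lower the criterion. The parameter counts and residuals below are illustrative, not values from the W-phase inversions.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with n data
    points, k free parameters, and residual sum of squares rss."""
    return n * math.log(rss / n) + 2 * k

def select_model(rss_single, k_single, rss_double, k_double, n):
    """Pick the single- or double-point-source solution by AIC (sketch of
    the selection step only; the inversions themselves are not shown)."""
    if aic(rss_single, n, k_single) <= aic(rss_double, n, k_double):
        return "single"
    return "double"
```

A marginal misfit improvement keeps the single-source model; a substantial one justifies the second source.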
Awakening the BALROG: BAyesian Location Reconstruction Of GRBs
NASA Astrophysics Data System (ADS)
Burgess, J. Michael; Yu, Hoi-Fung; Greiner, Jochen; Mortlock, Daniel J.
2018-05-01
The accurate spatial location of gamma-ray bursts (GRBs) is crucial for both accurately characterizing their spectra and follow-up observations by other instruments. The Fermi Gamma-ray Burst Monitor (GBM) has the largest field of view for detecting GRBs as it views the entire unocculted sky, but as a non-imaging instrument it relies on the relative count rates observed in each of its 14 detectors to localize transients. Improving its ability to accurately locate GRBs and other transients is vital to the paradigm of multimessenger astronomy, including the electromagnetic follow-up of gravitational wave signals. Here we present the BAyesian Location Reconstruction Of GRBs (BALROG) method for localizing and characterizing GBM transients. Our approach eliminates the systematics of previous approaches by simultaneously fitting for the location and spectrum of a source. It also correctly incorporates the uncertainties in the location of a transient into the spectral parameters and produces reliable positional uncertainties for both well-localized sources and those for which the GBM data cannot effectively constrain the position. While computationally expensive, BALROG can be implemented to enable quick follow-up of all GBM transient signals. Also, we identify possible response problems that require attention and caution when using standard, public GBM detector response matrices. Finally, we examine the effects of including the uncertainty in location on the spectral parameters of GRB 080916C. We find that spectral parameters change and no extra components are required when these effects are included in contrast to when we use a fixed location. This finding has the potential to alter both the GRB spectral catalogues and the reported spectral composition of some well-known GRBs.
Locating the source of spreading in temporal networks
NASA Astrophysics Data System (ADS)
Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Yi, Dongyun
2017-02-01
The topological structure of many real networks changes with time. Locating the sources of a temporal network is therefore a novel and challenging problem, as the enormous size of many real networks makes it infeasible to observe the state of all nodes. In this paper, we propose an algorithm to solve this problem, named the backward temporal diffusion process. The proposed algorithm calculates the shortest temporal distance to locate the transmission source. We assume that the spreading process can be modeled as a simple diffusion process or by consensus dynamics. To improve the location accuracy, we also adopt four strategies to select which nodes should be observed, by ranking their importance in the temporal network. Our paper proposes a highly accurate method for locating the source in temporal networks and is, to the best of our knowledge, a frontier work in this field. Moreover, our framework has important implications for controlling the transmission of diseases or rumors and for formulating immediate immunization strategies.
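The "shortest temporal distance" notion can be made concrete with a toy sketch (not the paper's exact algorithm): compute earliest-arrival times over time-respecting paths on a time-stamped edge list, then rank candidate sources by how well those distances match the times at which a few observer nodes were seen infected. The edge list and observation times are invented for the example.

```python
from collections import defaultdict

# Illustrative sketch: earliest-arrival "temporal distance" from a
# candidate source, via one pass over edges sorted by timestamp, then a
# consistency score against observed infection times at a few nodes.

def temporal_distances(edges, source, t0=0):
    """Earliest arrival time at each node along time-respecting paths."""
    arrival = defaultdict(lambda: float("inf"))
    arrival[source] = t0
    for t, u, v in sorted(edges):            # edges as (time, u, v)
        if arrival[u] <= t and t < arrival[v]:
            arrival[v] = t
        if arrival[v] <= t and t < arrival[u]:
            arrival[u] = t
    return arrival

edges = [(1, "a", "b"), (2, "b", "c"), (3, "a", "d"), (4, "c", "d")]
observed = {"b": 1, "c": 2, "d": 3}          # times observers were infected

def score(src):
    d = temporal_distances(edges, src)
    return sum(abs(d[n] - t) for n, t in observed.items())

best = min(["a", "b", "c", "d"], key=score)
print(best)
```

The candidate whose temporal distances best reproduce the observed infection times is returned as the inferred source; here that is node "a".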
2012-09-01
State Award Nos. DE-AC52-07NA27344/24.2.3.2 and DOS_SIAA-11-AVC/NMA-1 ABSTRACT The Middle East is a tectonically complex and seismically...active region. The ability to accurately locate earthquakes and other seismic events in this region is complicated by tectonics, the uneven...and seismic source parameters show that this activity comes from tectonic events. This work is informed by continuous or event-based regional
Source Repeatability of Time-Lapse Offset VSP Surveys for Monitoring CO2 Injection
NASA Astrophysics Data System (ADS)
Zhang, Z.; Huang, L.; Rutledge, J. T.; Denli, H.; Zhang, H.; McPherson, B. J.; Grigg, R.
2009-12-01
Time-lapse vertical seismic profiling (VSP) surveys have the potential to remotely track the migration of injected CO2 within a geologic formation. To accurately detect small changes due to CO2 injection, the sources of time-lapse VSP surveys must be located at exactly the same positions. In practice, however, the source locations can vary from one survey to another. Our numerical simulations demonstrate that a variation of a few meters in the VSP source locations can result in significant changes in time-lapse seismograms. To address this source repeatability issue, we apply double-difference tomography to the downgoing waves of time-lapse offset VSP data to invert simultaneously for the source locations and the velocity structure. In collaboration with Resolute Natural Resources, Navajo National Oil and Gas Company, and the Southwest Regional Partnership on Carbon Sequestration, under the support of the U.S. Department of Energy's National Energy Technology Laboratory, one baseline and two repeat offset VSP datasets were acquired in 2007-2009 for monitoring CO2 injection at the Aneth oil field in Utah. A cemented geophone string was used to acquire the data for one zero-offset and seven offset source locations. During the data acquisition, there was some uncertainty in the repeatability of the source locations relative to the baseline survey. Our double-difference tomography results for the Aneth time-lapse VSP data show that the source locations for different surveys are separated by up to a few meters. Accounting for these source location variations during VSP data analysis will improve the reliability of time-lapse VSP monitoring.
Campbell, W.H.
1986-01-01
Electric currents in long pipelines can contribute to corrosion effects that limit the pipe's lifetime. One cause of such electric currents is the geomagnetic field variations that have sources in the Earth's upper atmosphere. Knowledge of the general behavior of the sources allows a prediction of the occurrence times, favorable locations for the pipeline effects, and long-term projections of corrosion contributions. The source spectral characteristics, the Earth's conductivity profile, and a corrosion-frequency dependence limit the period range of the natural field changes that affect the pipe. The corrosion contribution by induced currents from geomagnetic sources should be evaluated for pipelines that are located at high and at equatorial latitudes. At midlatitude locations, the times of these natural current maxima should be avoided for the necessary accurate monitoring of the pipe-to-soil potential. © 1986 D. Reidel Publishing Company.
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
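The Cimmino feasibility algorithm named above can be sketched in a few lines: project the current source-strength vector onto each violated dose-constraint half-space and average the projections. The 2x2 "kernel" and prescribed fluences below are toy stand-ins, not clinical values.

```python
import numpy as np

# Minimal sketch of Cimmino's simultaneous-projection method for the
# feasibility problem A @ x >= b with x >= 0, where A is a (toy) fluence
# kernel mapping source strengths to dose points.

def cimmino(A, b, iters=200):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        projections = []
        for a_i, b_i in zip(A, b):
            resid = b_i - a_i @ x
            # project onto the half-space only if the constraint is violated
            p = x + max(resid, 0.0) * a_i / (a_i @ a_i)
            projections.append(p)
        x = np.mean(projections, axis=0)
        x = np.maximum(x, 0.0)           # source strengths are nonnegative
    return x

A = np.array([[1.0, 0.2],                # toy kernel: fluence per unit
              [0.2, 1.0]])               # source strength at two dose points
b = np.array([1.0, 1.0])                 # prescribed fluence
x = cimmino(A, b)
print(np.all(A @ x >= b - 1e-3))
```

Averaging the per-constraint projections is what makes Cimmino robust to inconsistent constraints and easy to parallelize, which suits the clinical time constraints mentioned above.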
Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.
Tollin, Daniel J; Yin, Tom C T
2003-10-01
The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.
Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan
2016-07-01
This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.
Localizing Submarine Earthquakes by Listening to the Water Reverberations
NASA Astrophysics Data System (ADS)
Castillo, J.; Zhan, Z.; Wu, W.
2017-12-01
Mid-Ocean Ridge (MOR) earthquakes generally occur far from any land-based station and are of moderate magnitude, making them difficult to detect and, in most cases, to locate accurately. This limits our understanding of how MOR normal and transform faults move and the manner in which they slip. Unlike continental events, seismic records from earthquakes occurring beneath the ocean floor show complex reverberations caused by P-wave energy trapped in the water column; these are highly dependent on the source location and on the efficiency with which energy propagates to the near-source surface. These later arrivals are commonly considered a nuisance, as they can interfere with the primary arrivals. In this study, however, we take advantage of the wavefield's high sensitivity to small changes in seafloor topography and the present-day availability of worldwide multi-beam bathymetry to relocate submarine earthquakes by modeling these water-column reverberations in teleseismic signals. Using a three-dimensional hybrid method for modeling body-wave arrivals, we demonstrate that an accurate hypocentral location of a submarine earthquake (<5 km) can be achieved if the structural complexities near the source region are appropriately accounted for. This presents a novel way of studying earthquake source properties and will serve as a means to explore the influence of physical fault structure on the seismic behavior of transform faults.
An FBG acoustic emission source locating system based on PHAT and GA
NASA Astrophysics Data System (ADS)
Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun
2017-09-01
Using acoustic emission locating technology to monitor structural health is important for ensuring the continuous, reliable operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT)-weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, a genetic algorithm (GA) is used to solve the optimization model. Twenty points were tested on a marble plate surface, and the results show that the absolute locating error is within 10 mm, which demonstrates the accuracy of this locating method.
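The PHAT-weighted generalized cross-correlation step can be sketched directly: whiten the cross-spectrum to keep only phase, then pick the lag of the correlation peak. The synthetic white-noise signal and 5-sample delay below are illustrative, not from the FBG experiment.

```python
import numpy as np

# Self-contained sketch of GCC-PHAT time-difference estimation between
# two sensors: the cross-spectrum is normalized to unit magnitude (PHAT
# weighting), so the inverse transform concentrates at the true lag.

def gcc_phat(sig, ref, fs=1.0):
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n)
    REF = np.fft.rfft(ref, n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12               # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(cc) - max_shift
    return shift / fs

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
delay = 5
y = np.concatenate((np.zeros(delay), x))[: len(x)]   # delayed copy
print(gcc_phat(y, x))
```

The PHAT weighting sharpens the correlation peak for broadband signals, which is why it is a common choice for time-difference extraction before solving the nonlinear locating equations.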
Forest research notes, Pacific Northwest Forest Experiment Station, No. 18, March 25, 1936.
William G. Morris; L.A. Isaac; G.S. Meagher; J.E. Lodewick; Axel J.F. Brandstrom; Donald N. Matthews
1936-01-01
Whenever the Tillamook burn and its problems are considered, there is need for accurate information on the location of the area, acreage burned, and the volume of timber killed. The data used by one agency are liable to be inconsistent with those used by another if a convenient source of accurate data is not available to all. In order to meet this need the best...
Towards an accurate real-time locator of infrasonic sources
NASA Astrophysics Data System (ADS)
Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.
2017-11-01
Infrasonic signals propagate from an atmospheric source through media with stochastic, rapidly space-varying conditions. Hence their travel times, their amplitudes at sensor recordings, and even their manifestation in so-called "shadow zones" are random. The traditional least-squares technique for locating infrasonic sources is therefore often ineffective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, based on computing the posterior probability density function (PPDF) of the source location as a convolution of an a priori probability distribution function (APDF) of the propagation-model parameters with a likelihood function (LF) of the observations. The present study is devoted to the further development of BISL, aiming at higher accuracy and stability of the source location results and a lower computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm can be among the most accurate, provided that adequate APDF and LF are used. We then suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly exploited, is the so-called "celerity-range histogram" (CRH). The other is the outcome of previous findings of linear mean travel times for the first four infrasonic phases in overlapping consecutive distance ranges.
This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability distributions of the phase arrival time picks. To illustrate the improvements in both computation time and location accuracy achieved, we compare location results for the new algorithms, previously published BISL-type algorithms and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and statistical experiment using the database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR) recorded during the US summer season at USArray transportable seismic stations when they were near the site between 2006 and 2008.
NASA Technical Reports Server (NTRS)
Hisamoto, Chuck (Inventor); Arzoumanian, Zaven (Inventor); Sheikh, Suneel I. (Inventor)
2015-01-01
A method and system for spacecraft navigation using distant celestial gamma-ray bursts which offer detectable, bright, high-energy events that provide well-defined characteristics conducive to accurate time-alignment among spatially separated spacecraft. Utilizing assemblages of photons from distant gamma-ray bursts, relative range between two spacecraft can be accurately computed along the direction to each burst's source based upon the difference in arrival time of the burst emission at each spacecraft's location. Correlation methods used to time-align the high-energy burst profiles are provided. The spacecraft navigation may be carried out autonomously or in a central control mode of operation.
Kurz, Jochen H
2015-12-01
The task of locating a source in space by measuring travel-time differences of elastic or electromagnetic waves from the source to several sensors arises in many fields. The new concepts of automatic acoustic emission localization presented in this article are based on developments from geodesy and seismology. A detailed description of source location determination in space is given, with a focus on acoustic emission data from concrete specimens. Direct and iterative solvers are compared. A concept based on direct solvers from geodesy, extended by a statistical approach, is described that allows stable source location determination even for partly erroneous onset times. The developed approach is validated with acoustic emission data from a large specimen, leading to travel paths of up to 1 m and therefore to noisy data with errors in the determined onsets. The adaptation of algorithms from geodesy to the localization of sources of elastic waves offers new possibilities for the stability, automation and performance of localization results. Fracture processes can be assessed more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m² open field. PMID:24463431
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roussel-Dupre, R.; Symbalisty, E.; Fox, C.
2009-08-01
The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but the final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).
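The two time-tagging conventions being aligned can be illustrated on a synthetic pulse (the Gaussian waveform and 10% threshold are invented for the example, not taken from the report): a leading-edge tag fires at the first threshold crossing, a peak-picking tag at the sample of maximum amplitude, and the difference between them is the correction the report derives.

```python
import numpy as np

# Illustrative sketch of the two time-tagging conventions: leading-edge
# (first crossing of a fraction of peak amplitude) vs. peak-picking
# (time of maximum amplitude) on a synthetic Gaussian pulse.

def leading_edge_tag(t, w, frac=0.1):
    """First time the waveform exceeds `frac` of its peak amplitude."""
    idx = np.argmax(w >= frac * w.max())
    return t[idx]

def peak_tag(t, w):
    """Time of maximum amplitude."""
    return t[np.argmax(w)]

t = np.linspace(0.0, 10.0, 10001)
w = np.exp(-0.5 * ((t - 6.0) / 0.5) ** 2)   # pulse peaking at t = 6

le = leading_edge_tag(t, w)
pk = peak_tag(t, w)
print(round(pk - le, 2))                    # offset between the two tags
```

For a fixed pulse shape the offset is deterministic, which is what makes an analytic correction between the two conventions possible.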
NASA Astrophysics Data System (ADS)
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method first calculates a preliminary source inversion with a modified genetic algorithm (MGA) that allows crossover with eliminated individuals from the population following selection of the best candidate. It then determines a search zone using Markov chain Monte Carlo (MCMC) sampling with a partial evaluation strategy. Finally, the leak source is accurately localized using a modified guaranteed-convergence particle swarm optimization algorithm that retains several poorly performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by stationary sensors, and the last stage is based on data from movable robots with sensors. The adaptability to measurement error and the effect of the leak source location were analyzed. Test results showed that this three-stage process can localize a leak source to within 1.0 m of the source for different leak source locations, with a measurement error standard deviation smaller than 2.0.
Masthi, N R Ramesh; Madhusudan, M; Puthussery, Yannick P
2015-11-01
Global positioning system (GPS) technology, along with Google Earth, is used to map accurately the distribution of morbidity and mortality and to plan interventions in the community. We used this technology to determine its role in the investigation of a cholera outbreak and to identify the cause of the outbreak. This study was conducted in a village near Bengaluru, Karnataka, in June 2013 during a cholera outbreak. A house-to-house survey was done to identify acute watery diarrhoea cases. A hand-held GPS receiver was used to record the north and east coordinates of the households of cases, and these values were subsequently plotted on a Google Earth map. Water samples were collected from suspected sources for microbiological analysis. A total of 27 cases of acute watery diarrhoea were reported. Fifty per cent of cases were in the age group of 14-44 yr, and one death was reported. GPS technology and Google Earth described the accurate location of the households of cases, and the spot map generated showed clustering of cases around the suspected water sources. The attack rate was 6.92 per cent and the case fatality rate was 3.7 per cent. Water samples collected from suspected sources showed the presence of Vibrio cholerae O1 Ogawa. GPS technology and Google Earth were easy to use and helpful in accurately pinpointing the location of the households of cases, constructing the spot map and following up cases. The outbreak was found to be due to contamination of drinking water sources.
NASA Astrophysics Data System (ADS)
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high source-position estimation accuracy when a transmitter is in the line of sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers to the unknown locations of transmitters. Estimating transmitter locations from the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and the high computational burden of some algorithms. Least-squares and maximum-likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time of arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
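A hedged sketch of the hyperbolic (TDOA) least-squares approach described above: linearize the range-difference equations about a guess and update with the pseudoinverse (Gauss-Newton). The receiver layout, propagation speed, and true source position are invented for the example.

```python
import numpy as np

# Toy 2D hyperbolic position estimation from TDOAs relative to a
# reference receiver, solved by iterative (Gauss-Newton) least squares.

c = 1500.0                               # propagation speed (m/s)
rx = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
src = np.array([30.0, 70.0])             # unknown transmitter (truth)

ranges = np.linalg.norm(rx - src, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c      # TDOAs relative to receiver 0

x = np.array([50.0, 50.0])               # initial guess
for _ in range(20):
    r = np.linalg.norm(rx - x, axis=1)
    pred = (r[1:] - r[0]) / c
    u0 = (x - rx[0]) / r[0]              # unit vector from reference receiver
    U = (x - rx[1:]) / r[1:, None]       # unit vectors from the others
    J = (U - u0) / c                     # Jacobian of the TDOA residuals
    x = x + np.linalg.pinv(J) @ (tdoa - pred)

print(np.allclose(x, src, atol=1e-3))
```

Each TDOA constrains the source to a hyperbola; with enough receivers in good geometry, the linearized iterations converge rapidly to the intersection.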
Room temperature acoustic transducers for high-temperature thermometry
NASA Astrophysics Data System (ADS)
Ripple, D. C.; Murdock, W. E.; Strouse, G. F.; Gillis, K. A.; Moldover, M. R.
2013-09-01
We have successfully conducted highly-accurate, primary acoustic thermometry at 600 K using a sound source and a sound detector located outside the thermostat, at room temperature. We describe the source, the detector, and the ducts that connected them to our cavity resonator. This transducer system preserved the purity of the argon gas, generated small, predictable perturbations to the acoustic resonance frequencies, and can be used well above 600 K.
Hancock, Penelope A.; Rehman, Yasmin; Hall, Ian M.; Edeghere, Obaghe; Danon, Leon; House, Thomas A.; Keeling, Matthew J.
2014-01-01
Prediction and control of the spread of infectious disease in human populations benefits greatly from our growing capacity to quantify human movement behavior. Here we develop a mathematical model for non-transmissible infections contracted from a localized environmental source, informed by a detailed description of movement patterns of the population of Great Britain. The model is applied to outbreaks of Legionnaires' disease, a potentially life-threatening form of pneumonia caused by the bacteria Legionella pneumophila. We use case-report data from three recent outbreaks that have occurred in Great Britain where the source has already been identified by public health agencies. We first demonstrate that the amount of individual-level heterogeneity incorporated in the movement data greatly influences our ability to predict the source location. The most accurate predictions were obtained using reported travel histories to describe movements of infected individuals, but using detailed simulation models to estimate movement patterns offers an effective fast alternative. Secondly, once the source is identified, we show that our model can be used to accurately determine the population likely to have been exposed to the pathogen, and hence predict the residential locations of infected individuals. The results give rise to an effective control strategy that can be implemented rapidly in response to an outbreak. PMID:25211122
NASA Astrophysics Data System (ADS)
Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys
2016-05-01
An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation.
Development of a Flight Instrument for in situ Measurements of Ethane and Methane
NASA Astrophysics Data System (ADS)
Wilkerson, J. P.; Sayres, D. S.; Anderson, J. G.
2015-12-01
Methane emissions data for natural gas and oil fields have high uncertainty. Better quantifying these emissions is crucial to establish an accurate methane budget for the United States. One obstacle is that these emissions often occur in areas near livestock facilities where biogenic methane abounds. Measuring ethane, which has no biogenic source, along with methane can tease these sources apart. However, ethane is typically measured by taking whole-air samples. This tactic has lower spatial resolution than making in situ measurements and requires the measurer to anticipate the location of emission plumes. This leaves unexpected plumes uncharacterized. Using Re-injection Mirror Integrated Cavity Output Spectroscopy (RIM-ICOS), we can measure both methane and ethane in flight, allowing us to establish more accurate fugitive emissions data that can more readily distinguish between different sources of this greenhouse gas.
Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas
2017-02-01
In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels in complex paths due to the interaction with the dissimilar material contents and with the possible geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete influence significantly the way the wave travels, by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors to the source location results. In this paper, a novel source localization method called FastWay is proposed. It accounts, contrary to most available shortest path-based methods, for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally and the results from both evaluation tests show that, in general, FastWay was able to locate sources of acoustic emissions more accurately and reliably than the traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
Liu, X; Zhai, Z
2008-02-01
Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-concept-based inverse modeling method that is able to identify the source location for an instantaneous point source placed in an enclosed environment with known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating good capability of the method in finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source location with limited sensor outputs. This will ensure an effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
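The probability-based inverse idea can be illustrated with a toy forward model: posit a candidate grid of source positions, predict each sensor's reading under a hypothetical steady diffusion model, and score candidates by a Gaussian measurement likelihood. The diffusivity, source strength, sensor layout, and the 1/r forward model below are all invented for illustration; the paper's method drives the probability calculation with actual indoor dispersion simulations.

```python
import math

D = 0.1        # assumed diffusivity of a toy steady-diffusion forward model
Q = 1.0        # assumed source strength
SIGMA = 0.01   # assumed sensor noise standard deviation

sensors = [(0.2, 0.2), (0.8, 0.3), (0.5, 0.9)]   # hypothetical sensor layout


def conc(src, sensor):
    """Toy forward model: steady concentration from a point source."""
    r = max(math.dist(src, sensor), 0.05)   # clip to avoid the singularity
    return Q / (4 * math.pi * D * r)


candidates = [(i / 20, j / 20) for i in range(21) for j in range(21)]
true_src = (0.60, 0.40)
readings = [conc(true_src, s) for s in sensors]  # noise-free for clarity


def log_posterior(src):
    # Flat prior over candidates; Gaussian measurement likelihood.
    return -sum((conc(src, s) - z) ** 2
                for s, z in zip(sensors, readings)) / (2 * SIGMA ** 2)


best = max(candidates, key=log_posterior)
```

The same scoring loop generalizes directly to the paper's scenarios: swapping the forward model for a CFD-based dispersion prediction changes only `conc`, not the probabilistic machinery.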
Schmidt, F.H.
1958-08-12
An improved ion source is described for accurately presetting the size and location of the gas and ion efflux opening, for determining the contour of the electrical field in the vicinity of the arc, and for generally improving the operation of the calutron source. The above features are accomplished by the use of a pair of electrically conductive coplanar plates mounted on opposite sides of the ion exit passage of the source ionization chamber and electrically connected to the source block. The plates are mounted on the block for individual movement transversely of the exit slit and can be secured in place by clamping means.
Dawson, P.; Whilldin, D.; Chouet, B.
2004-01-01
Radial Semblance is applied to broadband seismic network data to provide source locations of Very-Long-Period (VLP) seismic energy in near real time. With an efficient algorithm and adequate network coverage, accurate source locations of VLP energy are derived to quickly locate the shallow magmatic conduit system at Kilauea Volcano, Hawaii. During a restart in magma flow following a brief pause in the current eruption, the shallow magmatic conduit is pressurized, resulting in elastic radiation from various parts of the conduit system. A steeply dipping distribution of VLP hypocenters outlines a region extending from sea level to about 550 m elevation below and just east of the Halemaumau Pit Crater. The distinct hypocenters suggest the shallow plumbing system beneath Halemaumau consists of a complex plexus of sills and dikes. An unconstrained location for a section of the conduit is also observed beneath the region between Kilauea Caldera and Kilauea Iki Crater.
Study of large adaptive arrays for space technology applications
NASA Technical Reports Server (NTRS)
Berkowitz, R. S.; Steinberg, B.; Powers, E.; Lim, T.
1977-01-01
The research in large adaptive antenna arrays for space technology applications is reported. Specifically two tasks were considered. The first was a system design study for accurate determination of the positions and the frequencies of sources radiating from the earth's surface that could be used for the rapid location of people or vehicles in distress. This system design study led to a nonrigid array about 8 km in size with means for locating the array element positions, receiving signals from the earth and determining the source locations and frequencies of the transmitting sources. It is concluded that this system design is feasible, and satisfies the desired objectives. The second task was an experiment to determine the largest earthbound array which could simulate a spaceborne experiment. It was determined that an 800 ft array would perform indistinguishably in both locations and it is estimated that one several times larger also would serve satisfactorily. In addition the power density spectrum of the phase difference fluctuations across a large array was measured. It was found that the spectrum falls off approximately as f to the minus 5/2 power.
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge in the past to accurately locate and quantify the pass-by noise radiated by running vehicles. A system composed of a microphone array is developed in the current work to address this challenge. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of a vehicle running at different speeds are successfully identified by this method.
Pinpointing the North Korea Nuclear tests with body waves scattered by surface topography
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.; Bao, X.; Flinders, A. F.
2017-12-01
On September 3, 2017, North Korea conducted its sixth and by far the largest nuclear test at the Punggye-ri test site. In this work, we apply a novel full-wave location method that combines a non-linear grid-search algorithm with the 3D strain Green's tensor database to locate this event. We use the first arrivals (Pn waves) and their immediate codas, which are likely dominated by waves scattered by the surface topography near the source, to pinpoint the source location. We assess the solution in the search volume using a least-squares misfit between the observed and synthetic waveforms, which are obtained using the collocated-grid finite difference method on curvilinear grids. We calculate the one standard deviation level of the 'best' solution as a posterior error estimation. Our results show that the waveform based location method allows us to obtain accurate solutions with a small number of stations. The solutions are absolute locations as opposed to relative locations based on relative travel times, because topography-scattered waves depend on the geometric relations between the source and the unique topography near the source. Moreover, we use both differential waveforms and traveltimes to locate pairs of the North Korea tests in years 2016 and 2017 to further reduce the effects of inaccuracies in the reference velocity model (CRUST 1.0). Finally, we compare our solutions with those of other studies based on satellite images and relative traveltimes.
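The differential-traveltime ingredient used for the event pairs can be sketched with synthetic data: when two nearby events produce nearly identical waveforms at a station, the lag of the cross-correlation peak measures their differential traveltime directly, without picking phases. The wavelet shape, sample rate, and 45 ms shift below are arbitrary illustration values.

```python
import numpy as np

dt = 0.001  # sample interval (s)
n = 1024
t = np.arange(n) * dt


def pulse(t0):
    """Synthetic arrival: Gaussian wavelet centered at t0."""
    return np.exp(-((t - t0) / 0.01) ** 2)


rec_a = pulse(0.300)   # event A's arrival at this station
rec_b = pulse(0.345)   # event B arrives 45 ms later

# Full cross-correlation; the lag of its peak is the differential traveltime.
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = (np.argmax(xcorr) - (n - 1)) * dt
```

Such waveform-derived differential times cancel much of the common path through an imperfect reference velocity model, which is why event-pair relocation is less sensitive to model errors.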
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E. S.
2012-12-01
Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time, and back-propagating them from the receiver location individually into the sub-surface as a new source time function. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. 
For our analysis, we use the velocity model of Thurber et al. (2006) interpolated to a grid spacing of 50 m. Such grid spacing corresponds to frequencies of up to 8 Hz, which is suitable for calculating the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault, as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of signal-to-noise ratio, number of stations used, and the uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show location accuracy to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model, and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those of Sumy and others.
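A one-dimensional, constant-velocity caricature of the time-reversal step may help fix ideas: each receiver's record is reversed in time and re-delayed toward every candidate point, and the candidate where the back-propagated traces align (maximum stacked amplitude) marks the source. Real applications back-propagate through a 3D velocity model with a finite-difference solver and correlate curl fields; here pure delays in a homogeneous 1D medium stand in for the wavefield computation, and every number is invented.

```python
import numpy as np

c = 1000.0      # assumed constant speed (m/s)
dt = 1e-4       # sample interval (s)
n = 4096
t = np.arange(n) * dt

src_x = 300.0                 # true (unknown) source position (m)
receivers = [0.0, 1000.0]     # receiver positions (m)

wavelet = np.exp(-((t - 0.05) / 0.005) ** 2)


def delay_samples(a, b):
    """Traveltime between positions a and b, in whole samples."""
    return int(round(abs(a - b) / c / dt))


# Forward step: each receiver records the wavelet delayed by its traveltime.
records = [np.roll(wavelet, delay_samples(r, src_x)) for r in receivers]

# Time-reversal step: reverse each record, re-delay it toward every candidate
# point, and stack; the traces align (and the stack peaks) only at the source.
candidates = np.arange(0.0, 1000.1, 10.0)
focus = [np.max(np.abs(sum(np.roll(rec[::-1], delay_samples(r, x))
                           for rec, r in zip(records, receivers))))
         for x in candidates]

est_x = float(candidates[int(np.argmax(focus))])
```

With only two receivers the focus in 1D is already unique; in 2D or 3D, more stations play the role of constraining the additional coordinates.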
Abbott, M.; Einerson, J.; Schuster, P.; Susong, D.; Taylor, Howard E.; ,
2004-01-01
Snow sampling and analysis methods that produce accurate and ultra-low-level measurements of trace element and common ion concentrations in southeastern Idaho snow were developed. Snow samples were collected over two winters to assess trace element and common ion concentrations in air pollutant fallout across southeastern Idaho. The source-area apportionment of fallout concentrations measured at downwind locations was investigated using pattern recognition and multivariate statistical techniques. Results show a high level of contribution from phosphate processing facilities located outside Pocatello in the southern portion of the Eastern Snake River Plain, and no obvious source area profiles other than at Pocatello.
Turbulence spectra in the noise source regions of the flow around complex surfaces
NASA Technical Reports Server (NTRS)
Olsen, W. A.; Boldman, D. R.
1983-01-01
The complex turbulent flow around three complex surfaces was measured in detail with a hot wire. The measured data include extensive spatial surveys of the mean velocity and turbulence intensity and measurements of the turbulence spectra and scale length at many locations. The publication of the turbulence data is completed by reporting a summary of the turbulence spectra that were measured within the noise source locations of the flow. The results suggest some useful simplifications in modeling the very complex turbulent flow around complex surfaces for aeroacoustic predictive models. The turbulence spectra also show that noise data from scale models of moderate size can be accurately scaled up to full size.
Limitations of Phased Array Beamforming in Open Rotor Noise Source Imaging
NASA Technical Reports Server (NTRS)
Horvath, Csaba; Envia, Edmane; Podboy, Gary G.
2013-01-01
Phased array beamforming results of the F31/A31 historical baseline counter-rotating open rotor blade set were investigated for measurement data taken on the NASA Counter-Rotating Open Rotor Propulsion Rig in the 9- by 15-Foot Low-Speed Wind Tunnel of NASA Glenn Research Center as well as data produced using the LINPROP open rotor tone noise code. The planar microphone array was positioned broadside and parallel to the axis of the open rotor, roughly 2.3 rotor diameters away. The results provide insight as to why the apparent noise sources of the blade passing frequency tones and interaction tones appear at their nominal Mach radii instead of at the actual noise sources, even if those locations are not on the blades. Contour maps corresponding to the sound fields produced by the radiating sound waves, taken from the simulations, are used to illustrate how the interaction patterns of circumferential spinning modes of rotating coherent noise sources interact with the phased array, often giving misleading results, as the apparent sources do not always show where the actual noise sources are located. This suggests that a more sophisticated source model would be required to accurately locate the sources of each tone. The results of this study also have implications with regard to the shielding of open rotor sources by airframe empennages.
Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi
2018-01-01
To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.
2010-09-01
locations for the two events, we made very precise arrival time measurements at 35 stations that recorded both explosions with good signal to noise... what we believe to be very reasonable and accurate locations for these two explosions. The corresponding source depths can not be reliably...of the 2009 and 2006 events as explosions based on high-frequency Pn/Lg ratios measured at regional stations are unambiguous; however, results for
Laan, Nick; de Bruin, Karla G.; Slenter, Denise; Wilhelm, Julie; Jermy, Mark; Bonn, Daniel
2015-01-01
Bloodstain Pattern Analysis is a forensic discipline in which, among others, the position of victims can be determined at crime scenes on which blood has been shed. To determine where the blood source was investigators use a straight-line approximation for the trajectory, ignoring effects of gravity and drag and thus overestimating the height of the source. We determined how accurately the location of the origin can be estimated when including gravity and drag into the trajectory reconstruction. We created eight bloodstain patterns at one meter distance from the wall. The origin’s location was determined for each pattern with: the straight-line approximation, our method including gravity, and our method including both gravity and drag. The latter two methods require the volume and impact velocity of each bloodstain, which we are able to determine with a 3D scanner and advanced fluid dynamics, respectively. We conclude that by including gravity and drag in the trajectory calculation, the origin’s location can be determined roughly four times more accurately than with the straight-line approximation. Our study enables investigators to determine if the victim was sitting or standing, or it might be possible to connect wounds on the body to specific patterns, which is important for crime scene reconstruction. PMID:26099070
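The size of the straight-line bias is easy to see in a minimal ballistic sketch (gravity only; the authors additionally model drag, which strengthens the effect). The launch geometry and all numbers below are invented for illustration: a droplet is flown forward to a wall, then its origin height is reconstructed both by straight-line back-projection along the impact direction and by integrating the parabolic path backwards.

```python
G = 9.81                         # gravitational acceleration (m/s^2)
h0, v0, wall_x = 1.5, 5.0, 1.0   # source height (m), launch speed (m/s), wall (m)

# Forward simulation (semi-implicit Euler) of a horizontally launched droplet.
dt = 1e-5
x, y, vy = 0.0, h0, 0.0
while x < wall_x:
    vy -= G * dt
    x += v0 * dt
    y += vy * dt
impact_y, impact_vx, impact_vy = y, v0, vy

# Straight-line reconstruction: extrapolate back along the impact direction.
h_straight = impact_y + (-impact_vy / impact_vx) * wall_x

# Gravity-aware reconstruction: run the ballistic equations backwards in time.
t_back = wall_x / impact_vx
h_gravity = impact_y - impact_vy * t_back - 0.5 * G * t_back ** 2
```

For these values the straight-line estimate overshoots the true 1.5 m source height by roughly 0.2 m over a single metre of flight, which is the kind of systematic error the gravity-and-drag reconstruction removes.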
The relationship of the concentration of air pollutants to wind direction has been determined by nonparametric regression using a Gaussian kernel. The results are smooth curves with error bars that allow for the accurate determination of the wind direction where the concentrat...
LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
NASA Astrophysics Data System (ADS)
Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin
2017-12-01
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. 
In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
High-precision source location of the 1978 November 19 gamma-ray burst
NASA Technical Reports Server (NTRS)
Cline, T. L.; Desai, U. D.; Teegarden, B. J.; Pizzichini, G.; Evans, W. D.; Klebesadel, R. W.; Laros, J. G.; Barat, C.; Hurley, K.; Niel, M.
1981-01-01
The celestial source location of the November 19, 1978, intense gamma ray burst has been determined from data obtained with the interplanetary gamma-ray sensor network by means of long-baseline wave front timing instruments. Each of the instruments was designed for studying events with observable spectra of approximately greater than 100 keV, and each provides accurate event profile timing in the several millisecond range. The data analysis includes the following: the triangulated region is centered at (gamma, delta) 1950 = (1h16m32s, -28 deg 53 arcmin), at -84 deg galactic latitude, where the star density is very low and the obscuration negligible. The gamma-ray burst source region, consistent with that of a highly polarized radio source described by Hjellming and Ewald (1981), may assist in the source modeling and may facilitate the understanding of the source process. A marginally identifiable X-ray source was also found by an Einstein Observatory investigation. It is concluded that the burst contains redshifted positron annihilation and nuclear first-excited iron lines, which is consistent with a neutron star origin.
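The long-baseline wavefront-timing geometry reduces to one formula: a plane wave crossing two spacecraft separated by baseline d with arrival-time difference Δt must come from a cone (an annulus on the sky) at angle θ to the baseline, with cos θ = cΔt/d; intersecting the annuli from several spacecraft pairs yields the small triangulated region. A numerical illustration with invented values:

```python
import math

C = 299_792_458.0      # speed of light (m/s)
baseline = 2.0e11      # assumed spacecraft separation (m), roughly interplanetary

# Synthetic truth: plane wave arriving 28.9 degrees off the baseline axis.
theta_true_deg = 28.9
delta_t = baseline * math.cos(math.radians(theta_true_deg)) / C

# Wavefront timing: recover the annulus half-angle from the measured delay.
theta_est_deg = math.degrees(math.acos(C * delta_t / baseline))
```

Because the annulus width scales with the timing error divided by the baseline, millisecond-level profile timing across interplanetary baselines is what makes arcminute-scale source boxes possible.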
NASA Astrophysics Data System (ADS)
Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu
2017-05-01
In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related to the shape of the interference signal envelope (ISE), which is mainly decided by the spectral distribution of the illumination source. For a broadband light with a Gaussian spectral distribution, the corresponding shape of the ISE reveals a symmetric distribution, so the accurate ZOPD position can be found easily. However, if the spectral distribution of the source is irregular, the shape of the ISE becomes asymmetric or a complex multi-peak distribution, and WLSI cannot work well with the ZOPD position locating method. To address this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile information is obtained from the relative displacement of the interference signal between different pixels instead of from the conventional ZOPD position locating method. Because all spectral information of the interference signal (envelope and phase) is utilized, the proposed method not only offers high accuracy but can also achieve accurate profile measurement in cases where the shape of the ISE is irregular and the ZOPD position locating method fails. That is to say, the proposed method can effectively eliminate the influence of the source spectrum.
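The TDE step can be sketched as classic cross-correlation time-delay estimation with sub-sample refinement. The toy interferogram below (a Gaussian envelope times a cosine carrier, all parameters invented) is displaced by 4.6 samples between two "pixels"; the correlation peak plus a parabolic interpolation recovers the relative displacement without ever locating a ZOPD position.

```python
import numpy as np

n = 512
k = np.arange(n)


def fringe(shift):
    """Toy white-light interferogram: Gaussian envelope times a carrier."""
    envelope = np.exp(-((k - 256 - shift) / 40.0) ** 2)
    return envelope * np.cos(2 * np.pi * (k - shift) / 16.0)


a = fringe(0.0)
b = fringe(4.6)          # same signal displaced by 4.6 samples

xc = np.correlate(b, a, mode="full")
p = int(np.argmax(xc))
# Parabolic interpolation around the peak gives a sub-sample delay estimate.
delta = 0.5 * (xc[p - 1] - xc[p + 1]) / (xc[p - 1] - 2 * xc[p] + xc[p + 1])
delay = p + delta - (n - 1)
```

Because the whole fringe pattern (envelope and carrier phase alike) contributes to the correlation, the estimate does not care whether the envelope is symmetric, which is the essence of the method's robustness to irregular source spectra.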
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shchory, Tal; Schifter, Dan; Lichtman, Rinat
Purpose: In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. Methods and Materials: The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. Results: The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. Conclusions: This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.
Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W
2010-11-15
In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy. Copyright © 2010 Elsevier Inc. All rights reserved.
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
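The Monte Carlo approach described above (perturb the arrival times with the assumed 50 ns rms timing error, re-run the retrieval, and measure the spread of the recovered locations) can be sketched as follows. The 2-D geometry, station coordinates, and simple least-squares retrieval are illustrative assumptions; the actual LMA retrieval is three-dimensional and more sophisticated:

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # propagation speed (m/s)
rng = np.random.default_rng(0)

# Hypothetical 2-D station layout (m); the real LMA geometry differs.
stations = np.array([[0.0, 0.0], [20e3, 0.0], [0.0, 20e3],
                     [20e3, 20e3], [10e3, 30e3]])

def arrival_times(src, t0=0.0):
    return t0 + np.linalg.norm(stations - src, axis=1) / C

def retrieve(times):
    """Least-squares retrieval of (x, y, c*t0), working in metres so
    that all parameters are comparably scaled."""
    d = C * np.asarray(times)
    def resid(p):
        return np.linalg.norm(stations - p[:2], axis=1) + p[2] - d
    return least_squares(resid, x0=[10e3, 10e3, 0.0]).x[:2]

# Monte Carlo trials: 50 ns rms Gaussian timing noise per station.
true_src = np.array([5e3, 12e3])
errs = [np.linalg.norm(retrieve(arrival_times(true_src)
                                + rng.normal(0.0, 50e-9, 5)) - true_src)
        for _ in range(200)]
rms_err = np.sqrt(np.mean(np.square(errs)))
print(f"rms location error: {rms_err:.1f} m")
```

Repeating this over a grid of source positions yields the kind of spatial error map the paper reports; 50 ns of timing error corresponds to roughly 15 m of range error per station before geometric dilution.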
Lunar seismicity and tectonics
NASA Technical Reports Server (NTRS)
Lammlein, D. R.
1977-01-01
Results are presented for an analysis of all moonquake data obtained by the Apollo seismic stations during the period from November 1969 to May 1974 and a preliminary analysis of critical data obtained in the interval from May 1974 to May 1975. More accurate locations are found for previously located moonquakes, and additional sources are located. Consideration is given to the sources of natural seismic signals, lunar seismic activity, moonquake periodicities, tidal periodicities in moonquake activity, hypocentral locations and occurrence characteristics of deep and shallow moonquakes, lunar tidal control over moonquakes, lunar tectonism, the locations of moonquake belts, and the dynamics of the lunar interior. It is concluded that: (1) moonquakes are distributed in several major belts of global extent that coincide with regions of the youngest and most intense volcanic and tectonic activity; (2) lunar tides control both the small quakes occurring at great depth and the larger quakes occurring near the surface; (3) the moon has a much thicker lithosphere than earth; (4) a single tectonic mechanism may account for all lunar seismic activity; and (5) lunar tidal stresses are an efficient triggering mechanism for moonquakes.
A source-attractor approach to network detection of radiation sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Barry, M. L.; Grieme, M.
Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
NASA Astrophysics Data System (ADS)
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
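The spectrum-derivation step described above (adjusting energy-bin weights until a superposition of mono-energetic depth-dose curves matches the measured PDD) can be sketched with SciPy's Levenberg-Marquardt solver. The exponential beam model, energy bins, and attenuation values below are toy assumptions for illustration, not clinical or scanner data:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy energy bins (keV) and water attenuation coefficients (1/cm);
# illustrative values only, not measured scanner data.
energies = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
mu = np.array([0.27, 0.21, 0.18, 0.17, 0.16])
depths = np.linspace(0.0, 20.0, 41)  # cm

def pdd(weights):
    """Central-axis PDD of a weighted mix of mono-energetic beams,
    each modelled as simple exponential attenuation in water."""
    w = np.abs(weights)
    dose = (w[:, None] * np.exp(-mu[:, None] * depths[None, :])).sum(axis=0)
    return dose / dose[0]  # normalise to surface dose

# A "measured" PDD generated from a hidden spectrum, plus noise.
true_w = np.array([0.10, 0.30, 0.35, 0.20, 0.05])
rng = np.random.default_rng(1)
measured = pdd(true_w) + rng.normal(0.0, 1e-3, depths.size)

# Levenberg-Marquardt fit of the bin weights to the measured curve.
fit = least_squares(lambda w: pdd(w) - measured,
                    x0=np.full(5, 0.2), method="lm")
w_hat = np.abs(fit.x) / np.abs(fit.x).sum()
print(np.round(w_hat, 2))
```

Because neighbouring exponentials are highly correlated, the recovered weights are not unique even when the fitted curve matches the measurement closely; the paper's use of additional measurements (cone output factors, in-air profiles) constrains the remaining degrees of freedom.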
NASA Astrophysics Data System (ADS)
Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong
2011-03-01
Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) has found increasingly wide application in tumor detection and in the evaluation of pharmacodynamics, toxicity, and pharmacokinetics because of its noninvasive molecular- and cellular-level detection ability, high sensitivity, and low cost in comparison with other imaging technologies. However, BLI cannot present the accurate location and intensity of inner bioluminescence sources, such as those in the bone, liver, or lung. Bioluminescence tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Considering the deficiency of two-dimensional imaging modalities, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measured data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm was used for localizing and quantifying multiple bioluminescence sources. Optical and anatomical information of the tissues is incorporated as a priori knowledge in this method, which can reduce the ill-posedness of BLT. The data were acquired by the dual-modality BLT and micro-CT prototype system that we developed. Through temperature control and absolute intensity calibration, a relatively accurate intensity can be calculated. The location of the OC accumulation was reconstructed, which was coherent with the principle of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.
Explosion localization and characterization via infrasound using numerical modeling
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.
2017-12-01
Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, both of which feature complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g., semblance) and suggest our method is preferred, as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015.
Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
Self characterization of a coded aperture array for neutron source imaging
NASA Astrophysics Data System (ADS)
Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.
2014-12-01
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.
Nonexposure Accurate Location K-Anonymity Algorithm in LBS
2014-01-01
This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinates and replaces them with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, do not require all the users to report their locations all the time, and can generate smaller ASRs. PMID:24605060
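The grid-ID idea, building an anonymous spatial region from reported cell IDs rather than exact coordinates, can be sketched as follows. The rectangular region-growing rule is an assumed simplification standing in for the paper's algorithms:

```python
from collections import Counter

def cloak(cell_counts, user_cell, k, grid_w, grid_h):
    """Grow a rectangle of grid cells around the user's cell until it
    covers at least k reported users; only cell IDs are ever used, so
    no party learns anyone's exact coordinates."""
    cx, cy = user_cell
    r = 0
    while True:
        x0, x1 = max(0, cx - r), min(grid_w - 1, cx + r)
        y0, y1 = max(0, cy - r), min(grid_h - 1, cy + r)
        covered = sum(cell_counts.get((x, y), 0)
                      for x in range(x0, x1 + 1)
                      for y in range(y0, y1 + 1))
        if covered >= k:
            return (x0, y0, x1, y1)
        if x0 == 0 and y0 == 0 and x1 == grid_w - 1 and y1 == grid_h - 1:
            return None  # grid exhausted before reaching k users
        r += 1

# Users report only the ID of the cell they occupy.
reports = [(0, 0), (0, 1), (1, 1), (3, 3), (3, 3), (4, 2)]
counts = Counter(reports)
print(cloak(counts, user_cell=(3, 3), k=4, grid_w=5, grid_h=5))
# prints (1, 1, 4, 4): the smallest centred rectangle covering 4 users
```

The returned tuple of cell indices is the ASR; the cloaked user is indistinguishable from the other users whose cells fall inside it.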
Weighted small subdomain filtering technology
NASA Astrophysics Data System (ADS)
Tai, Zhenhua; Zhang, Fengxu; Zhang, Fengqin; Zhang, Xingzhou; Hao, Mengcheng
2017-09-01
A high-resolution method for defining the horizontal edges of gravity sources is presented that improves on three-directional small subdomain filtering (TDSSF). The proposed method is weighted small subdomain filtering (WSSF). The WSSF uses a numerical difference instead of the phase conversion in the TDSSF to reduce computational complexity. To make the WSSF less sensitive to noise, the numerical difference is combined with an averaging algorithm. Unlike the TDSSF, the WSSF uses a weighted sum to integrate the numerical difference results along four directions into one contour, making its interpretation more convenient and accurate. The locations of tightened gradient belts are used to define the edges of sources in the WSSF result. The proposed method is tested on synthetic data. The test results show that the WSSF delineates the horizontal edges of sources more clearly and correctly, even when the sources interfere with one another and the data are corrupted with random noise. Finally, the WSSF and two other known methods are applied to real data. The edges detected by the WSSF are sharper and more accurate.
Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim
2013-01-01
In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive (MAR) models fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC becomes a less accurate measure of causal interactions; in that case, the MAR model coefficients alone contain meaningful causality information. The proposed method overcomes the problems of model non-robustness and long computation times encountered in causality analysis by existing methods, which first project MEG sensor time-series onto a large number of brain locations and then build the MAR model on this large number of source-level time-series. Instead, through this work, we demonstrate that by building the MAR model at the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that with this methodology entire brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc. PMID:22328419
Microseismic response characteristics modeling and locating of underground water supply pipe leak
NASA Astrophysics Data System (ADS)
Wang, J.; Liu, J.
2015-12-01
In traditional methods of pipeline leak location, geophones must be located on the pipe wall. If the exact location of the pipeline is unknown, leaks cannot be identified accurately. To solve this problem, taking into account the characteristics of pipeline leaks, we propose a continuous random seismic source model and construct geological models to investigate the proposed method for locating underground pipeline leaks. Based on two-dimensional (2D) viscoacoustic equations and a staggered-grid finite-difference (FD) algorithm, the microseismic wave field generated by a leaking pipe is modeled. Cross-correlation analysis and the simulated annealing (SA) algorithm were utilized to obtain the time differences and the leak location. We also analyze and discuss the effects of the number of recorded traces, the survey layout, and the offset and interval of the traces on the accuracy of the estimated location. The preliminary results of the simulation and a field data experiment indicate that (1) a continuous random source can realistically represent the leak microseismic wave field in a simulation using 2D viscoacoustic equations and a staggered-grid FD algorithm; (2) the cross-correlation method is effective for calculating the time difference of the direct wave relative to the reference trace, although outside the refraction blind zone the accuracy of the time difference is reduced by the effects of the refracted wave; and (3) the acquisition of time differences based on microseismic theory and the SA algorithm has great potential for locating leaks in underground pipelines from an array located on the ground surface. Keywords: viscoacoustic finite-difference simulation; continuous random source; simulated annealing algorithm; pipeline leak location
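The location step, minimizing the misfit between observed and predicted direct-wave time differences with simulated annealing, can be sketched as follows. The geophone layout, wave speed, and noise level are assumed values, and SciPy's `dual_annealing` stands in for the paper's SA implementation:

```python
import numpy as np
from scipy.optimize import dual_annealing

V = 500.0  # assumed near-surface wave speed (m/s)
geophones = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0],
                      [30.0, 0.0], [40.0, 0.0]])
leak = np.array([18.0, 6.0])  # hidden leak position (m)

# Direct-wave delays relative to the reference trace (geophone 0),
# as cross-correlation of the recorded traces would supply them.
def delays(p):
    d = np.linalg.norm(geophones - p, axis=1)
    return (d - d[0]) / V

rng = np.random.default_rng(2)
obs = delays(leak) + rng.normal(0.0, 1e-4, len(geophones))

# Simulated annealing over the search area minimises the misfit
# between predicted and observed time differences.
sol = dual_annealing(lambda p: np.sum((delays(p) - obs) ** 2),
                     bounds=[(0.0, 40.0), (0.1, 20.0)], seed=3)
print(np.round(sol.x, 1))
```

With a collinear surface array the solution is mirror-symmetric about the line of geophones, so the search bounds restrict the second coordinate to one side; the annealing step matters when the misfit surface has local minima, e.g. from refracted-wave contamination of the delays.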
Automated seismic waveform location using Multichannel Coherency Migration (MCM)-I. Theory
NASA Astrophysics Data System (ADS)
Shi, Peidong; Angus, Doug; Rost, Sebastian; Nowacki, Andy; Yuan, Sanyi
2018-03-01
With the proliferation of dense seismic networks sampling the full seismic wavefield, recorded seismic data volumes are getting bigger and automated analysis tools to locate seismic events are essential. Here, we propose a novel Multichannel Coherency Migration (MCM) method to locate earthquakes in continuous seismic data and reveal the location and origin time of seismic events directly from recorded waveforms. By continuously calculating the coherency between waveforms from different receiver pairs, MCM greatly expands the available information which can be used for event location. MCM does not require phase picking or phase identification, which allows fully automated waveform analysis. By migrating the coherency between waveforms, MCM leads to improved source energy focusing. We have tested and compared MCM to other migration-based methods in noise-free and noisy synthetic data. The tests and analysis show that MCM is noise resistant and can achieve more accurate results compared with other migration-based methods. MCM is able to suppress strong interference from other seismic sources occurring at a similar time and location. It can be used with arbitrary 3D velocity models and is able to obtain reasonable location results with smooth but inaccurate velocity models. MCM exhibits excellent location performance and can be easily parallelized giving it large potential to be developed as a real-time location method for very large datasets.
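The central MCM operation, stacking pairwise waveform coherency over a grid of candidate source positions without any phase picking, can be sketched on synthetic data. A homogeneous velocity, a Ricker wavelet, and a fixed origin time are simplifying assumptions; the full method also scans origin time and supports 3D velocity models:

```python
import numpy as np

V = 3000.0   # assumed homogeneous velocity (m/s)
dt = 0.002   # sampling interval (s)
receivers = np.array([[x, 0.0] for x in range(0, 5000, 1000)], float)
t = np.arange(1500) * dt

def ricker(t, t0, f=15.0):
    a = np.pi * f * (t - t0)
    return (1 - 2 * a**2) * np.exp(-a**2)

# Synthetic records of a hidden source; no phase picking is used.
src, t0 = np.array([2200.0, 1800.0]), 0.5
rng = np.random.default_rng(4)
traces = np.array([ricker(t, t0 + np.linalg.norm(r - src) / V)
                   for r in receivers])
traces += rng.normal(0.0, 0.1, traces.shape)

def coherency(p, origin, win=50):
    """Mean pairwise correlation of travel-time-aligned trace windows."""
    wins = []
    for tr, r in zip(traces, receivers):
        i0 = int((origin + np.linalg.norm(r - p) / V) / dt) - win // 2
        w = tr[i0:i0 + win]
        wins.append((w - w.mean()) / (w.std() + 1e-12))
    pairs = [(i, j) for i in range(len(wins))
                    for j in range(i + 1, len(wins))]
    return np.mean([np.mean(wins[i] * wins[j]) for i, j in pairs])

# Migrate coherency over candidate location nodes (origin time fixed
# at its true value here, for brevity).
nodes = [(x, z) for x in np.arange(0.0, 5001.0, 200.0)
                for z in np.arange(200.0, 3001.0, 200.0)]
best = max(nodes, key=lambda n: coherency(np.array(n), t0))
print(best)  # node of peak stacked coherency, near the true source
```

Only at the correct node do the travel-time-shifted windows contain the same waveform on every receiver pair, which is why coherency stacking focuses source energy more sharply than stacking amplitudes alone.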
NASA Astrophysics Data System (ADS)
DeGrandpre, K.; Pesicek, J. D.; Lu, Z.
2017-12-01
During the summer of 2014 and the early spring of 2015, two notable increases in seismic activity at Semisopochnoi Island in the western Aleutian Islands were recorded on AVO seismometers on Semisopochnoi and neighboring islands. These seismic swarms did not lead to an eruption. This study employs interferometric synthetic aperture radar (InSAR) techniques using TerraSAR-X images, in conjunction with more accurate relocation of the recorded seismic events through simultaneous inversion of event travel times and a three-dimensional velocity model using tomoDD. The InSAR images exhibit surprising coherence and an island-wide spatial distribution of inflation, which is then used in Mogi, Okada, spheroid, and ellipsoid source models to define the three-dimensional location and volume change required for a source at the volcano to produce the observed surface deformation. The tomoDD relocations provide a more accurate and realistic three-dimensional velocity model as well as a tighter clustering of events for both swarms, which clearly outline a linear seismic void within the larger group of shallow (<10 km) seismicity. The source models are fit to this void, and pressure estimates from geochemical analysis are used to verify the storage depth of magmas at Semisopochnoi. Comparisons of the calculated source cavity, magma injection, and surface deformation volumes are made in order to assess the reliability of the various modelling estimates. Incorporating geochemical and seismic data to constrain surface deformation source inversions provides an interdisciplinary approach that can be used to make more accurate interpretations of dynamic observations.
Sen, Novonil; Kundu, Tribikram
2018-07-01
Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in the recent years and has a scope of further improvement. Most of the existing techniques for anisotropic structures either assume a straight line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by the development of a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates. Copyright © 2018 Elsevier B.V. All rights reserved.
Classification of event location using matched filters via on-floor accelerometers
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Malladi, V. V. N. Sriram; Alajlouni, Sa'ed; Tarazaga, Pablo A.
2017-04-01
Recent years have shown prolific advancements in smart infrastructure, allowing buildings of the modern world to interact with their occupants. One of the sought-after attributes of smart buildings is the ability to provide unobtrusive, indoor localization of occupants. The ability to locate occupants indoors can provide a broad range of benefits in areas such as security, emergency response, and resource management. Recent research has shown promising results in occupant localization within buildings, although there is still significant room for improvement. This study presents a passive, small-scale localization system using accelerometers placed around the edges of a small area in an active building environment. The area is discretized into a grid of small squares, and vibration measurements are processed using a pattern-matching approach that estimates the location of the source. Vibration measurements are produced with ball-drops, hammer-strikes, and footsteps as the sources of floor excitation. The developed approach uses matched filters based on a reference data set, and the location is classified using a nearest-neighbor search. This approach detects the location of impact-like sources, i.e., the ball-drops and hammer-strikes, with 100% accuracy. However, this accuracy falls to 56% for footsteps, with the average localization results being within 0.6 m (α = 0.05) of the true source location. While requiring a reference data set can make this method difficult to implement on a large scale, it may be used to provide accurate localization in areas where training data is readily obtainable. This exploratory work seeks to examine the feasibility of the matched filter and nearest-neighbor search approach for footstep and event localization in a small, instrumented area within a multi-story building.
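The matched-filter-plus-nearest-neighbor classification step can be sketched as follows. The synthetic location-dependent impulse responses are an assumption standing in for the measured reference data set of ball-drops and hammer-strikes:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical reference set: one template per grid square, with a
# location-dependent ring-down standing in for real floor responses.
def impulse_response(loc_id, n=200):
    t = np.arange(n)
    f = 0.05 + 0.01 * loc_id  # cycles/sample, varies with location
    return np.exp(-t / 60.0) * np.sin(2 * np.pi * f * t)

templates = {loc: impulse_response(loc) for loc in range(9)}

def classify(event):
    """Score the event against each location's matched filter; the
    nearest neighbour (highest normalized peak correlation) wins."""
    scores = {}
    for loc, h in templates.items():
        c = np.correlate(event, h, mode="full")
        scores[loc] = np.max(np.abs(c)) / (np.linalg.norm(event)
                                           * np.linalg.norm(h))
    return max(scores, key=scores.get)

# A noisy impact recorded at grid square 4 is recovered correctly.
event = impulse_response(4) + rng.normal(0.0, 0.2, 200)
print(classify(event))  # prints 4
```

Impact-like sources classify reliably because each impact excites a repeatable response; footsteps vary in force and contact time, which is consistent with the lower footstep accuracy the study reports.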
Short-Period Surface Wave Based Seismic Event Relocation
NASA Astrophysics Data System (ADS)
White-Gaynor, A.; Cleveland, M.; Nyblade, A.; Kintner, J. A.; Homman, K.; Ammon, C. J.
2017-12-01
Accurate and precise seismic event locations are essential for a broad range of geophysical investigations. Superior location accuracy generally requires calibration with ground truth information, but superb relative location precision is often achievable independently. In explosion seismology, low-yield explosion monitoring relies on near-source observations, which results in a limited number of observations and challenges our ability to estimate locations at all. Incorporating more distant observations means relying on data with lower signal-to-noise ratios. For small, shallow events, the short-period (roughly 1/2 to 8 s period) fundamental-mode and higher-mode Rayleigh waves (including Rg) are often the most stable and visible portion of the waveform at local distances. Cleveland and Ammon [2013] have shown that teleseismic surface waves are valuable observations for constructing precise, relative event relocations. We extend the teleseismic surface wave relocation method and apply it at near-source distances using Rg observations from the Bighorn Arch Seismic Experiment (BASE) and EarthScope USArray Transportable Array (TA) seismic stations. Specifically, we present relocation results using short-period fundamental- and higher-mode Rayleigh waves (Rg) in a double-difference relative event relocation for 45 delay-fired mine blasts and 21 borehole chemical explosions. Our preliminary efforts explore the sensitivity of the short-period surface waves to local geologic structure, source depth, explosion magnitude (yield), and explosion characteristics (single-shot vs. distributed source, etc.). Our results show that Rg and the first few higher-mode Rayleigh wave observations can be used to constrain the relative locations of shallow low-yield events.
Ginter, S
2000-07-01
Ultrasound (US) thermotherapy is used to treat tumours located deep in human tissue by heat. It is characterized by the application of high-intensity focused ultrasound (HIFU), high local temperatures of about 90 degrees C, and short treatment times of a few seconds. Dosage of the therapy remains a problem. To get it under control, one has to know the heat source, i.e., the amount of absorbed US power, which shows nonlinear influences. Therefore, accurate simulations are essential. In this paper, an improved simulation model is introduced that enables accurate investigation of US thermotherapy. It combines nonlinear US propagation effects, which lead to the generation of higher harmonics, with the broadband frequency power-law absorption typical of soft tissue. Only the combination of both provides a reliable calculation of the generated heat. Simulations show the influence of nonlinearities and broadband damping on the absorbed US power density distribution for different source signals.
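Why the harmonics matter for the heat source can be illustrated with a simple plane-wave estimate: absorbed power density q = Σn 2 α(n f0) In with power-law absorption α(f) = a0 f^y. The tissue parameters and focal harmonic amplitudes below are illustrative assumptions, not values from the paper:

```python
# Plane-wave estimate of heat deposition with power-law absorption.
rho, c = 1000.0, 1540.0   # tissue-like density (kg/m^3), sound speed (m/s)
a0, y = 5.0, 1.1          # absorption a0 * f^y, a0 in Np/m/MHz^y (assumed)
f0 = 1.0                  # fundamental frequency (MHz)

def alpha(f_mhz):
    return a0 * f_mhz ** y

def heat_rate(p_harmonics):
    """Absorbed power density q = sum_n 2*alpha(n*f0)*I_n for a plane
    wave, with harmonic intensity I_n = p_n^2 / (2*rho*c)."""
    q = 0.0
    for n, p in enumerate(p_harmonics, start=1):
        q += 2.0 * alpha(n * f0) * p**2 / (2.0 * rho * c)
    return q  # W/m^3

# Assumed focal pressure harmonics (Pa) from nonlinear steepening.
p = [3e6, 1.2e6, 0.6e6, 0.3e6]
q_linear = heat_rate(p[:1])   # fundamental only (linear model)
q_nonlinear = heat_rate(p)    # fundamental plus higher harmonics
print(f"linear-only: {q_linear / 1e6:.2f} MW/m^3, "
      f"with harmonics: {q_nonlinear / 1e6:.2f} MW/m^3")
```

Even though the harmonics carry only a small fraction of the acoustic energy, the absorption growing as f^y weights them heavily, so a linear model that ignores them underestimates the generated heat.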
Method and apparatus for calibrating a particle emissions monitor
Flower, W.L.; Renzi, R.F.
1998-07-07
The invention discloses a method and apparatus for calibrating particulate emissions monitors, in particular, and sampling probes, in general, without removing the instrument from the system being monitored. A source of one or more specific metals in aerosol (either solid or liquid) or vapor form is housed in the instrument. The calibration operation is initiated by moving a focusing lens, used to focus a light beam onto an analysis location and collect the output light response, from an operating position to a calibration position such that the focal point of the focusing lens is now within a calibration stream issuing from a calibration source. The output light response from the calibration stream can be compared to that derived from an analysis location in the operating position to more accurately monitor emissions within the emissions flow stream. 6 figs.
Acoustic Location of Lightning Using Interferometric Techniques
NASA Astrophysics Data System (ADS)
Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.
2013-12-01
Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations which compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize three outer microphones surrounding a fourth microphone located at the center. The location is computed by band-passing the signal to remove noise and then cross-correlating the outer three microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio frequency interferometers. Both acoustic location systems and radio frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas, and both then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic-frequency and audio-frequency acoustic emissions from lightning.
Preliminary results show that interferometric techniques have good potential for improving the lightning location accuracy and detection efficiency of acoustic arrays.
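The cross-correlation step described above (band-pass filtering omitted here) amounts to finding the lag that maximizes the correlation between an outer microphone and the center reference. A sketch with an invented thunder-like burst and an assumed sample rate; not the arrays' actual processing code:

```python
import numpy as np

def tdoa(ref, other, fs):
    """Delay (s) of `other` relative to `ref`, from the peak of the
    full cross-correlation."""
    xc = np.correlate(other - other.mean(), ref - ref.mean(), mode="full")
    return (np.argmax(xc) - (len(ref) - 1)) / fs

fs = 10_000.0                                  # assumed sample rate (Hz)
t = np.arange(0, 0.2, 1 / fs)
burst = np.exp(-((t - 0.05) / 0.005) ** 2) * np.sin(2 * np.pi * 800 * t)
outer = np.roll(burst, 23)                     # arrives 23 samples later
delay = tdoa(burst, outer, fs)
print(delay)  # 0.0023 s
```

Delays from three such microphone pairs, combined with the array geometry and the speed of sound, give the direction to the thunder source.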
2013-01-01
Background: Place and health researchers are increasingly interested in integrating individuals’ mobility and the experience they have with multiple settings in their studies. In practice, however, few tools exist which allow for rapid and accurate gathering of detailed information on the geographic location of places where people regularly undertake activities. We describe the development and validation of a new activity location questionnaire which can be useful in accounting for multiple environmental influences in large population health investigations. Methods: To develop the questionnaire, we relied on a literature review of similar data collection tools and on results of a pilot study wherein we explored content validity, test-retest reliability, and face validity. To estimate convergent validity, we used data from a study of users of a public bicycle share program conducted in Montreal, Canada in 2011. We examined the spatial congruence between questionnaire data and data from three other sources: 1) one-week GPS tracks; 2) activity locations extracted from the GPS tracks; and 3) a prompted recall survey of locations visited during the day. Proximity and convex hull measures were used to compare questionnaire-derived data and GPS and prompted recall survey data. Results: In the sample, 75% of questionnaire-reported activity locations were located within 400 meters of an activity location recorded on the GPS track or through the prompted recall survey. Results from convex hull analyses suggested questionnaire activity locations were more concentrated in space than GPS or prompted-recall locations. Conclusions: The new questionnaire has high convergent validity and can be used to accurately collect data on regular activity spaces in terms of locations regularly visited. The methods, measures, and findings presented provide new material to further study mobility in place and health research. PMID:24025119
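The 400-meter proximity measure reported above can be sketched as a simple nearest-point check on great-circle distances. The sample coordinates below are invented; this is an illustration of the measure, not the study's code:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (mean earth radius 6371 km)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def share_within(reported, reference, radius_m=400.0):
    """Fraction of questionnaire-reported points that have a reference
    (GPS / prompted-recall) point within `radius_m`."""
    hits = sum(
        any(haversine_m(la, lo, rla, rlo) <= radius_m for rla, rlo in reference)
        for la, lo in reported)
    return hits / len(reported)

reported = [(45.50, -73.57), (45.60, -73.57)]   # invented questionnaire points
reference = [(45.501, -73.57)]                  # invented GPS activity location
share = share_within(reported, reference)
print(share)  # 0.5 (first point ~111 m away, second ~11 km)
```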
NASA Astrophysics Data System (ADS)
Bleiweiss, M. P.; DuBois, D. W.; Flores, M. I.
2013-12-01
Dust storms in the border region of the Southwest US and Northern Mexico are a serious problem for air quality (PM10 exceedances), health (Valley Fever is pandemic in the region) and transportation (road closures and deadly traffic accidents). In order to better understand the phenomena, we are attempting to identify critical characteristics of dust storm sources so that, possibly, one can perform more accurate predictions of events and, thus, mitigate some of the deleterious effects. Besides the emission mechanisms for dust storm production that are tied to atmospheric dynamics, one must know those locations whose source characteristics can be tied to dust production and, therefore, identify locations where a dust storm is imminent under favorable atmospheric dynamics. During the past 13 years, we have observed, on satellite imagery, more than 500 dust events in the region and are in the process of identifying the source regions for the dust plumes that make up an event. Where satellite imagery exists with high spatial resolution (less than or equal to 250 m), dust 'plumes' appear to be made up of individual and merged plumes that are emitted from a 'point source' (smaller than the resolution of the imagery). In particular, we have observed events from the ASTER sensor, whose spatial resolution is 15 m, as well as Landsat, whose spatial resolution is 30 m. Tying these source locations to surface properties such as NDVI, albedo, and soil properties (percent sand, silt, clay, and gravel; soil moisture; etc.) will identify regions with enhanced capability to produce a dust storm. This, along with atmospheric dynamics, will allow the forecast of dust events. The analysis of 10 events from the period 2004-2013, for which we have identified 1124 individual plumes, will be presented.
Self characterization of a coded aperture array for neutron source imaging
Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; ...
2014-12-15
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning DT plasma during the stagnation stage of ICF implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model, and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a 'first guess' source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity.
It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
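A heavily simplified sketch of the STE idea: scan candidate source locations, solve for the release mass in closed form (it enters the forward model linearly), and keep the best fit. The Gaussian-puff surrogate, sensor layout, and units are invented for illustration; VIRSA itself uses back-trajectory inversion plus a variational adjoint refinement, not this brute-force scan:

```python
import numpy as np

def puff(x0, y0, q, xs, ys, sigma=200.0):
    """Toy Gaussian-puff surrogate (arbitrary units): concentration at
    sensors (xs, ys) from a release of mass q at (x0, y0)."""
    r2 = (xs - x0) ** 2 + (ys - y0) ** 2
    return q / (2 * np.pi * sigma ** 2) * np.exp(-r2 / (2 * sigma ** 2))

def estimate_source(obs, xs, ys):
    """For each candidate location, solve the least-squares mass in
    closed form and keep the best-fitting (x0, y0, q) triple."""
    best = None
    for x0 in np.linspace(-1000, 1000, 41):
        for y0 in np.linspace(-1000, 1000, 41):
            shape = puff(x0, y0, 1.0, xs, ys)
            q = obs @ shape / (shape @ shape)      # linear least squares
            miss = np.sum((obs - q * shape) ** 2)
            if best is None or miss < best[0]:
                best = (miss, x0, y0, q)
    return best[1:]

# invented sensors on a coarse grid; true source at (150, -250), mass 5
gx, gy = np.meshgrid(np.linspace(-800, 800, 5), np.linspace(-800, 800, 5))
xs, ys = gx.ravel(), gy.ravel()
obs = puff(150.0, -250.0, 5.0, xs, ys)
est = estimate_source(obs, xs, ys)
print(est)  # ≈ (150.0, -250.0, 5.0)
```

In the real problem the observations are noisy and the winds uncertain, which is what motivates the iterative variational refinement rather than a one-pass scan.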
Rapid Regional Centroid Solutions
NASA Astrophysics Data System (ADS)
Wei, S.; Zhan, Z.; Luo, Y.; Ni, S.; Chen, Y.; Helmberger, D. V.
2009-12-01
The 2008 Wells, Nevada earthquake was recorded by 164 broadband USArray stations within a distance of 550 km (5 degrees), with all azimuths uniformly sampled. To establish the source parameters, we applied the Cut and Paste (CAP) code to all the stations to obtain a mechanism (strike/dip/rake = 35/41/-85) at a depth of 9 km and Mw = 5.9. Surface wave shifts range from -8 s to 8 s, in good agreement with ambient seismic noise (ASN) predictions. Here we use this data set to test how many stations are needed to obtain adequate solutions (position of the compressional and tension axes) for the mechanism. The stations were chosen at random, and combinations of Pnl and surface waves were used to establish mechanism and depth. If the event is bracketed by two stations, we obtain an accurate magnitude with good solutions in about 80% of the trials. Complete solutions from four stations, or Pnl from 10 stations, prove reliable in nearly all situations. We also explore the use of this dataset in locating the event using a combination of surface wave travel times and/or the full waveform inversion (CAPloc), which uses the CAP shifts to refine locations. If the mechanism is known (fixed), only a few stations are needed to locate an event to within 5 km if data are available at less than 150 km. In contrast, surface wave travel times (calibrated to within one second) produce remarkably accurate locations with only 6 stations reasonably distributed. It appears this approach is easily automated, as suggested by Scrivner and Helmberger (1995), who discussed travel times of Pnl and surface waves and the evolution of source accuracy as the various phases arrive.
NASA Astrophysics Data System (ADS)
Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio
2005-10-01
This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.
Development of mine explosion ground truth smart sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Steven R.; Harben, Phillip E.; Jarpe, Steve
Accurate seismo-acoustic source location is one of the fundamental aspects of nuclear explosion monitoring. Critical to improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial efforts by the National Laboratories and other seismic monitoring groups have been undertaken to acquire and develop ground truth catalogs that form the basis of location efforts (e.g. Sweeney, 1998; Bergmann et al., 2009; Waldhauser and Richards, 2004). In particular, more GT1 (Ground Truth 1 km) events are required to improve three-dimensional velocity models that are currently under development. Mine seismicity can form the basis of accurate ground truth datasets. Although the location of mining explosions can often be accurately determined using array methods (e.g. Harris, 1991) and from overhead observations (e.g. MacCarthy et al., 2008), accurate origin time estimation can be difficult. Occasionally, mine operators will share shot time, location, explosion size and even shot configuration, but this is rarely done, especially in foreign countries. Additionally, shot times provided by mine operators are often inaccurate. An inexpensive ground truth event detector that could be mailed to a contact, placed in close proximity (< 5 km) to mining regions or earthquake aftershock regions, and that automatically transmits back ground-truth parameters would greatly aid in the development of ground truth datasets that could be used to improve nuclear explosion monitoring capabilities. We are developing an inexpensive, compact, lightweight smart sensor unit (or units) that could be used in the development of ground truth datasets for the purpose of improving nuclear explosion monitoring capabilities.
The units must be easy to deploy, be able to operate autonomously for a significant period of time (> 6 months) and inexpensive enough to be discarded after useful operations have expired (although this may not be part of our business plan). Key parameters to be automatically determined are event origin time (within 0.1 sec), location (within 1 km) and size (within 0.3 magnitude units) without any human intervention. The key parameter ground truth information from explosions greater than magnitude 2.5 will be transmitted to a recording and transmitting site. Because we have identified a limited bandwidth, inexpensive two-way satellite communication (ORBCOMM), we have devised the concept of an accompanying Ground-Truth Processing Center that would enable calibration and ground-truth accuracy to improve over the duration of a deployment.
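An automatic origin-time estimate of the kind the smart sensor needs could start from a standard STA/LTA onset detector. The abstract does not publish the unit's picker, so this is a generic sketch with invented noise level, onset time, and trigger parameters:

```python
import numpy as np

def sta_lta_pick(x, fs, sta=0.3, lta=2.0, thresh=4.0):
    """First time (s) at which the short-term/long-term average energy
    ratio exceeds `thresh`; a classic automatic onset detector."""
    ns, nl = int(sta * fs), int(lta * fs)
    csum = np.concatenate(([0.0], np.cumsum(x ** 2)))
    for i in range(nl, len(x) - ns):
        lta_v = (csum[i] - csum[i - nl]) / nl        # window ending at i
        sta_v = (csum[i + ns] - csum[i]) / ns        # window starting at i
        if lta_v > 0 and sta_v / lta_v > thresh:
            return i / fs
    return None

fs = 100.0                                           # assumed sample rate
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
x = 0.01 * rng.standard_normal(t.size)               # background noise
x[t >= 20] += np.sin(2 * np.pi * 5 * t[t >= 20])     # blast onset at 20 s
pick = sta_lta_pick(x, fs)
print(pick)  # ≈ 19.7 s (the forward STA window reaches the onset early)
```

A deployed unit would refine such a pick (and correct for travel time to the source) to reach the stated 0.1 s origin-time goal.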
2013-09-06
the Nepal Himalaya and the south-central Tibetan Plateau. The 2002–2005 experiment consisted of 233 stations extending from the Himalayan foreland...into the central Tibetan Plateau. The dataset provides an opportunity to obtain accurate seismic event locations for ground truth evaluation and to...after an M=6+ earthquake in the Payang Basin.
NASA Technical Reports Server (NTRS)
1981-01-01
Observer Single-handed Transatlantic Race (OSTAR) participants were aided by a French-American space-based monitoring system which reported the yachts' positions throughout the race and also served as an emergency locator service. Operating from NASA's Nimbus 6 satellite, this system, called ARGOS, made the OSTAR competition the most accurately reported sea race ever conducted. Each boat carried a portable transmitter, making 88 new sources of oceanographic data available during the race.
NASA Astrophysics Data System (ADS)
Zeng, Lvming; Liu, Guodong; Yang, Diwu; Ren, Zhong; Huang, Zhen
2008-12-01
A near-infrared photoacoustic glucose monitoring system, which integrates dual-wavelength pulsed laser diode excitation with an eight-element planar annular array detection technique, is designed and fabricated in this study. It is noninvasive, inexpensive, and portable, with accurate location capability and a high signal-to-noise ratio. In the system, the exciting source is based on two laser diodes with wavelengths of 905 nm and 1550 nm, respectively, with optical pulse energies of 20 μJ and 6 μJ. The laser beam is optically focused and jointly projected to a confocal point with a diameter of approximately 0.7 mm. A 7.5 MHz 8-element annular array transducer with a hollow structure is machined to capture the photoacoustic signal in backward mode. The captured signals excited in blood glucose are processed with a synthetic focusing algorithm to obtain a high signal-to-noise ratio and accurate location over a range of axial detection depths. The custom-made transducer with equal-area elements is coaxially collimated with the laser source to improve the photoacoustic excitation/reception efficiency. In this paper, we introduce the photoacoustic theory, the receive/process technique, and the design method of the portable noninvasive photoacoustic glucose monitoring system, which can potentially be developed into a powerful diagnosis and treatment tool for diabetes mellitus.
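Synthetic focusing for a coaxial annular array is essentially delay-and-sum: for each candidate depth, undo the per-element geometric delays and see how well the channels align. A sketch with assumed geometry (element radii, sample rate, pulse width are all invented); not the system's actual algorithm:

```python
import numpy as np

def focus_depth(signals, fs, elem_r, c, depths):
    """Pick the axial depth whose geometric delays best align the
    channels of a coaxial annular array (delay-and-sum focusing)."""
    n = signals.shape[1]
    scores = []
    for z in depths:
        tof = np.sqrt(elem_r ** 2 + z ** 2) / c            # flight time per element
        d = np.round((tof - tof.min()) * fs).astype(int)   # relative delays (samples)
        summed = np.zeros(n)
        for s, k in zip(signals, d):
            summed[: n - k] += s[k:]                       # undo each delay, then stack
        scores.append(np.abs(summed).max())
    return depths[int(np.argmax(scores))]

fs, c = 50e6, 1500.0                       # assumed sample rate, sound speed
elem_r = np.linspace(0.002, 0.009, 8)      # assumed element radii (m)
z_true = 0.02                              # true source depth (m)
t = np.arange(1000) / fs
tof = np.sqrt(elem_r ** 2 + z_true ** 2) / c
signals = np.exp(-((t[None, :] - tof[:, None]) / 5e-8) ** 2)  # synthetic pulses
z_est = focus_depth(signals, fs, elem_r, c, np.linspace(0.01, 0.03, 11))
print(z_est)  # ≈ 0.02
```

The score peaks at the true depth because only there do the delayed channels stack coherently, which is what yields the improved SNR and axial localization described above.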
Seismo-volcano source localization with triaxial broad-band seismic array
NASA Astrophysics Data System (ADS)
Inza, L. A.; Mars, J. I.; Métaxian, J. P.; O'Brien, G. S.; Macedo, O.
2011-10-01
Seismo-volcano source localization is essential to improve our understanding of eruptive dynamics and of magmatic systems. The lack of clear seismic wave phases prohibits the use of classical location methods. Seismic antennas composed of one-component (1C) seismometers provide a good estimate of the backazimuth of the wavefield. The depth estimation, on the other hand, is difficult or impossible to determine. As in classical seismology, the use of three-component (3C) seismometers is now common in volcano studies. To determine the source location parameters (backazimuth and depth), we extend the 1C seismic antenna approach to 3Cs. This paper discusses a high-resolution location method using a 3C array survey (3C-MUSIC algorithm) with data from two seismic antennas installed on an andesitic volcano in Peru (Ubinas volcano). One of the main scientific questions related to the eruptive process of Ubinas volcano is the relationship between the magmatic explosions and long-period (LP) swarms. After introducing the 3C array theory, we evaluate the robustness of the location method on a full wavefield 3-D synthetic data set generated using a digital elevation model of Ubinas volcano and a homogeneous velocity model. Results show that the backazimuth determined using the 3C array has a smaller error than a 1C array. Only the 3C method allows the recovery of the source depths. Finally, we applied the 3C approach to two seismic events recorded in 2009. Crossing the estimated backazimuth and incidence angles, we find sources located 1000 ± 660 m and 3000 ± 730 m below the bottom of the active crater for the explosion and the LP event, respectively. Therefore, extending 1C arrays to 3C arrays in volcano monitoring allows a more accurate determination of the source epicentre and, now, an estimate of the depth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, D.M.; Coggins, T.L.; Marsh, J.
Numerous efforts are funded by US agencies (DOE, DoD, DHS) for development of novel radiation sensing and measurement systems. An effort has been undertaken to develop a flexible shielding system compatible with a variety of sources (beta, X-ray, gamma, and neutron) that can be highly characterized using conventional radiation detection and measurement systems. Sources available for use in this system include americium-beryllium (AmBe), plutonium-beryllium (PuBe), strontium-90 (Sr-90), californium-252 (Cf-252), krypton-85 (Kr-85), americium-241 (Am-241), and depleted uranium (DU). Shielding can be varied by utilizing materials that include Lexan, water, oil, lead, and polyethylene. Arrangements and geometries of source(s) and shielding can produce symmetrical or asymmetrical radiation fields. The system has been developed to facilitate accurately repeatable configurations. Measurement positions are similarly capable of being accurately re-created. Stand-off measurement positions can be accurately re-established using differential global positioning system (GPS) navigation. Instruments used to characterize individual measurement locations include a variety of sodium iodide (NaI(Tl)) (3 x 3 inch, 4 x 4 x 16 inch, Fidler) and lithium iodide (LiI(Eu)) detectors (for use with multichannel analyzer software) and detectors for use with traditional hand-held survey meters such as boron trifluoride (BF{sub 3}), helium-3 ({sup 3}He), and Geiger-Mueller (GM) tubes. Also available are Global Dosimetry thermoluminescent dosimeters (TLDs), CR39 neutron chips, and film badges. Data will be presented comparing measurement techniques with shielding/source configurations. The system is demonstrated to provide a highly functional process for comparison/characterization of various detector types relative to controllable radiation types and levels. Particular attention has been paid to the use of neutron sources and measurements. (authors)
Travel-time source-specific station correction improves location accuracy
NASA Astrophysics Data System (ADS)
Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo
2013-04-01
Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events in the context of CTBT verification are particularly critical for triggering a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
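At its core, a source-specific station correction is the mean travel-time residual of well-located calibration events at a station, later subtracted from observations of new events in the same source region. A minimal sketch with invented station names and residuals; the study's actual procedure involves full relocation, not just this averaging:

```python
import numpy as np

def station_corrections(residuals_by_station):
    """Source-specific correction per station = mean travel-time residual
    (observed minus IASPEI91-predicted) of calibration events."""
    return {sta: float(np.mean(r)) for sta, r in residuals_by_station.items()}

# invented residuals (s) from well-located calibration events
res = {"ABC": [0.8, 1.0, 0.9], "XYZ": [-0.4, -0.6]}
corr = station_corrections(res)
# corrected residual for a new event at ABC with raw residual 0.95 s:
print(0.95 - corr["ABC"])  # ≈ 0.05, the systematic path effect removed
```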
Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin
How to analyze and identify the characteristics of radiation sources and estimate the threat level by means of detection, interception and location has been a central issue of electronic support in electronic warfare, and communication signal recognition is one of the keys to solving it. To accurately extract the individual characteristics of a radiation source in an increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for individual characteristics of the communication radiation source, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the environmental noise, fractal dimension features of different complexity are used to characterize the subtle features of the signal and establish a feature database; different broadcasting stations are then identified using grey relational analysis. The simulation results demonstrate that the algorithm can achieve a recognition rate of 94% even in an environment with an SNR of -10 dB, providing an important theoretical basis for accurate identification of the subtle features of signals at low SNR in the field of information confrontation.
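One widely used fractal complexity feature of this kind is the Higuchi fractal dimension. The abstract does not specify its exact estimator, so the sketch below is a generic Higuchi implementation; a smooth tone should come out near FD 1 and white noise near FD 2, which is the contrast such features exploit:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of a signal's fractal dimension: slope of
    log mean curve length L(k) versus log(1/k)."""
    x = np.asarray(x, float)
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lm = []
        for m in range(k):                        # k interleaved sub-series
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
            lm.append(dist * norm / k)             # curve length L_m(k)
        logs.append((np.log(1.0 / k), np.log(np.mean(lm))))
    ks, lks = zip(*logs)
    return np.polyfit(ks, lks, 1)[0]

rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.01)
fd_sine = higuchi_fd(np.sin(2 * np.pi * 2 * t))      # smooth: FD near 1
fd_noise = higuchi_fd(rng.standard_normal(t.size))   # white noise: FD near 2
print(fd_sine, fd_noise)
```

In a recognition pipeline, such FD values (possibly from several estimators) form the feature vector fed to the grey relational classifier.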
Lie, Octavian V; Papanastassiou, Alexander M; Cavazos, José E; Szabó, Ákos C
2015-10-01
Poor seizure outcomes after epilepsy surgery often reflect an incorrect localization of the epileptic sources by standard intracranial EEG interpretation because of limited electrode coverage of the epileptogenic zone. This study investigates whether, in such conditions, source modeling is able to provide more accurate source localization than the standard clinical method that can be used prospectively to improve surgical resection planning. Suboptimal epileptogenic zone sampling is simulated by subsets of the electrode configuration used to record intracranial EEG in a patient rendered seizure free after surgery. sLORETA and the clinical method solutions are applied to interictal spikes sampled with these electrode subsets and are compared for colocalization with the resection volume and displacement due to electrode downsampling. sLORETA provides often congruent and at times more accurate source localization when compared with the standard clinical method. However, with electrode downsampling, individual sLORETA solution locations can vary considerably and shift consistently toward the remaining electrodes. sLORETA application can improve source localization based on the clinical method but does not reliably compensate for suboptimal electrode placement. Incorporating sLORETA solutions based on intracranial EEG in surgical planning should proceed cautiously in cases where electrode repositioning is planned on clinical grounds.
Improved phase arrival estimate and location for local earthquakes in South Korea
NASA Astrophysics Data System (ADS)
Morton, E. A.; Rowe, C. A.; Begnaud, M. L.
2012-12-01
The Korea Institute of Geoscience and Mineral Resources (KIGAM) and the Korea Meteorological Administration (KMA) regularly report local (distance < ~1200 km) seismicity recorded with their networks; we obtain preliminary event location estimates as well as waveform data, but no phase arrivals are reported, so the data are not immediately useful for earthquake location. Our goal is to identify seismic events that are sufficiently well located to provide accurate seismic travel-time information for events within the KIGAM and KMA networks that are also recorded by some regional stations. Toward that end, we are using a combination of manual phase identification and arrival-time picking with waveform cross-correlation to cluster events that have occurred in close proximity to one another, which allows for improved phase identification by comparing the highly correlating waveforms. We cross-correlate the known events with one another on 5 seismic stations and cluster events that correlate above a correlation coefficient threshold of 0.7, which reveals a few clusters containing a few events each. The small number of repeating events suggests that the online catalogs have had mining and quarry blasts removed before publication, as these can contribute significantly to repeating seismic sources in relatively aseismic regions such as South Korea. The dispersed source locations in our catalog, however, are ideal for seismic velocity modeling because they provide superior sampling through the dense seismic station arrangement, which produces favorable event-to-station ray path coverage. Following careful manual phase picking on 104 events chosen to provide adequate ray coverage, we relocate the events to obtain improved source coordinates. The relocated events are used with Thurber's Simul2000 pseudo-bending local tomography code to estimate the crustal structure on the Korean Peninsula, which is an important contribution to ongoing calibration for events of interest in the region.
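The 0.7-threshold clustering can be sketched as single-linkage grouping on the peak normalized cross-correlation between event waveforms. The waveforms below are synthetic stand-ins, and the union-find grouping is one of several reasonable implementations, not necessarily the authors':

```python
import numpy as np

def cc_clusters(waveforms, thresh=0.7):
    """Group waveforms whose peak normalized cross-correlation exceeds
    `thresh` (single linkage via union-find)."""
    def cc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return np.correlate(a, b, "full").max() / np.sqrt((a @ a) * (b @ b))
    n = len(waveforms)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if cc(waveforms[i], waveforms[j]) >= thresh:
                parent[find(j)] = find(i)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

rng = np.random.default_rng(2)
t = np.arange(0, 5, 0.01)
w0 = np.exp(-((t - 2.5) / 0.3) ** 2) * np.sin(2 * np.pi * 4 * t)
w1 = np.roll(w0, 5)                    # repeating source, small time shift
w2 = rng.standard_normal(t.size)       # unrelated event
clusters = cc_clusters([w0, w1, w2])
print(clusters)  # [[0, 1], [2]]
```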
NASA Technical Reports Server (NTRS)
Casper, Paul W.; Bent, Rodney B.
1991-01-01
The algorithm used in earlier time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.
Battaglia, J.; Got, J.-L.; Okubo, P.
2003-01-01
We present methods for improving the location of long-period (LP) events, deep and shallow, recorded below Kilauea Volcano by the permanent seismic network. LP events are of particular interest for understanding eruptive processes, as their source mechanism is assumed to directly involve fluid transport. However, it is usually difficult or impossible to locate their source using traditional arrival time methods because of emergent wave arrivals. At Kilauea, similar LP waveform signatures suggest the existence of LP multiplets. The waveform similarity suggests spatially close sources, while catalog solutions using arrival time estimates are widely scattered beneath Kilauea's summit caldera. In order to improve estimates of absolute LP location, we use the distribution of seismic amplitudes corrected for station site effects. The decay of amplitude as a function of hypocentral distance is used to infer LP location. In a second stage, we use the similarity of the events to calculate their relative positions. Analysis of the entire LP seismicity recorded between January 1997 and December 1999 suggests that a very large part of the LP event population, both deep and shallow, is generated by a small number of compact sources. Deep events are systematically composed of a weak high-frequency onset followed by a low-frequency wave train. Aligning the low-frequency wave trains does not align the onsets, indicating that the two parts of the signal are dissociated. This observation favors an interpretation in terms of triggering and resonance of a magmatic conduit. Instead of defining fault planes, the precise relocation of similar LP events, based on the alignment of the high-energy low-frequency wave trains, defines volumes of limited size. Copyright 2003 by the American Geophysical Union.
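The amplitude-decay location idea can be sketched as a grid search: assume amplitudes fall off as exp(-b*r)/r with hypocentral distance r (geometrical spreading plus anelastic attenuation), solve for the best-fitting source amplitude at each trial point, and keep the point with the smallest misfit. Station coordinates, the attenuation coefficient b, and the planar geometry below are all illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def locate_by_amplitude(stations, amps, grid, b=0.05):
    """Grid-search the source position that best fits observed station
    amplitudes under A(r) = A0 * exp(-b*r) / r; A0 is solved by least
    squares at each trial point."""
    best, best_misfit = None, np.inf
    for xy in grid:
        d = np.linalg.norm(stations - xy, axis=1)
        d = np.maximum(d, 1e-6)               # guard station-coincident points
        basis = np.exp(-b * d) / d
        a0 = np.dot(basis, amps) / np.dot(basis, basis)  # LSQ source amplitude
        misfit = np.sum((amps - a0 * basis) ** 2)
        if misfit < best_misfit:
            best, best_misfit = xy, misfit
    return best
```

With noise-free synthetic amplitudes the search recovers the true grid node exactly.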
Characterization of the new neutron imaging and materials science facility IMAT
NASA Astrophysics Data System (ADS)
Minniti, Triestino; Watanabe, Kenichi; Burca, Genoveva; Pooley, Daniel E.; Kockelmann, Winfried
2018-04-01
IMAT is a new cold neutron imaging and diffraction instrument located at the second target station of the pulsed neutron spallation source ISIS, UK. A broad range of materials science and materials testing areas will be covered by IMAT. We present the characterization of the imaging part, including the energy-selective and energy-dispersive imaging options, and provide the basic parameters of the radiography and tomography instrument. In particular, detailed studies of one- and two-dimensional neutron beam flux profiles, neutron flux as a function of neutron wavelength, spatial and energy dependent neutron beam uniformities, guide artifacts, divergence and spatial resolution, and neutron pulse widths are provided. An accurate characterization of the neutron beam at the sample position, located 56 m from the source, is required to optimize the collection of radiographic and tomographic data sets, and in particular for performing energy-dispersive neutron imaging via time-of-flight methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Timothy C.; Wellman, Dawn M.
2015-06-26
Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.
HCMM hydrological analysis in Utah
NASA Technical Reports Server (NTRS)
Miller, A. W. (Principal Investigator)
1982-01-01
The feasibility of applying a linear model to HCMM data in hopes of obtaining an accurate linear correlation was investigated. The relationship among HCMM-sensed surface temperature, red reflectivity on Utah Lake, and water quality factors, including algae concentrations, algae type, and nutrient and turbidity concentrations, was established and evaluated. Correlation (composite) images of day infrared and reflectance imagery were assessed to determine if remote sensing offers the capability of using masses of accurate and comprehensive data in calculating evaporation. The effects of algae on temperature and evaporation were studied, and the possibility of using satellite thermal data to locate areas within Utah Lake where significant thermal sources exist and areas of near surface groundwater was examined.
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Impedance Eduction in Ducts with Higher-Order Modes and Flow
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.
2009-01-01
An impedance eduction technique, previously validated for ducts with plane waves at the source and duct termination planes, has been extended to support higher-order modes at these locations. Inputs for this method are the acoustic pressures along the source and duct termination planes, and along a microphone array located in a wall either adjacent or opposite to the test liner. A second impedance eduction technique is then presented that eliminates the need for the microphone array. The integrity of both methods is tested using three sound sources, six Mach numbers, and six selected frequencies. Results are presented for both a hardwall and a test liner (with known impedance) consisting of a perforated plate bonded to a honeycomb core. The primary conclusion of the study is that the second method performs well in the presence of higher-order modes and flow. However, the first method performs poorly when most of the microphones are located near acoustic pressure nulls. The negative effects of the acoustic pressure nulls can be mitigated by a judicious choice of the mode structure in the sound source. The paper closes by using the first impedance eduction method to design a rectangular array of 32 microphones for accurate impedance eduction in the NASA LaRC Curved Duct Test Rig in the presence of expected measurement uncertainties, higher order modes, and mean flow.
Configuration of electro-optic fire source detection system
NASA Astrophysics Data System (ADS)
Fabian, Ram Z.; Steiner, Zeev; Hofman, Nir
2007-04-01
The recent fighting activities in various parts of the world have highlighted the need for accurate fire source detection on one hand and a fast "sensor to shooter cycle" on the other. Both needs can be met by the SPOTLITE system, which dramatically enhances the capability to rapidly engage a hostile fire source with a minimum of casualties to friendly forces and to innocent bystanders. The modular system design makes it possible to meet each customer's specific requirements and provides excellent future growth and upgrade potential. The design and build of a fire source detection system is governed by sets of requirements issued by the operators. These can be translated into the following design criteria: I) Long range, fast and accurate fire source detection capability. II) Different threat detection and classification capability. III) Threat investigation capability. IV) Fire source data distribution capability (location, direction, video image, voice). V) Man-portability. In order to meet these design criteria, an optimized concept was presented and exercised for the SPOTLITE system. Three major modular components were defined: I) Electro-Optical Unit - including FLIR camera, CCD camera, laser range finder and marker. II) Electronic Unit - including system computer and electronics. III) Controller Station Unit - including the HMI of the system. This article discusses the definition and optimization of the system's components, and also shows how SPOTLITE designers successfully managed to introduce excellent solutions for other system parameters.
NASA Astrophysics Data System (ADS)
Williams, J. R.; Hawthorne, J.; Rost, S.; Wright, T. J.
2017-12-01
Earthquakes on oceanic transform faults often show unusual behaviour. They tend to occur in swarms, have large numbers of foreshocks, and have high stress drops. We estimate stress drops for approximately 60 M > 4 earthquakes along the Blanco oceanic transform fault, a right-lateral fault separating the Juan de Fuca and Pacific plates offshore of Oregon. We find stress drops with a median of 4.4 ± 19.3 MPa and examine how they vary with earthquake moment. We calculate stress drops using a recently developed method based on inter-station phase coherence. We compare seismic records of co-located earthquakes at a range of stations. At each station, we apply an empirical Green's function (eGf) approach to remove phase path effects and isolate the relative apparent source time functions. The apparent source time functions at each earthquake should vary among stations at periods shorter than a P wave's travel time across the earthquake rupture area. Therefore we compute the rupture length of the larger earthquake by identifying the frequency at which the relative apparent source time functions start to vary among stations, leading to low inter-station phase coherence. We determine a stress drop from the rupture length and moment of the larger earthquake. Our initial stress drop estimates increase with increasing moment, suggesting that earthquakes on the Blanco fault are not self-similar. However, these stress drops may be biased by several factors, including depth phases, trace alignment, and source co-location. We find that the inclusion of depth phases (such as pP) in the analysis time window has a negligible effect on the phase coherence of our relative apparent source time functions. We find that trace alignment must be accurate to within 0.05 s to allow us to identify variations in the apparent source time functions at periods relevant for M > 4 earthquakes.
We check that the alignments are accurate enough by comparing P wave arrival times across groups of earthquakes. Finally, we note that the eGf path effect removal will be unsuccessful if earthquakes are too far apart. We therefore calculate relative earthquake locations from our estimated differential P wave arrival times, then we examine how our stress drop estimates vary with inter-earthquake distance.
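The final step, converting a rupture-size estimate and a seismic moment into a stress drop, follows standard relations: a Madariaga-style corner-frequency-to-radius conversion and the Eshelby circular-crack formula. The shear wave speed and the k constant below are common textbook values, not this study's calibration.

```python
def moment_from_magnitude(mw):
    """Seismic moment (N*m) from moment magnitude, M0 = 10^(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def rupture_radius_m(corner_freq_hz, beta_m_s=3500.0, k=0.32):
    """Madariaga-style rupture radius r = k * beta / fc."""
    return k * beta_m_s / corner_freq_hz

def stress_drop_mpa(moment_nm, radius_m):
    """Eshelby circular-crack stress drop: (7/16) * M0 / r^3, returned in MPa."""
    return (7.0 / 16.0) * moment_nm / radius_m ** 3 / 1e6
```

For an Mw 4.5 event with a 1 Hz corner frequency this gives a stress drop of a few MPa, consistent with the median value quoted in the abstract.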
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.
Wu, Dongjin; Xia, Linyuan; Geng, Jijun
2018-06-19
Pedestrian dead reckoning (PDR) using smart phone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents long-term independent operation. Heading estimation error is one of the main location error sources; therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused using a Kalman filter (KF) in which heading measurements derived from accelerations and magnetic field data correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamically adaptive location tracking, compared with methods based on conventional KF.
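A minimal sketch of the gyro/magnetometer fusion with an outlier guard, in the spirit of (but far simpler than) the RAKF described above: a one-state Kalman filter integrates the gyro rate in the prediction step, corrects with the accelerometer/magnetometer heading, and inflates the measurement variance whenever the innovation exceeds a Huber-style threshold. All tuning constants are hypothetical.

```python
import math

def fuse_heading(gyro_rates, mag_headings, dt=0.02, q=0.01, r=0.05, k_huber=2.5):
    """1-state Kalman filter for heading (radians). Predict by integrating
    the gyro rate; correct with the magnetometer/accelerometer heading.
    Innovations larger than k_huber standard deviations are down-weighted
    by inflating the measurement variance (a simple M-estimator-style guard)."""
    x, p = mag_headings[0], 1.0
    out = []
    for w, z in zip(gyro_rates, mag_headings):
        # predict
        x = x + w * dt
        p = p + q
        # robust measurement update
        nu = z - x
        scale = math.sqrt(p + r)
        if abs(nu) > k_huber * scale:                  # outlier: inflate R
            s = p + r * (abs(nu) / (k_huber * scale)) ** 2
        else:
            s = p + r
        k = p / s
        x = x + k * nu
        p = (1.0 - k) * p
        out.append(x)
    return out
```

With a stationary pedestrian and one spurious magnetic disturbance, the guard keeps the estimate close to the true heading instead of jumping to the outlier.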
Su, Ri-Qi; Wang, Wen-Xu; Wang, Xiao; Lai, Ying-Cheng
2016-01-01
Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified. PMID:26909187
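Once per-node propagation delays are estimated, converting delays (equivalently, distances) into coordinates is standard triangulation. The sketch below shows the usual linearized least-squares trilateration in 2-D, with anchor positions and ranges as hypothetical inputs; it illustrates the final localization step, not the paper's compressive-sensing reconstruction.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2-D position from distances to >= 3 anchors.
    Linearized by subtracting the first range equation from the others:
    2*(a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact ranges to three non-collinear anchors, the linear system recovers the node position exactly.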
Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites
2010-01-01
and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the ... approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points ... provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry
PADF electromagnetic source localization using extremum seeking control
NASA Astrophysics Data System (ADS)
Al Issa, Huthaifa A.; Ordóñez, Raúl
2014-10-01
Wireless Sensor Networks (WSNs) are a significant technology attracting considerable research interest. Recent advances in wireless communications and electronics have enabled the development of low-cost, low-power and multi-functional sensors that are small in size and communicate over short distances. Most WSN applications require knowing or measuring locations of thousands of sensors accurately. For example, sensing data without knowing the sensor location is often meaningless. Locations of sensor nodes are fundamental to providing location stamps, locating and tracking objects, forming clusters, and facilitating routing. This research focused on the modeling and implementation of distributed, mobile radar sensor networks. In particular, we worked on the problem of Position-Adaptive Direction Finding (PADF), to determine the location of a non-collaborative transmitter, possibly hidden within a structure, by using a team of cooperative intelligent sensor networks. Position-Adaptive radar concepts have been formulated and investigated at the Air Force Research Laboratory (AFRL) within the past few years. In this paper, we present the simulation performance analysis on the application aspect. We apply Extremum Seeking Control (ESC) schemes by using the swarm seeking problem, where the goal is to design a control law for each individual sensor that can minimize the error metric by adapting the sensor positions in real-time, thereby minimizing the unknown estimation error. As a result, we achieved source seeking and collision avoidance of the entire group of the sensor positions.
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Silcox, Richard (Technical Monitor)
2001-01-01
A location and positioning system was developed and implemented in the anechoic chamber of the Structural Acoustics Loads and Transmission (SALT) facility to accurately determine the coordinates of points in three-dimensional space. Transfer functions were measured between a shaker source at two different panel locations and the vibrational response distributed over the panel surface using a scanning laser vibrometer. The binaural simulation test matrix included test runs for several locations of the measuring microphones, various attitudes of the mannequin, two locations of the shaker excitation and three different shaker inputs including pulse, broadband random, and pseudo-random. Transfer functions, auto spectra, and coherence functions were acquired for the pseudo-random excitation. Time histories were acquired for the pulse and broadband random input to the shaker. The tests were repeated with a reflective surface installed. Binary data files were converted to universal format and archived on compact disk.
Design of laser monitoring and sound localization system
NASA Astrophysics Data System (ADS)
Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang
2013-08-01
In this paper, a novel design of a laser monitoring and sound localization system is proposed. It utilizes laser light to monitor and locate the position of indoor conversation. At present, most laser monitors in China, whether used in the laboratory or in an instrument, use a photodiode or phototransistor as the detector. At the laser receivers of those facilities, light beams are adjusted to ensure that only part of the window of the photodiode or phototransistor receives the beam. The reflection deviates from its original path because of the vibration of the detected window, which changes the position of the imaging spot on the photodiode or phototransistor. However, such a method is limited not only because it brings much stray light into the receiver but also because only a single photocurrent output can be obtained. Therefore a new method based on quadrant detectors is proposed. It utilizes the relation of the optical integral among quadrants to locate the position of the imaging spot. This method can eliminate background disturbance and acquire two-dimensional spot-vibration data specifically. The principle of the whole system can be described as follows. Collimated laser beams are reflected from a window vibrating in response to the sound source, so the reflected beams are modulated by the vibration source. These optical signals are collected by quadrant detectors and then processed by photoelectric converters and corresponding circuits. Speech signals are eventually reconstructed. In addition, sound source localization is implemented by detecting three different reflected light sources simultaneously. Indoor mathematical models based on the principle of Time Difference Of Arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system is able to monitor an indoor sound source beyond 15 meters with a high quality of speech reconstruction and to locate the sound source position accurately.
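The TDOA localization step can be sketched as a grid search over candidate 2-D source positions: for each candidate, predict the arrival-time differences at the measurement points relative to a reference and keep the candidate that best matches the measured differences. The sensor coordinates and room geometry below are illustrative, not taken from the paper.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in air, m/s

def tdoa_locate(sensors, tdoas, grid_pts, c=C_SOUND):
    """Grid-search a 2-D source position from time differences of arrival
    measured relative to sensor 0."""
    sensors = np.asarray(sensors, dtype=float)
    best, best_err = None, np.inf
    for p in grid_pts:
        d = np.linalg.norm(sensors - p, axis=1)
        pred = (d[1:] - d[0]) / c          # predicted TDOAs vs. sensor 0
        err = np.sum((pred - tdoas) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best
```

With noise-free synthetic delays and a grid containing the true position, the search recovers it exactly.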
Tennessee Valley Total and Cloud-to-Ground Lightning Climatology Comparison
NASA Technical Reports Server (NTRS)
Buechler, Dennis; Blakeslee, R. J.; Hall, J. M.; McCaul, E. W.
2008-01-01
The North Alabama Lightning Mapping Array (NALMA) has been in operation since 2001 and consists of ten VHF receivers deployed across northern Alabama. The NALMA locates sources of impulsive VHF radio signals from total lightning by accurately measuring the times at which the signals arrive at the different receiving stations. The detected sources are then clustered into flashes by applying spatial and temporal constraints. This study examines the total lightning climatology of the region derived from NALMA and compares it to the cloud-to-ground (CG) climatology derived from the National Lightning Detection Network (NLDN). The presentation compares the total and CG lightning trends for monthly, daily, and hourly periods.
NASA Technical Reports Server (NTRS)
Theobald, M. A.
1978-01-01
The single source location used for helicopter model studies was utilized in a study to determine the distances and directions upstream of the model at which accurate measurements of the direct acoustic field could be obtained. The method used was to measure the decrease of sound pressure levels with distance from a noise source and thereby determine the hall radius as a function of frequency and direction. Test arrangements and procedures are described. Graphs show the normalized sound pressure level versus distance curves for the glass fiber floor treatment and for the foam floor treatment.
Lofgren, E.J.
1959-04-14
This patent relates to calutron devices and deals particularly with the mechanism used to produce the beam of ions when the charge material is a vapor at room temperature. A charge container located outside the tank is connected through several conduits to various points along the arc chamber of the ion source. In addition, the rate of flow of the vapor to the arc chamber is controlled by a throttle valve in each conduit. By this arrangement the arc can be regulated accurately and without appreciable time lag, inasmuch as the rate of vapor flow is immediately responsive to the manipulation of the throttle valves.
Fiber tracking of brain white matter based on graph theory.
Lu, Meng
2015-01-01
Brain white matter tractography is reconstructed from diffusion-weighted magnetic resonance images. Due to the complex structure of brain white matter fiber bundles, fiber crossing and fiber branching are abundant in the human brain, and regular methods based on diffusion tensor imaging (DTI) cannot accurately handle them; this is one of the biggest problems in brain tractography. Therefore, this paper presents a novel brain white matter tractography method based on graph theory, in which fiber tracking between two voxels is transformed into locating the shortest path in a graph. Besides, the presented method uses Q-ball imaging (QBI) as the source data instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings in one voxel using the orientation distribution function (ODF). Experiments showed that the presented method can accurately handle the problem of brain white matter fiber crossing and branching, and reconstruct brain tractography in both phantom data and real brain data.
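The graph formulation reduces fiber tracking to a shortest-path query. Below is a minimal sketch using Dijkstra's algorithm on a 4-connected 2-D voxel grid; in the actual method the edge weights would be derived from the QBI orientation distribution functions, whereas here they are an arbitrary cost map (low cost where a fiber is assumed to be supported).

```python
import heapq

def shortest_path(weights, start, goal):
    """Dijkstra on a 4-connected 2-D grid; weights[i][j] is the cost of
    entering voxel (i, j). Returns the voxel sequence from start to goal."""
    n, m = len(weights), len(weights[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        i, j = u
        for v in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= v[0] < n and 0 <= v[1] < m:
                nd = d + weights[v[0]][v[1]]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

On a toy cost map with a cheap corridor, the recovered path follows the corridor rather than cutting across high-cost voxels.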
Strain gage based determination of mixed mode SIFs
NASA Astrophysics Data System (ADS)
Murthy, K. S. R. K.; Sarangi, H.; Chakraborty, D.
2018-05-01
Accurate determination of mixed mode stress intensity factors (SIFs) is essential in understanding and analysis of mixed mode fracture of engineering components. Only a few strain gage determinations of mixed mode SIFs are reported in the literature, and those do not provide any prescription for the radial locations of strain gages to ensure measurement accuracy. The present investigation experimentally demonstrates the efficacy of a proposed methodology for the accurate determination of mixed mode I/II SIFs using strain gages. The proposed approach is based on the modified Dally and Berger mixed mode technique. Using the proposed methodology, appropriate gage locations (optimal locations) for a given configuration have also been suggested, ensuring accurate determination of mixed mode SIFs. Experiments have been conducted by locating the gages at optimal and non-optimal locations to study the efficacy of the proposed approach. The experimental results from the present investigation show that highly accurate SIFs (0.064% error) can be determined using the proposed approach if the gages are located at the suggested optimal locations. The results also show that very high errors (up to 212.22%) in measured SIFs are possible if the gages are located at non-optimal locations. The present work thus clearly substantiates the importance of knowing the optimal locations of the strain gages a priori for accurate determination of SIFs.
Fidan, Barış; Umay, Ilknur
2015-01-01
Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which are permittivity and path loss coefficients for the electromagnetic signal case. Thus, accurate estimation of these coefficients has significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated into a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law, to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The proposed adaptive localization and tracking techniques are both mathematically analysed and verified via simulation experiments. PMID:26690441
Merrill, Rebecca D.; Shamim, Abu Ahmed; Ali, Hasmot; Schulze, Kerry; Rashid, Mahbubur; Christian, Parul; West, Jr., Keith P.
2009-01-01
Iron is ubiquitous in natural water sources used around the world for drinking and cooking. The health impact of chronic exposure to iron through water, which in groundwater sources can reach well above the World Health Organization's defined aesthetic limit of 0.3 mg/L, is not currently understood. To quantify the impact of consumption of iron in groundwater on nutritional status, it is important to accurately assess naturally-occurring exposure levels among populations. In this study, the validity of iron quantification in water was evaluated using two portable instruments: the HACH DR/890 portable colorimeter (colorimeter) and HACH Iron test-kit, Model IR-18B (test-kit), by comparing field-based iron estimates for 25 tubewells located in northwestern Bangladesh with gold standard atomic absorption spectrophotometry analysis. Results of the study suggest that the HACH test-kit delivers more accurate point-of-use results across a wide range of iron concentrations under challenging field conditions. PMID:19507757
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and the initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
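The core MUSIC scan that TRAP-MUSIC builds on can be sketched as follows: eigendecompose the data covariance, keep the noise subspace, and score each candidate source topography by the inverse of its noise-subspace energy, so that true sources appear as pseudospectrum peaks. This is plain MUSIC on synthetic vectors, not the truncated RAP-MUSIC recursion itself; the lead-field (topography) vectors are random stand-ins.

```python
import numpy as np

def music_scan(data, lead_fields, n_sources):
    """Classic MUSIC pseudospectrum: project candidate-source topography
    vectors onto the noise subspace of the sensor covariance; candidates
    with small noise-subspace energy score highly."""
    cov = data @ data.T / data.shape[1]
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    noise = vecs[:, :-n_sources]           # noise-subspace basis vectors
    spec = []
    for g in lead_fields:
        g = g / np.linalg.norm(g)
        resid = np.linalg.norm(noise.T @ g) ** 2
        spec.append(1.0 / max(resid, 1e-12))
    return np.array(spec)
```

With one simulated source plus weak sensor noise, the scan peaks at the true topography.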
NASA Astrophysics Data System (ADS)
Albertson, J. D.
2015-12-01
Methane emissions from underground pipeline leaks remain an ongoing issue in the development of accurate methane emission inventories for the natural gas supply chain. Application of mobile methods during routine street surveys would help address this issue, but there are large uncertainties in current approaches. In this paper, we describe results from a series of near-source (< 30 m) controlled methane releases where an instrumented van was used to measure methane concentrations during both fixed location sampling and during mobile traverses immediately downwind of the source. The measurements were used to evaluate the application of EPA Method 33A for estimating methane emissions downwind of a source and also to test the application of a new probabilistic approach for estimating emission rates from mobile traverse data.
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
Three dimensional time reversal optical tomography
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Alrubaiee, M.; Xu, M.; Gayen, S. K.
2011-03-01
The time reversal optical tomography (TROT) approach is used to detect and locate absorptive targets embedded in a highly scattering turbid medium, to assess its potential in breast cancer detection. The TROT experimental arrangement uses multi-source probing, multi-detector signal acquisition, and the multiple-signal-classification (MUSIC) algorithm for target location retrieval. Light transport from multiple sources through the intervening medium with embedded targets to the detectors is represented by a response matrix constructed using experimental data. A TR matrix is formed by multiplying the response matrix by its transpose. The eigenvectors with leading non-zero eigenvalues of the TR matrix correspond to the embedded objects. The approach was used to: (a) obtain the location and spatial resolution of an absorptive target as a function of its axial position between the source and detector planes; and (b) study the variation in spatial resolution of two targets at the same axial position but different lateral positions. The target(s) were glass sphere(s) of diameter ~9 mm filled with ink (absorber) embedded in a 60 mm-thick slab of Intralipid-20% suspension in water with an absorption coefficient μa ~ 0.003 mm-1 and a transport mean free path lt ~ 1 mm at 790 nm, which emulate the average values of those parameters for human breast tissue. The spatial resolution and accuracy of target location depended on axial position and on target contrast relative to the background. Both targets could be resolved and located even when they were only 4 mm apart. The TROT approach is fast, accurate, and has the potential to be useful in breast cancer detection and localization.
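The TR-matrix/MUSIC pipeline can be sketched numerically for a single absorber. The geometry, the exponential-decay "Green's function," and the rank-one (Born-type) response matrix are all toy assumptions, not the experimental construction used in the paper:

```python
import numpy as np

# Assumed geometry: 9 sources on the z=0 plane, 9 detectors on the
# z=60 mm plane, one absorptive target inside the slab.
src = np.array([[x, 0.0, 0.0] for x in np.linspace(-20, 20, 9)])
det = np.array([[x, 0.0, 60.0] for x in np.linspace(-20, 20, 9)])
target = np.array([4.0, 0.0, 25.0])

def g(a, b):
    # Toy diffuse-light Green's function: exponential decay / distance.
    r = np.linalg.norm(a - b, axis=-1)
    return np.exp(-0.1 * r) / r

# Response matrix of one point absorber (Born-type, rank one):
# K_ij = g(det_i, target) * g(target, src_j).
K = np.outer(g(det, target), g(src, target))
T = K.T @ K                       # time-reversal matrix
w, v = np.linalg.eigh(T)
signal = v[:, -1:]                # leading eigenvector spans the signal space

# MUSIC scan: the pseudospectrum peaks where the source-space
# Green's vector of a trial point lies in the signal subspace.
best, best_val = None, -np.inf
for x in np.linspace(-20, 20, 81):
    for z in np.linspace(5, 55, 51):
        gv = g(src, np.array([x, 0.0, z]))
        gv = gv / np.linalg.norm(gv)
        proj = np.linalg.norm(signal.T @ gv)      # in-subspace energy
        val = 1.0 / (1.0 - proj ** 2 + 1e-12)
        if val > best_val:
            best_val, best = val, (x, z)
print(best)   # ≈ (4.0, 25.0): the target's lateral position and depth
```

With more than one absorber the response matrix gains rank, and each leading eigenvector (pseudospectrum peak) corresponds to one target, as the abstract states.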
NASA Astrophysics Data System (ADS)
Travis, B. J.; Sauer, J.; Dubey, M. K.
2017-12-01
Methane (CH4) leaks from oil and gas production fields are a potentially significant source of atmospheric methane. US DOE's ARPA-E office is supporting research to locate methane emissions on 10 m scale well pads to within 1 m. A team led by Aeris Technologies that includes LANL, the Planetary Science Institute, and Rice University has developed an autonomous leak detection system (LDS) employing a compact laser absorption methane sensor, a sonic anemometer, and multiport sampling. The LDS analyzes monitoring data using a convolutional neural network (cNN) to locate and quantify CH4 emissions. The cNN was trained using three sources: (1) ultra-high-resolution simulations of methane transport provided by LANL's coupled atmospheric transport model HIGRAD, for numerous controlled methane release scenarios and methane sampling configurations under variable atmospheric conditions; (2) field tests at the METEC site in Ft. Collins, CO; and (3) field data from other sites where point-source surface methane releases were monitored downwind. A cNN learning algorithm is well suited to problems in which the training and observed data are noisy, or correspond to complex sensor data, as is typical of meteorological and sensor data over a well pad. Recent studies with our cNN emphasize the importance of tracking wind speeds and directions at fine resolution (~1 second), and of accounting for variations in background CH4 levels. A few cases illustrate the importance of sufficiently long monitoring; short monitoring may not provide enough information to determine a leak location or strength accurately, mainly because of short-term unfavorable wind directions and the choice of sampling configuration. The length of the multiport duty-cycle sampling and sample-line flush time, as well as the number and placement of monitoring sensors, can significantly impact the ability to locate and quantify leaks. Keeping the source location error below 10% requires about 30 or more training cases.
Toward regional corrections of long period CMT inversions using InSAR
NASA Astrophysics Data System (ADS)
Shakibay Senobari, N.; Funning, G.; Ferreira, A. M.
2017-12-01
One of InSAR's main strengths, with respect to other methods of studying earthquakes, is finding the accurate location of the best point source (or `centroid') for an earthquake. While InSAR data have great advantages for the study of shallow earthquakes, the number of earthquakes for which we have InSAR data is low compared with the number of earthquakes recorded seismically. Although improvements to SAR satellite constellations have enhanced the use of InSAR data during earthquake response, post-event data still have a latency on the order of days. On the other hand, earthquake centroid inversion methods using long period seismic data (e.g. the Global CMT method) are fast but include errors caused by inaccuracies in both the Earth velocity model and in wave propagation assumptions (e.g. Hjörleifsdóttir and Ekström, 2010; Ferreira and Woodhouse, 2006). Here we demonstrate a method that combines the strengths of both approaches, calculating regional travel-time corrections for long-period waveforms using accurate centroid locations from InSAR, then applying these to other events that occur in the same region. Our method is based on the observation that synthetic seismograms produced from InSAR source models and locations match the data very well except for some phase shifts (travel time biases) between the two waveforms, likely corresponding to inaccuracies in Earth velocity models (Weston et al., 2014). Our previous work shows that adding such phase shifts to the Green's functions can improve the accuracy of long period seismic CMT inversions by reducing tradeoffs between the moment tensor components and centroid location (e.g. Shakibay Senobari et al., AGU Fall Meeting 2015). Preliminary work on several pairs of neighboring events (e.g. Landers-Hector Mine, the 2000 South Iceland earthquake sequences) shows consistent azimuthal patterns of these phase shifts for nearby events at common stations.
These phase shift patterns strongly suggest that it is possible to determine regional corrections for the source regions of these events. The aim of this project is to perform a full CMT inversion using the phase shift corrections, calculated for nearby events, to observe improvement in CMT locations and solutions. We will demonstrate our method on the five M 6 events that occurred in central Italy between 1997 and 2016.
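The phase-shift (travel-time bias) measurement between a synthetic seismogram and data can be sketched with a cross-correlation. The waveforms below are invented (a Gaussian-windowed sinusoid and a time-shifted copy); in practice the synthetic would come from an InSAR-derived source model:

```python
import numpy as np

dt = 0.5                                    # sample interval, s (assumed)
t = np.arange(0, 200, dt)
synthetic = np.exp(-((t - 80) / 10) ** 2) * np.sin(2 * np.pi * t / 25)
true_shift = 6.0                            # s; the path bias to recover
observed = np.exp(-((t - 80 - true_shift) / 10) ** 2) * np.sin(
    2 * np.pi * (t - true_shift) / 25)

# Travel-time bias = lag that maximizes the cross-correlation;
# positive lag means the observed waveform arrives late.
cc = np.correlate(observed, synthetic, mode='full')
lags = (np.arange(cc.size) - (len(t) - 1)) * dt
measured = lags[np.argmax(cc)]
print(measured)   # → 6.0
```

The measured lag at each station, gathered over nearby events, is what would feed the azimuthal correction patterns described above.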
Digital breast tomosynthesis geometry calibration
NASA Astrophysics Data System (ADS)
Wang, Xinying; Mainprize, James G.; Kempston, Michael P.; Mawdsley, Gordon E.; Yaffe, Martin J.
2007-03-01
Digital Breast Tomosynthesis (DBT) is a 3D x-ray technique for imaging the breast. The x-ray tube, mounted on a gantry, moves in an arc over a limited angular range around the breast while 7-15 images are acquired over a period of a few seconds. A reconstruction algorithm is used to create a 3D volume dataset from the projection images. This procedure reduces the effects of tissue superposition, often responsible for degrading the quality of projection mammograms. This may help improve sensitivity of cancer detection, while reducing the number of false positive results. For DBT, images are acquired at a set of gantry rotation angles. The image reconstruction process requires several geometrical factors associated with image acquisition to be known accurately; however, vibration, encoder inaccuracy, the effects of gravity on the gantry arm, and manufacturing tolerances can produce deviations from the desired acquisition geometry. Unlike cone-beam CT, in which a complete dataset is acquired (500+ projections over 180°), tomosynthesis reconstruction is challenging in that the angular range is narrow (typically from 20°-45°) and there are fewer projection images (~7-15). With such a limited dataset, reconstruction is very sensitive to geometric alignment. Uncertainties in factors such as detector tilt, gantry angle, focal spot location, source-detector distance and source-pivot distance can produce several artifacts in the reconstructed volume. To accurately and efficiently calculate the location and angles of orientation of critical components of the system in DBT geometry, a suitable phantom is required. We have designed a calibration phantom for tomosynthesis and developed software for accurate measurement of the geometric parameters of a DBT system. These have been tested both by simulation and experiment. We will present estimates of the precision available with this technique for a prototype DBT system.
2007-02-01
determined by its neighbors' correspondence. Thus, the algorithm consists of four main steps: ICP registration of the base and nipple regions of the... the nipple and the base of the breast, as a location for accurately determining initial correspondence. However, due to the compression, the nipple of... cloud) is translated and lies at a different angle than the nipple of the pendant breast (the source point cloud). By minimizing the average distance
Zhao, Jing-Xin; Su, Xiu-Yun; Zhao, Zhe; Xiao, Ruo-Xiu; Zhang, Li-Cheng; Tang, Pei-Fu
2018-02-17
The aim of this study is to demonstrate, using an accurate mathematical model, how radiographic angles vary with the three-dimensional (3D) orientation and location of the acetabular cup. A cone model is established to describe the quantitative relationship between the opening circle of the cup and its elliptical projection on the radiograph, from which the behavior of the two-dimensional (2D) radiographic anteversion (RA) and inclination (RI) angles can be analyzed. When the centre of the cup is located above the X-ray source, with proper 3D RI/RA angles, the 2D RA angle can be equal to its 3D counterpart, and the 2D RI angle is usually greater than its 3D counterpart. Except for the original point on a hip-centered anterior-posterior radiograph, there is no area on the radiograph where both the 2D RA and RI angles are equal to their 3D counterparts simultaneously. This study proposes an innovative model that accurately explains how the 2D RA/RI angles of the cup vary with its 3D RA/RI angles and location. The analysis results give clinicians an intuitive sense of when post-operative 2D RA/RI angles are greater or smaller than their 3D counterparts. The established model may allow determining the effects of pelvic rotations on the 2D radiographic angles of the cup.
NASA Technical Reports Server (NTRS)
Stuart, J. R.
1984-01-01
The evolution of NASA's planetary navigation techniques is traced, and radiometric and optical data types are described. Doppler navigation; the Deep Space Network; differenced two-way range techniques; differential very long baseline interferometry; and optical navigation are treated. The Doppler system enables a spacecraft in cruise at high absolute declination to be located within a total angular uncertainty of 1/4 microrad. The two-station range measurement provides a 1 microrad backup at low declinations. Optical data locate the spacecraft relative to the target to an angular accuracy of 5 microrad. Earth-based radio navigation and its less accurate but target-relative counterpart, optical navigation, thus form complementary measurement sources, which provide a powerful sensory system to produce high-precision orbit estimates.
Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu
2015-02-01
To develop a more precise and accurate method for locating acupoints, and to identify a procedure for measuring whether an acupoint has been correctly located. On the face, we used acupoint locations from different acupuncture experts and obtained the most precise and accurate acupoint location values based on a consistency information fusion algorithm, through a virtual simulation of the facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, each expert's systematic error enters the general weight calculation. First, we corrected each expert's own systematic error in acupoint location, so as to obtain a rational quantification of each expert's consistency support degree for acupoint location and pointwise variable-precision fusion results, raising the fusion of every expert's acupoint location error to pointwise variable precision. We could then make more effective use of the measured characteristics of the different experts' acupoint locations, improving the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency-matrix pointwise fusion method to the acupuncture experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.
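The consistency-weighted fusion idea can be sketched with invented expert data; the reciprocal-scatter weight below is only a simple stand-in for the paper's consistency-matrix support degrees:

```python
import numpy as np

# Hypothetical data: each row holds one expert's repeated locations
# (mm along one facial axis) of the same acupoint.
experts = np.array([
    [10.2, 10.1, 10.3, 10.2],   # consistent expert
    [11.0, 10.8, 11.2, 11.0],   # consistent but offset
    [ 9.0, 11.5, 10.5,  9.8],   # inconsistent expert
])

means = experts.mean(axis=1)
var = experts.var(axis=1, ddof=1)

# Inverse-variance weights: an expert's "support degree" here is the
# reciprocal of their own scatter, so consistent experts dominate
# the fused estimate.
w = (1.0 / var) / (1.0 / var).sum()
fused = float(w @ means)
print(w, fused)
```

The inconsistent third expert receives a weight near zero, so the fused location sits close to the consistent experts' reports.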
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
2018-06-01
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual estimates of the distance to the visual marker, which were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the two methods interleaved, showing a weak, but complex, mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach that combines a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach; these were consistent with other research findings and were validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data can accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
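The noise normalization that makes EEG and MEG jointly usable can be sketched as follows. The lead fields, noise levels, and source configuration are invented, and a ridge inverse replaces the paper's VB-SCCD sparse solver; only the unit-free stacking step is the point here:

```python
import numpy as np

rng = np.random.default_rng(4)
n_src = 40
L_eeg = rng.normal(size=(32, n_src))      # hypothetical EEG lead field
L_meg = rng.normal(size=(64, n_src))      # hypothetical MEG lead field
x = np.zeros(n_src)
x[[5, 17]] = [2.0, -1.5]                  # sparse source activity

sig_e, sig_m = 0.2, 5.0                   # very different noise scales
y_eeg = L_eeg @ x + rng.normal(0, sig_e, 32)
y_meg = L_meg @ x + rng.normal(0, sig_m, 64)

# Dividing each modality by its own noise level makes the rows
# unit-free and comparable, so EEG and MEG can be stacked into a
# single inverse problem without one dominating the other.
A = np.vstack([L_eeg / sig_e, L_meg / sig_m])
b = np.concatenate([y_eeg / sig_e, y_meg / sig_m])
lam = 1.0
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_src), A.T @ b)
print(x_hat[[5, 17]])   # should recover ≈ [2.0, -1.5]
```

Without the normalization, the large-magnitude MEG rows would dominate the least-squares fit even though they are, per unit, noisier.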
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Woods, B. B.; Thio, H. K.
Regional crustal waveguide calibration is essential to the retrieval of source parameters and the location of smaller (M<4.8) seismic events. This path calibration of regional seismic phases is strongly dependent on the accuracy of the hypocentral locations of calibration (or master) events. This information can be difficult to obtain, especially for smaller events. Generally, explosion- or quarry-blast-generated travel-time data with known locations and origin times are useful for developing the path calibration parameters, but in many regions such data sets are scanty or do not exist. We present a method that is useful for regional path calibration independent of such data, i.e. with earthquakes, which is applicable for events down to Mw = 4 and which has successfully been applied in India, central Asia, the western Mediterranean, North Africa, Tibet and the former Soviet Union. These studies suggest that reliably determining depth is essential to establishing accurate epicentral locations and origin times for events. We find that the error in source depth does not necessarily trade off only with the origin time for events with poor azimuthal coverage, but with the horizontal location as well, thus resulting in poor epicentral locations. For example, hypocenters for some events in central Asia were found to move from their fixed-depth locations by about 20 km. Such errors in location and depth will propagate into path calibration parameters, particularly with respect to travel times. The modeling of teleseismic depth phases (pP, sP) yields accurate depths for earthquakes down to magnitude Mw = 4.7. This Mw threshold can be lowered to four if regional seismograms are used in conjunction with a calibrated velocity structure model to determine depth, with the relative amplitude of the Pnl waves to the surface waves and the interaction of regional sPmP and pPmP phases being good indicators of event depths.
We also found that for deep events a seismic phase that follows an S-wave path to the surface and becomes critical, developing a head wave by S-to-P conversion, is also indicative of depth. The detailed character of this phase is controlled by the crustal waveguide. The key to calibrating regionalized crustal velocity structure is to determine depths for a set of master events by applying the above methods and then modeling characteristic features recorded on the regional waveforms. The regionalization scheme can also incorporate mixed-path crustal waveguide models for cases in which seismic waves traverse two or more distinctly different crustal structures. We also demonstrate that once depths are established, we need only two-station travel-time data to obtain reliable epicentral locations using a new adaptive grid-search technique, which yields locations similar to those determined using travel-time data from local seismic networks with better azimuthal coverage.
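The two-station adaptive grid search can be sketched as follows, under invented geometry, a uniform-velocity crust, a known origin time, and a fixed depth; the restricted search region stands in for whatever regional prior breaks the mirror ambiguity inherent in two-station data:

```python
import numpy as np

v, depth = 6.0, 10.0          # km/s crustal speed; depth fixed a priori
stations = np.array([[0.0, 0.0], [120.0, 30.0]])
true_epi = np.array([60.0, 80.0])

def ttimes(epi):
    d = np.sqrt(np.sum((stations - epi) ** 2, axis=1) + depth ** 2)
    return d / v

obs = ttimes(true_epi)        # "observed" travel times at the two stations

# Adaptive grid search: evaluate a coarse grid, then recentre and
# shrink the grid around the best-fitting node.  Restricting y >= 0
# encodes the regional prior that rejects the mirror solution on the
# other side of the two-station baseline.
center, half = np.array([75.0, 75.0]), 75.0
for level in range(6):
    xs = np.linspace(center[0] - half, center[0] + half, 15)
    ys = np.linspace(max(center[1] - half, 0.0), center[1] + half, 15)
    grid = np.array([(x, y) for x in xs for y in ys])
    mis = [np.sum((ttimes(g) - obs) ** 2) for g in grid]
    center = grid[int(np.argmin(mis))]
    half /= 3.0
print(center)   # ≈ (60, 80)
```

Each refinement shrinks the search cell by a factor of three while keeping the previous best node well inside the new grid, so the estimate converges geometrically.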
Spatiotemporal patterns of ERP based on combined ICA-LORETA analysis
NASA Astrophysics Data System (ADS)
Zhang, Jiacai; Guo, Taomei; Xu, Yaqin; Zhao, Xiaojie; Yao, Li
2007-03-01
In contrast to the fMRI methods widely used up to now, this method tries to map accurately the spatiotemporal patterns of activity of large neuronal populations in the human brain, and thereby to understand more profoundly how brain systems work during a sentence-processing task, from the analysis of ERP data recorded on the scalp. In this study, an event-related brain potential (ERP) paradigm recording the on-line responses to sentence processing is chosen as an example. In order both to exploit the millisecond temporal resolution of ERPs and to overcome their insensitivity to the cerebral location of sources, we separate these sources in space and time with a combined method of independent component analysis (ICA) and low-resolution tomography (LORETA). ICA blindly separates the input ERP data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. The spatial map associated with each ICA component is then analyzed with LORETA to locate its cerebral sources throughout the full brain, under the assumption that neighboring neurons are simultaneously and synchronously activated. Our results show that the cerebral computation underlying content-word reading is mediated by the orchestrated activity of several spatially distributed brain sources located in temporal, frontal, and parietal areas, which activate at distinct time intervals and group into different statistically independent components. ICA-LORETA analysis thus provides an encouraging and effective method for studying brain dynamics from ERP.
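The ICA stage can be illustrated on toy data (the LORETA localization step is not shown). The two synthetic "sources," the 2×2 mixing matrix standing in for fixed scalp maps, and the tanh contrast are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 8, 4000)
S = np.vstack([np.sin(7 * t), np.sign(np.sin(17 * t))])  # independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                   # fixed "scalp maps"
X = A @ S                                                # observed mixtures

# Whiten the mixtures: Z has identity covariance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E / np.sqrt(d)).T @ Xc

# FastICA with a tanh contrast and symmetric decorrelation.
W = rng.normal(size=(2, 2))
for _ in range(100):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / Z.shape[1] - (1 - G ** 2).mean(axis=1)[:, None] * W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                   # W <- (W W^T)^(-1/2) W
recovered = W @ Z                # sources, up to permutation and sign
print(np.abs(np.corrcoef(np.vstack([recovered, S]))[:2, 2:]).round(2))
```

In the ERP setting, each row of the unmixing result is a temporally independent component whose associated spatial map would then be passed to LORETA for localization.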
Acoustic Network Localization and Interpretation of Infrasonic Pulses from Lightning
NASA Astrophysics Data System (ADS)
Arechiga, R. O.; Johnson, J. B.; Badillo, E.; Michnovicz, J. C.; Thomas, R. J.; Edens, H. E.; Rison, W.
2011-12-01
We improve on the localization accuracy of thunder sources and identify infrasonic pulses that are correlated across a network of acoustic arrays. We attribute these pulses to electrostatic charge relaxation (collapse of the electric field) and attempt to model their spatial extent and acoustic source strength. Toward this objective we have developed a single audio range (20-15,000 Hz) acoustic array and a 4-station network of broadband (0.01-500 Hz) microphone arrays with aperture of ~45 m. The network has an aperture of 1700 m and was installed during the summers of 2009-2011 in the Magdalena mountains of New Mexico, an area that is subject to frequent lightning activity. We are exploring a new technique based on inverse theory that integrates information from the audio range and the network of broadband acoustic arrays to locate thunder sources more accurately than can be achieved with a single array. We evaluate the performance of the technique by comparing the location of thunder sources with RF sources located by the lightning mapping array (LMA) of Langmuir Laboratory at New Mexico Tech. We will show results of this technique for lightning flashes that occurred in the vicinity of our network of acoustic arrays and over the LMA. We will use acoustic network detection of infrasonic pulses together with LMA data and electric field measurements to estimate the spatial distribution of the charge (within the cloud) that is used to produce a lightning flash, and will try to quantify volumetric charges (charge magnitude) within clouds.
Interference effects in phased beam tracing using exact half-space solutions.
Boucher, Matthew A; Pluymers, Bert; Desmet, Wim
2016-12-01
Geometrical acoustics provides a correct solution to the wave equation for rectangular rooms with rigid boundaries and is an accurate approximation at high frequencies with nearly hard walls. When interference effects are important, phased geometrical acoustics is employed in order to account for phase shifts due to propagation and reflection. Error increases, however, with more absorption, complex impedance values, grazing incidence, smaller volumes and lower frequencies. Replacing the plane wave reflection coefficient with a spherical one reduces the error but results in slower convergence. Frequency-dependent stopping criteria are then applied to avoid calculating higher order reflections for frequencies that have already converged. Exact half-space solutions are used to derive two additional spherical wave reflection coefficients: (i) the Sommerfeld integral, consisting of a plane wave decomposition of a point source and (ii) a line of image sources located at complex coordinates. Phased beam tracing using exact half-space solutions agrees well with the finite element method for rectangular rooms with absorbing boundaries, at low frequencies and for rooms with different aspect ratios. Results are accurate even for long source-to-receiver distances. Finally, the crossover frequency between the plane and spherical wave reflection coefficients is discussed.
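The image-source construction with a plane-wave reflection coefficient can be sketched numerically. All values are assumed (a 500 Hz source, a locally reacting surface of normalized impedance Zs, free-field monopole Green's functions); the paper's point is that replacing this R with a spherical-wave coefficient or the exact Sommerfeld integral removes the error the plane-wave form makes near grazing incidence:

```python
import numpy as np

k = 2 * np.pi * 500 / 343.0       # wavenumber at 500 Hz in air
Zs = 2.0 - 1.0j                   # normalized surface impedance (assumed)

def half_space_pressure(src, rcv):
    # Direct ray plus one reflected ray from the image source,
    # weighted by the plane-wave reflection coefficient.
    img = np.array([src[0], src[1], -src[2]])     # mirror in the z=0 plane
    r1 = np.linalg.norm(rcv - src)                # direct path length
    r2 = np.linalg.norm(rcv - img)                # reflected path length
    cos_th = (rcv[2] + src[2]) / r2               # cosine of incidence angle
    R = (Zs * cos_th - 1) / (Zs * cos_th + 1)     # plane-wave coefficient
    return np.exp(1j * k * r1) / r1 + R * np.exp(1j * k * r2) / r2

p = half_space_pressure(np.array([0.0, 0.0, 1.0]), np.array([5.0, 0.0, 1.2]))
print(abs(p))
```

As |Zs| grows the coefficient tends to +1 (rigid surface) and the model reduces to two in-phase monopoles, which is the exact geometrical-acoustics limit mentioned above.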
Enhance the Value of a Research Paper: Choosing the Right References and Writing them Accurately.
Bavdekar, Sandeep B
2016-03-01
References help readers identify and locate sources used for justifying the need for conducting the research study, verify methods employed in the study and for discussing the interpretation of results and implications of the study. It is extremely essential that references are accurate and complete. This article provides suggestions regarding choosing references and writing the reference list. References are a list of sources selected by authors to represent the best documents concerning the research study [1]. They constitute the foundation of any research paper. Although generally written towards the end of the article-writing process, they are nevertheless extremely important. They provide the context for the hypothesis and help justify the need for conducting the research study. Authors use references to inform readers about the techniques used for conducting the study and convince them about the appropriateness of the methodology used. References help provide the appropriate perspective in which the research findings should be seen and interpreted. This communication will discuss the purpose of citations, how to select quality sources for citing and the importance of accuracy while writing the reference list. © Journal of the Association of Physicians of India 2011.
Methane Leak Detection and Emissions Quantification with UAVs
NASA Astrophysics Data System (ADS)
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, `drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to source (due to limited range) and often lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position (`leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from source, and the non-Gaussian shape of close range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
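The forward-model fit in (i) can be sketched as follows. The plume parameterization (linear dispersion coefficients rather than a stability-class fit), the wind speed, the traverse geometry, the noise level, and the brute-force grid search standing in for the iterative fit are all invented for illustration:

```python
import numpy as np

u = 3.0                      # wind speed (m/s), along +x (assumed known)

def plume(q, x0, y0, X, Y):
    # Ground-level Gaussian plume from a ground source at (x0, y0).
    dx = X - x0
    c = np.zeros_like(dx, dtype=float)
    ok = dx > 0                            # only defined downwind
    sy = 0.1 * dx[ok]                      # toy dispersion coefficients
    sz = 0.1 * dx[ok]
    c[ok] = q / (np.pi * u * sy * sz) * np.exp(
        -((Y - y0)[ok] ** 2) / (2 * sy ** 2))
    return c

# Synthetic traverse 40 m downwind of a leak at (0, 5) emitting 2 g/s.
rng = np.random.default_rng(1)
X = np.full(41, 40.0)
Y = np.linspace(-30, 30, 41)
obs = plume(2.0, 0.0, 5.0, X, Y) * (1 + 0.05 * rng.normal(size=41))

# "Iterative fit" by brute-force search over position and rate: the
# crosswind width constrains the downwind distance, the peak
# position constrains y0, and the amplitude constrains q.
best = None
for x0 in np.linspace(-20, 20, 41):
    for y0 in np.linspace(-20, 20, 41):
        for q in np.linspace(0.5, 4.0, 36):
            r = np.sum((plume(q, x0, y0, X, Y) - obs) ** 2)
            if best is None or r < best[0]:
                best = (r, x0, y0, q)
print(best[1:])   # ≈ (0, 5, 2)
```

The same fit degrades exactly as the abstract warns: with noisy wind direction or a strongly non-Gaussian close-range plume, the width-distance constraint weakens and the recovered (x0, q) pair becomes poorly determined.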
NASA Technical Reports Server (NTRS)
Williams, A. C.; Elsner, R. F.; Weisskopf, M. C.; Darbro, W.
1984-01-01
It is shown in this work how to obtain the probabilities of photons escaping from a cold electron plasma environment after having undergone an arbitrary number of scatterings. This is done by retaining the exact differential cross section for Thomson scattering, as opposed to using its polarization- and angle-averaged form. The results are given in the form of recursion relations. The geometry used is the semi-infinite plane-parallel geometry with a photon source located on a plane at an arbitrary optical depth below the surface. Analytical expressions are given for the probabilities which are accurate over a wide range of initial optical depth. These results can be used to model compact galactic X-ray sources which are surrounded by an electron-rich plasma.
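The same escape-probability question can be explored by Monte Carlo, with isotropic scattering substituted for the exact Thomson phase function the paper retains, and an assumed source depth of tau0 = 1 (both simplifications, so the numbers only qualitatively track the analytical recursion relations):

```python
import numpy as np

rng = np.random.default_rng(2)

def escape_probs(tau0, n_photons=20000, max_scat=20):
    # P(escape after exactly n scatterings) for an isotropic source
    # at optical depth tau0 in a semi-infinite plane-parallel slab.
    counts = np.zeros(max_scat + 1)
    for _ in range(n_photons):
        tau = tau0
        mu = rng.uniform(-1.0, 1.0)       # emission direction cosine
        for n in range(max_scat + 1):
            step = rng.exponential()       # optical path length to next event
            tau = tau - mu * step          # mu > 0 heads toward the surface
            if tau < 0.0:                  # escaped after n scatterings
                counts[n] += 1
                break
            mu = rng.uniform(-1.0, 1.0)    # isotropic re-scatter
    return counts / n_photons

p = escape_probs(tau0=1.0)
print(p[:4], p.sum())   # direct-escape probability first; sum < 1
```

The direct-escape term (n = 0) has the closed form (1/2)E2(tau0), which gives roughly 0.07 at tau0 = 1 and serves as a sanity check on the simulation; the sum stays below 1 because some photons have not escaped within the scattering cap.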
Uncertainty, variability, and earthquake physics in ground‐motion prediction equations
Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.
2017-01-01
Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 1.15≤M≤3 earthquakes and their peak ground accelerations (PGAs), recorded at close distances (R≤20 km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
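The residual partitioning can be illustrated on synthetic residuals. The group-mean extraction below is a simple moment-based stand-in for the mixed-effects regressions used in practice, and all record counts and standard deviations are invented (loosely echoing the ~0.97 to ~0.44 reduction reported above):

```python
import numpy as np

rng = np.random.default_rng(3)
n_rec, n_eq, n_st = 4000, 200, 20
eq = rng.integers(0, n_eq, n_rec)      # event index of each record
st = rng.integers(0, n_st, n_rec)      # station index of each record

# Synthetic total residuals (ln units): repeatable event and site
# terms plus aleatory scatter, i.e. delta = dB_e + dS_s + eps.
dB = rng.normal(0, 0.5, n_eq)
dS = rng.normal(0, 0.4, n_st)
resid = dB[eq] + dS[st] + rng.normal(0, 0.45, n_rec)

def group_mean(idx, val, n):
    cnt = np.maximum(np.bincount(idx, minlength=n), 1)
    return np.bincount(idx, val, minlength=n) / cnt

# Repeatable event term = each event's mean residual; site term =
# mean event-corrected residual at each station.
ev_term = group_mean(eq, resid, n_eq)
r1 = resid - ev_term[eq]
st_term = group_mean(st, r1, n_st)
r2 = r1 - st_term[st]
print(round(resid.std(), 2), round(r2.std(), 2))  # ergodic vs nonergodic sigma
```

Removing the repeatable terms leaves only the aleatory component, which is the reduced single-source/single-site sigma that lowers the hazard estimate at small probabilities of exceedance.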
The Scaling of Broadband Shock-Associated Noise with Increasing Temperature
NASA Technical Reports Server (NTRS)
Miller, Steven A.
2012-01-01
A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. For this purpose the acoustic analogy of Morris and Miller is examined. To isolate the relevant physics, the scaling of BBSAN at the peak intensity level at the sideline (psi = 90 degrees) observer location is examined. Scaling terms are isolated from the acoustic analogy and the result is compared using a convergent nozzle with the experiments of Bridges and Brown and using a convergent-divergent nozzle with the experiments of Kuo, McLaughlin, and Morris at four nozzle pressure ratios in increments of total temperature ratios from one to four. The equivalent source within the framework of the acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source combined with accurate calculations of the propagation of sound through the jet shear layer, using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for the accurate saturation of BBSAN with increasing stagnation temperature. This is a minor change to the source model relative to the previously developed models. The full development of the scaling term is shown. The sources and vector Green's function solver are informed by steady Reynolds-Averaged Navier-Stokes solutions. These solutions are examined as a function of stagnation temperature at the first shock wave shear layer interaction.
It is discovered that saturation of BBSAN with increasing jet stagnation temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.
NASA Astrophysics Data System (ADS)
Xie, J.; Ni, S.; Chu, R.; Xia, Y.
2017-12-01
An accurate seismometer clock plays an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 second, especially in the early days of the global seismic network. The 26 s Persistent Localized (PL) microseism event in the Gulf of Guinea sometimes excites strong and coherent signals, and can be used as a repeating source for assessing the stability of seismometer clocks. Taking station GSC/TS in southern California, USA as an example, the 26 s PL signal can be easily observed in the ambient Noise Cross-correlation Function (NCF) between GSC/TS and a remote station. The variation of the travel time of this 26 s signal in the NCF is used to infer clock error. A large clock error is detected during June 1992. This short-term clock error is confirmed by both teleseismic and local earthquake records, with a magnitude of ±25 s. Using the 26 s PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual NCF of short-period microseisms (<20 s) might be less effective due to attenuation over long interstation distances. However, this method suffers from a cycle-ambiguity problem and should be verified against teleseismic/local P waves. Since a location change of the 26 s PL source may influence the measured clock drift, we use regional stations with stable clocks to estimate the possible location change of the source.
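A minimal sketch of the travel-time-shift measurement described above: band-pass two NCFs around the 26 s period and read the clock error from the peak of their cross-correlation. The function name, argument layout, and pass band are illustrative assumptions, and the result is inherently ambiguous modulo the 26 s period, which is the cycling problem the abstract warns about.

```python
import numpy as np

def clock_drift_from_pl_signal(ncf_ref, ncf_day, dt, band=(1/28.0, 1/24.0)):
    """Estimate a station clock error (s) from the travel-time shift of
    the ~26 s persistent-localized signal between a reference NCF and a
    daily NCF.  Band-passes both via an FFT mask around the 26 s period,
    then locates the peak of their cross-correlation.
    """
    n = len(ncf_ref)
    freqs = np.fft.rfftfreq(n, dt)
    mask = (freqs >= band[0]) & (freqs <= band[1])

    def bandpass(x):
        X = np.fft.rfft(x)
        X[~mask] = 0.0                      # zero-phase frequency mask
        return np.fft.irfft(X, n)

    a, b = bandpass(ncf_ref), bandpass(ncf_day)
    cc = np.correlate(b, a, mode='full')    # positive lag => b delayed
    lag = (np.argmax(cc) - (n - 1)) * dt
    return lag
```

Because both NCFs pass through the same zero-phase filter, the relative lag is preserved; only the ambiguity over multiples of 26 s must be resolved with independent P-wave checks.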
Veira, Andreas; Jackson, Peter L; Ainslie, Bruce; Fudge, Dennis
2013-07-01
This study investigates the development and application of a simple method to calculate annual and seasonal PM2.5 and PM10 background concentrations in small cities and rural areas. The Low Pollution Sectors and Conditions (LPSC) method is based on existing measured long-term data sets and is designed for locations where particulate matter (PM) monitors are only influenced by local anthropogenic emission sources from particular wind sectors. The LPSC method combines the analysis of measured hourly meteorological data, PM concentrations, and geographical emission source distributions. PM background levels emerge from measured data for specific wind conditions, where air parcel trajectories measured at a monitoring station are assumed to have passed over geographic sectors with negligible local emissions. Seasonal and annual background levels were estimated for two monitoring stations in Prince George, Canada, and the method was also applied to four other small cities (Burns Lake, Houston, Quesnel, Smithers) in northern British Columbia. The analysis showed reasonable background concentrations for both monitoring stations in Prince George, whereas annual PM10 background concentrations at two of the other locations and PM2.5 background concentrations at one other location were implausibly high. For those locations where the LPSC method was successful, annual background levels ranged between 1.8 +/- 0.1 microg/m3 and 2.5 +/- 0.1 microg/m3 for PM2.5 and between 6.3 +/- 0.3 microg/m3 and 8.5 +/- 0.3 microg/m3 for PM10. Precipitation effects and patterns of seasonal variability in the estimated background concentrations were detectable for all locations where the method was successful. Overall the method was dependent on the configuration of local geography and sources with respect to the monitoring location, and may fail at some locations and under some conditions. 
Where applicable, the LPSC method can provide a fast and cost-efficient way to estimate background PM concentrations for small cities in sparsely populated regions like northern British Columbia. In rural areas like northern British Columbia, particulate matter (PM) monitoring stations are usually located close to emission sources and residential areas in order to assess the PM impact on human health. Thus there is a lack of accurate PM background concentration data that represent PM ambient concentrations in the absence of local emissions. The background calculation method developed in this study uses observed meteorological data as well as local source emission locations and provides annual, seasonal and precipitation-related PM background concentrations that are comparable to literature values for four out of six monitoring stations.
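The core of the LPSC idea, averaging PM concentrations only over hours whose wind arrives from low-emission sectors, can be sketched as follows. The tuple layout, sector convention, and calm-hour threshold are illustrative assumptions, not the study's actual data format.

```python
def lpsc_background(hours, clean_sectors, min_speed=0.5):
    """Estimate a PM background concentration in the spirit of the Low
    Pollution Sectors and Conditions (LPSC) method: average only hours
    whose wind arrives from sectors with negligible local emissions.

    `hours` is a list of (wind_dir_deg, wind_speed, pm_conc) tuples and
    `clean_sectors` a list of (start_deg, end_deg) wedges.
    """
    def in_sector(d, lo, hi):
        # Handle wedges that wrap through 360 degrees (e.g. 350-20)
        return lo <= d <= hi if lo <= hi else (d >= lo or d <= hi)

    vals = [pm for d, s, pm in hours
            if s >= min_speed                      # exclude calm hours
            and any(in_sector(d, lo, hi) for lo, hi in clean_sectors)]
    if not vals:
        raise ValueError("no hours match the clean-sector conditions")
    return sum(vals) / len(vals)
```

As the abstract notes, the approach fails where no wind sector is genuinely free of local sources; the `ValueError` branch is one way to surface that condition.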
Multi-modal molecular diffuse optical tomography system for small animal imaging
Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid
2013-01-01
A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977
Real-time flood extent maps based on social media
NASA Astrophysics Data System (ADS)
Eilander, Dirk; van Loenen, Arnejan; Roskam, Ruud; Wagemaker, Jurjen
2015-04-01
During a flood event it is often difficult to get accurate information about the flood extent and the people affected. This information is very important for disaster risk reduction management and crisis relief organizations. In the post-flood phase, information about the flood extent is needed for damage estimation and calibrating hydrodynamic models. Currently, flood extent maps are derived from a few sources such as satellite images, aerial images and post-flooding flood marks. However, getting accurate real-time or maximum flood extent maps remains difficult. With the rise of social media, we now have a new source of information with large numbers of observations. In the city of Jakarta, Indonesia, the intensity of unique flood-related tweets during a flood event peaked at 8 tweets per second during floods in early 2014. A fair amount of these tweets also contains observations of water depth and location. Our hypothesis is that based on the large numbers of tweets it is possible to generate real-time flood extent maps. In this study we use tweets from the city of Jakarta, Indonesia, to generate these flood extent maps. The data-mining procedure looks for tweets with a mention of 'banjir', the Bahasa Indonesia word for flood. It then removes modified and retweeted messages in order to keep unique tweets only. Since tweets are not always sent directly from the location of observation, the geotag in the tweets is unreliable. We therefore extract location information using mentions of names of neighborhoods and points of interest. Finally, where encountered, a mention of a length measure is extracted as water depth. These tweets containing a location reference and a water level are considered to be flood observations. The strength of this method is that it can easily be extended to other regions and languages. Based on the intensity of tweets in Jakarta during a flood event we can provide a rough estimate of the flood extent.
To provide more accurate flood extent information, we project the water depth observations in tweets on a digital elevation model using a flood-fill algorithm. Based on statistical methods we combine the large numbers of observations in order to create time series of flood extent maps. Early results indicate this method is very promising.
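The projection step above, turning one water-depth observation into an extent on a DEM, can be sketched with a standard breadth-first flood fill. This is a single-observation toy; the actual system combines many tweets statistically, and the grid layout here is an assumption.

```python
from collections import deque

def flood_extent(dem, seed, water_depth):
    """Flag cells connected to `seed` whose elevation lies below the
    local water surface (seed elevation plus observed water depth).

    `dem` is a 2D list of elevations, `seed` an (row, col) cell holding
    the water-depth observation.
    """
    rows, cols = len(dem), len(dem[0])
    r0, c0 = seed
    level = dem[r0][c0] + water_depth          # water surface elevation
    flooded = [[False] * cols for _ in range(rows)]
    flooded[r0][c0] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not flooded[nr][nc] and dem[nr][nc] < level):
                flooded[nr][nc] = True
                q.append((nr, nc))
    return flooded
```

Restricting the fill to cells connected to the observation point prevents hydraulically disconnected low areas from being flagged as flooded.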
Helicopter magnetic survey conducted to locate wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veloski, G.A.; Hammack, R.W.; Stamp, V.
2008-07-01
A helicopter magnetic survey was conducted in August 2007 over 15.6 sq mi at the Naval Petroleum Reserve No. 3’s (NPR-3) Teapot Dome Field near Casper, Wyoming. The survey’s purpose was to accurately locate wells drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood for EOR, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells missing from the well database and to provide accurate locations for all wells. The ability of the helicopter magnetic survey to accurately locate wells was assessed by comparing airborne well picks with well locations from an intense ground search of a small test area.
2015-09-30
An experiment was conducted in Broad Sound of Massachusetts Bay using the AUV Unicorn, a 147 dB omnidirectional Lubell source, and an open-ended steel pipe... The steel pipe target (Figure C) was dropped at an approximate local coordinate position of (x, y) = (170, 155). The location was estimated using the ship's position when the target was dropped, but was only accurate to within 10-15 m. The orientation of the target was unknown. Figure C: Open-ended steel...
The spin-down rate of Swift J1822.3-1606 finally measured: confirmation as magnetar
NASA Astrophysics Data System (ADS)
Kuiper, L.; Hermsen, W.
2011-09-01
Data from monitoring observations of magnetar candidate Swift J1822.3-1606 with RXTE PCA, covering a time span of about 10 weeks (MJD 55758-55826) since its discovery on July 14, 2011 (ATEL #3488; GCN #12159), have been used to construct an accurate phase-coherent timing solution. Barycentered pulse times of arrival (ToAs; see ATEL #3493 for the adopted source location) have been obtained by a cross-correlation method with a high-statistics pulse-profile template.
Accurate visible speech synthesis based on concatenating variable length motion capture data.
Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara
2006-01-01
We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations are conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.
Three-Dimensional Innervation Zone Imaging from Multi-Channel Surface EMG Recordings.
Liu, Yang; Ning, Yong; Li, Sheng; Zhou, Ping; Rymer, William Z; Zhang, Yingchun
2015-09-01
There is an unmet need to accurately identify the locations of innervation zones (IZs) of spastic muscles, so as to guide botulinum toxin (BTX) injections for the best clinical outcome. A novel 3D IZ imaging (3DIZI) approach was developed by combining the bioelectrical source imaging and surface electromyogram (EMG) decomposition methods to image the 3D distribution of IZs in the target muscles. Surface IZ locations of motor units (MUs), identified from the bipolar map of their MU action potentials (MUAPs) were employed as a prior knowledge in the 3DIZI approach to improve its imaging accuracy. The performance of the 3DIZI approach was first optimized and evaluated via a series of designed computer simulations, and then validated with the intramuscular EMG data, together with simultaneously recorded 128-channel surface EMG data from the biceps of two subjects. Both simulation and experimental validation results demonstrate the high performance of the 3DIZI approach in accurately reconstructing the distributions of IZs and the dynamic propagation of internal muscle activities in the biceps from high-density surface EMG recordings.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created so that the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing imaging processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiatt, JR; Rivard, MJ
2014-06-01
Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings were used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose rate distribution in the transverse plane did not change by more than 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm from the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.
NASA Astrophysics Data System (ADS)
Verlinden, Christopher M.
Controlled acoustic sources have typically been used for imaging the ocean. These sources can either be used to locate objects or characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS) which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.
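The localization step described above, matching a target vessel's cross-correlation function against a library of data-derived replicas rather than modeled ones, can be sketched as a normalized inner-product search over a grid. The dictionary layout and function name are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def locate_by_replica_ccf(target_ccf, library):
    """Locate a source by comparing its measured cross-correlation
    function (CCF) against a library of data-derived replica CCFs,
    in the spirit of matched-field processing with measured replicas.

    `library` maps grid positions to replica CCFs.  Comparing
    normalized CCFs rather than raw signals suppresses the unknown
    source spectrum, as the abstract describes.
    """
    def norm(x):
        x = np.asarray(x, float)
        return x / (np.linalg.norm(x) + 1e-12)

    t = norm(target_ccf)
    scores = {pos: float(np.dot(t, norm(r))) for pos, r in library.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

In the dissertation the library is further extrapolated between ship tracks with waveguide-invariant theory; here the grid is assumed to be fully populated already.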
Acoustic emission testing on an F/A-18 E/F titanium bulkhead
NASA Astrophysics Data System (ADS)
Martin, Christopher A.; Van Way, Craig B.; Lockyer, Allen J.; Kudva, Jayanth N.; Ziola, Steve M.
1995-04-01
An important opportunity recently transpired at Northrop Grumman Corporation to instrument an F/A - 18 E/F titanium bulkhead with broad band acoustic emission sensors during a scheduled structural fatigue test. The overall intention of this effort was to investigate the potential for detecting crack propagation using acoustic transmission signals for a large structural component. Key areas of experimentation and experience included (1) acoustic noise characterization, (2) separation of crack signals from extraneous noise, (3) source location accuracy, and (4) methods of acoustic transducer attachment. Fatigue cracking was observed and monitored by strategically placed acoustic emission sensors. The outcome of the testing indicated that accurate source location still remains enigmatic for non-specialist engineering personnel especially at this level of structural complexity. However, contrary to preconceived expectations, crack events could be readily separated from extraneous noise. A further dividend from the investigation materialized in the form of close correspondence between frequency domain waveforms of the bulkhead test specimen tested and earlier work with thick plates.
Gamma Ray Bursts-Afterglows and Counterparts
NASA Technical Reports Server (NTRS)
Fishman, Gerald J
1998-01-01
Several breakthrough discoveries were made last year of x-ray, optical and radio afterglows and counterparts to gamma-ray bursts, and a redshift has been associated with at least one of these. These discoveries were made possible by the fast, accurate gamma-ray burst locations of the BeppoSAX satellite. It is now generally believed that the burst sources are at cosmological distances and that they represent the most powerful explosions in the Universe. These observations also open new possibilities for the study of early star formation, the physics of extreme conditions and perhaps even cosmology. This session will concentrate on recent x-ray, optical and radio afterglow observations of gamma-ray bursts, associated redshift measurements, and counterpart observations. Several review and theory talks will also be presented, along with a summary of the astrophysical implications of the observations. There will be additional poster contributions on observations of gamma-ray burst source locations at wavelengths other than gamma rays. Posters are also solicited that describe new observational capabilities for rapid follow-up observations of gamma-ray bursts.
NASA Astrophysics Data System (ADS)
Matsumoto, H.; Haralabus, G.; Zampolli, M.; Özel, N. M.
2016-12-01
Underwater acoustic signal waveforms recorded during the 2015 Chile earthquake (Mw 8.3) by the hydrophones of hydroacoustic station HA03, located at the Juan Fernandez Islands, are analyzed. HA03 is part of the Comprehensive Nuclear-Test-Ban Treaty International Monitoring System. The interest in the particular data set stems from the fact that HA03 is located only approximately 700 km SW from the epicenter of the earthquake. This makes it possible to study aspects of the signal associated with the tsunamigenic earthquake, which would be more difficult to detect had the hydrophones been located far from the source. The analysis shows that the direction of arrival of the T phase can be estimated by means of a three-step preprocessing technique which circumvents spatial aliasing caused by the hydrophone spacing, the latter being large compared to the wavelength. Following this preprocessing step, standard frequency-wave number analysis (F-K analysis) can accurately estimate back azimuth and slowness of T-phase signals. The data analysis also shows that the dispersive tsunami signals can be identified by the water-column hydrophones at the time when the tsunami surface gravity wave reaches the station.
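The back-azimuth and slowness estimation mentioned above can be illustrated with a time-domain delay-and-sum grid search over horizontal slowness vectors, a simplified stand-in for the paper's frequency-wavenumber (F-K) analysis and its anti-aliasing preprocessing. All names, units, and the back-azimuth convention are assumptions for this sketch.

```python
import numpy as np

def plane_wave_beamform(signals, coords, dt, slow_grid):
    """Grid search for the horizontal slowness vector that best aligns
    array recordings under a plane-wave model.

    `signals` is (nsta, nsamp), `coords` (nsta, 2) in metres, and
    `slow_grid` a list of (sx, sy) slownesses in s/m.
    """
    signals = [np.asarray(s, float) for s in signals]
    best, best_power = None, -np.inf
    for sx, sy in slow_grid:
        stack = np.zeros(len(signals[0]))
        for sig, (x, y) in zip(signals, coords):
            # Plane wave arrives at t0 + s.r; undo that delay (wraps at edges)
            shift = int(round((sx * x + sy * y) / dt))
            stack += np.roll(sig, -shift)
        power = float(np.sum(stack ** 2))
        if power > best_power:
            best, best_power = (sx, sy), power
    sx, sy = best
    # Back azimuth points from the array toward the source, i.e. along -s,
    # measured clockwise from north (x = east, y = north).
    back_azimuth = float(np.degrees(np.arctan2(-sx, -sy)) % 360.0)
    slowness = float(np.hypot(sx, sy))
    return back_azimuth, slowness
```

A real F-K implementation works per frequency band and must handle the spatial aliasing the paper's three-step preprocessing addresses; this sketch assumes an unaliased slowness grid.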
Newspaper archives + text mining = rich sources of historical geo-spatial data
NASA Astrophysics Data System (ADS)
Yzaguirre, A.; Smit, M.; Warren, R.
2016-04-01
Newspaper archives are rich sources of cultural, social, and historical information. These archives, even when digitized, are typically unstructured and organized by date rather than by subject or location, and require substantial manual effort to analyze. The effort of journalists to be accurate and precise means that there is often rich geo-spatial data embedded in the text, alongside text describing events that editors considered to be of sufficient importance to the region or the world to merit column inches. A regional newspaper can add over 100,000 articles to its database each year, and extracting information from this data for even a single country would pose a substantial Big Data challenge. In this paper, we describe a pilot study on the construction of a database of historical flood events (location(s), date, cause, magnitude) to be used in flood assessment projects, for example to calibrate models, estimate frequency, establish high water marks, or plan for future events in contexts ranging from urban planning to climate change adaptation. We then present a vision for extracting and using the rich geospatial data available in unstructured text archives, and suggest future avenues of research.
NASA Astrophysics Data System (ADS)
Beucler, E.; Haugmard, M.; Mocquet, A.
2016-12-01
The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and an a priori knowledge of the propagation medium. When only a small number of observations is available, for instance for moderate events, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte Carlo continuous samplings, using Markov chains, generate models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel-time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure, among others) are computed and explicitly documented. This method manages to decrease the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions on the hypocentral parameters. Our algorithm is successfully used to accurately locate events of the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
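The Markov-chain sampling of hypocentral parameters described above can be illustrated with a toy Metropolis sampler over (x, depth, origin time) under a fixed constant-velocity, straight-ray forward model. The real method also samples the velocity structure; the station layout, velocity, noise level, and all names here are assumptions.

```python
import math
import random

def locate_mcmc(stations, picks, v=6.0, n_iter=20000, sigma=0.05, seed=1):
    """Metropolis sampling of (epicentral x, depth z, origin time t0)
    for a 2-D toy problem with a constant velocity `v` (km/s).

    `stations` are surface x-positions (km), `picks` absolute P arrival
    times (s) with assumed Gaussian noise of std `sigma` (s).
    """
    rng = random.Random(seed)

    def log_like(x, z, t0):
        ll = 0.0
        for sx, tp in zip(stations, picks):
            tt = math.hypot(sx - x, z) / v      # straight-ray travel time
            ll -= (tp - (t0 + tt)) ** 2 / (2 * sigma ** 2)
        return ll

    x, z, t0 = 0.0, 10.0, 0.0                   # deliberately poor start
    ll = log_like(x, z, t0)
    samples = []
    for _ in range(n_iter):
        xp = x + rng.gauss(0, 1.0)
        zp = abs(z + rng.gauss(0, 1.0))         # reflect to keep depth >= 0
        tp0 = t0 + rng.gauss(0, 0.2)
        llp = log_like(xp, zp, tp0)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if llp - ll > math.log(rng.random() + 1e-300):
            x, z, t0, ll = xp, zp, tp0, llp
        samples.append((x, z, t0))
    return samples
```

The spread of the retained samples directly documents the posterior covariances between depth, origin time, and epicentre that the abstract emphasizes.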
Bauer, Timothy J
2013-06-15
The Jack Rabbit Test Program was sponsored in April and May 2010 by the Department of Homeland Security Transportation Security Administration to generate source data for large releases of chlorine and ammonia from transport tanks. In addition to a variety of data types measured at the release location, concentration versus time data was measured using sensors at distances up to 500 m from the tank. Release data were used to create accurate representations of the vapor flux versus time for the ten releases. This study was conducted to determine the importance of source terms and meteorological conditions in predicting downwind concentrations and the accuracy that can be obtained in those predictions. Each source representation was entered into an atmospheric transport and dispersion model using simplifying assumptions regarding the source characterization and meteorological conditions, and statistics for cloud duration and concentration at the sensor locations were calculated. A detailed characterization for one of the chlorine releases predicted 37% of concentration values within a factor of two, but cannot be considered representative of all the trials. Predictions of toxic effects at 200 m are relevant to incidents involving 1-ton chlorine tanks commonly used in parts of the United States and internationally. Published by Elsevier B.V.
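The "37% of concentration values within a factor of two" figure above is an instance of the standard FAC2 dispersion-model statistic, which can be computed as follows. Skipping non-positive pairs is a common convention but an assumption here.

```python
def fac2(predicted, observed):
    """Fraction of prediction/observation pairs within a factor of two,
    the FAC2 statistic commonly used to evaluate atmospheric transport
    and dispersion models.  Pairs with non-positive values are skipped.
    """
    pairs = [(p, o) for p, o in zip(predicted, observed) if p > 0 and o > 0]
    if not pairs:
        return 0.0
    hits = sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)
```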
Cluster-search based monitoring of local earthquakes in SeisComP3
NASA Astrophysics Data System (ADS)
Roessler, D.; Becker, J.; Ellguth, E.; Herrnkind, S.; Weber, B.; Henneberger, R.; Blanck, H.
2016-12-01
We present a new cluster-search based SeisComP3 module for locating local and regional earthquakes in real time. Real-time earthquake monitoring systems such as SeisComP3 provide the backbones for earthquake early warning (EEW), tsunami early warning (TEW) and the rapid assessment of natural and induced seismicity. For any earthquake monitoring system, fast and accurate event locations are fundamental, determining the reliability and the impact of further analysis. SeisComP3 in the OpenSource version includes a two-stage detector for picking P waves and a phase associator for locating earthquakes based on P-wave detections. scanloc is a more advanced earthquake location program developed by gempa GmbH with seamless integration into SeisComP3. scanloc performs an advanced cluster search to discriminate earthquakes occurring closely in space and time, and makes additional use of S-wave detections. It has proven to provide fast and accurate earthquake locations at local and regional distances, where it outperforms the base SeisComP3 tools. We demonstrate the performance of scanloc for monitoring induced seismicity as well as local and regional earthquakes in different tectonic regimes including subduction, spreading and intra-plate regions. In particular, we present examples and catalogs from real-time monitoring of earthquakes in Northern Chile based on data from the IPOC network of the GFZ German Research Centre for Geosciences for recent years. Depending on epicentral distance and data transmission, earthquake locations are available within a few seconds after origin time when using scanloc. The association of automatic S-wave detections provides a better constraint on focal depth.
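The cluster-search idea, separating earthquakes that occur closely in space and time by grouping picks before location, can be illustrated with a greedy space-time clustering. This is a toy illustration of the concept, not scanloc's actual algorithm; the windows and pick layout are assumptions.

```python
def cluster_picks(picks, t_win=10.0, d_win=50.0):
    """Greedy space-time cluster search: a pick joins an existing
    cluster if it is within `t_win` seconds and `d_win` km of any
    member, otherwise it seeds a new cluster (a candidate event).

    Each pick is (time_s, station_x_km, station_y_km).
    """
    clusters = []
    for t, x, y in sorted(picks):
        for cl in clusters:
            if any(abs(t - t2) <= t_win
                   and ((x - x2) ** 2 + (y - y2) ** 2) ** 0.5 <= d_win
                   for t2, x2, y2 in cl):
                cl.append((t, x, y))
                break
        else:
            clusters.append([(t, x, y)])
    return clusters
```

Each resulting cluster would then be passed to a locator; keeping picks from near-simultaneous events in separate clusters is what prevents mixed-phase, wildly wrong locations.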
Uranium decay daughters from isolated mines: Accumulation and sources.
Cuvier, A; Panza, F; Pourcelot, L; Foissard, B; Cagnat, X; Prunier, J; van Beek, P; Souhaut, M; Le Roux, G
2015-11-01
This study combines in situ gamma spectrometry performed at different scales in order to accurately locate the contamination pools, to identify the radionuclides concerned and to determine the distribution of the contaminants from the soil scale down to the bearing-phase scale. The potential mobility of several radionuclides is also evaluated using sequential extraction. Using this procedure, an accumulation area located downstream of a former French uranium mine and concentrating a significant fraction of the radioactivity is highlighted. We report disequilibria in the U-decay chains, which are likely related to the processes implemented at the mining site. Coupling mineralogical analyses with sequential extraction allows us to highlight the presence of barium sulfate, which may be the carrier of the Ra-226 activities found in the residual phase (Ba(Ra)SO4). In contrast, uranium is essentially in the reducible fraction and potentially trapped in clay-iron coatings located on the surface of minerals. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hayes, G.P.; Wald, D.J.
2009-01-01
A key step in many earthquake source inversions requires knowledge of the geometry of the fault surface on which the earthquake occurred. Our knowledge of this surface is often uncertain, however, and as a result fault geometry misinterpretation can map into significant error in the final temporal and spatial slip patterns of these inversions. Relying solely on an initial hypocentre and CMT mechanism can be problematic when establishing rupture characteristics needed for rapid tsunami and ground shaking estimates. Here, we attempt to improve the quality of fast finite-fault inversion results by combining several independent and complementary data sets to more accurately constrain the geometry of the seismic rupture plane of subducting slabs. Unlike previous analyses aimed at defining the general form of the plate interface, we require mechanisms and locations of the seismicity considered in our inversions to be consistent with their occurrence on the plate interface, by limiting events to those with well-constrained depths and with CMT solutions indicative of shallow-dip thrust faulting. We construct probability density functions about each location based on formal assumptions of their depth uncertainty and use these constraints to solve for the ‘most-likely’ fault plane. Examples are shown for the trench in the source region of the Mw 8.6 Southern Sumatra earthquake of March 2005, and for the Northern Chile Trench in the source region of the November 2007 Antofagasta earthquake. We also show examples using only the historic catalogues in regions without recent great earthquakes, such as the Japan and Kamchatka Trenches. In most cases, this method produces a fault plane that is more consistent with all of the data available than is the plane implied by the initial hypocentre and CMT mechanism. Using the aggregated data sets, we have developed an algorithm to rapidly determine more accurate initial fault plane geometries for source inversions of future earthquakes.
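The "most-likely fault plane" step can be illustrated, in a much-simplified form, as a least-squares plane fit to hypocentres weighted by their depth uncertainty. The function and weighting scheme below are illustrative assumptions, not the authors' actual probabilistic method:

```python
def fit_plane_weighted(points):
    """Weighted least-squares fit of z = a*x + b*y + c to hypocentres.

    points: list of (x, y, z, sigma_z). Weights are 1/sigma_z**2, so
    well-constrained depths dominate the fit -- a toy stand-in for the
    probability-density-function constraints described in the abstract.
    """
    # Accumulate the 3x3 weighted normal equations A m = b, m = (a, b, c).
    A = [[0.0] * 3 for _ in range(3)]
    bvec = [0.0] * 3
    for x, y, z, s in points:
        w = 1.0 / (s * s)
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += w * row[i] * row[j]
            bvec[i] += w * row[i] * z

    # Solve by Cramer's rule (adequate for a 3x3 system).
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    sol = []
    for k in range(3):
        Ak = [r[:] for r in A]
        for i in range(3):
            Ak[i][k] = bvec[i]
        sol.append(det3(Ak) / d)
    return tuple(sol)  # (a, b, c)

# Hypocentres lying exactly on z = 0.1*x + 5 are recovered exactly,
# whatever the depth uncertainties.
a, b, c = fit_plane_weighted([(0, 0, 5, 1), (10, 0, 6, 1),
                              (0, 10, 5, 2), (10, 10, 6, 2)])
```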
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the origin of atmospheric events at local, regional and global scales using seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL implementation is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals can be taken exactly and represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
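A toy version of the numerical baseline that the closed-form scheme accelerates is a grid search over candidate source positions with Gaussian arrival-time likelihoods. All names and parameter values below are invented; the real BISL additionally integrates over a celerity prior and includes picking-time and azimuth terms:

```python
import math

def locate(stations, arrivals, celerity=0.30, sigma=20.0, grid=None):
    """Toy grid-search localization with Gaussian arrival-time likelihoods.

    stations: list of (x_km, y_km); arrivals: observed times (s) for an
    assumed origin time t0 = 0; celerity in km/s. Returns the grid node
    with the highest summed log-likelihood.
    """
    if grid is None:
        grid = [(x, y) for x in range(0, 201, 5) for y in range(0, 201, 5)]
    best, best_ll = None, -math.inf
    for gx, gy in grid:
        ll = 0.0
        for (sx, sy), t in zip(stations, arrivals):
            pred = math.hypot(gx - sx, gy - sy) / celerity
            ll += -0.5 * ((t - pred) / sigma) ** 2
        if ll > best_ll:
            best, best_ll = (gx, gy), ll
    return best

stations = [(0, 0), (200, 0), (0, 200), (200, 200)]
src = (100, 50)
arrivals = [math.hypot(src[0] - sx, src[1] - sy) / 0.30 for sx, sy in stations]
estimate = locate(stations, arrivals)  # -> (100, 50)
```

The cost of evaluating such a likelihood sum at every grid node for every event is what makes a closed-form representation of the integrals attractive for real-time use.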
Distributed sensor network for local-area atmospheric modeling
NASA Astrophysics Data System (ADS)
French, Patrick D.; Lovell, John S.; Seaman, Nelson L.
2003-09-01
In the event of a Weapons of Mass Destruction (WMD) chemical or radiological release, quick identification of the nature and source of the release can support efforts to warn, protect and evacuate threatened populations downwind; mitigate the release; provide more accurate plume forecasting; and collect critical transient evidence to help identify the perpetrator(s). Although there are systems available to assist in tracking a WMD release and then predicting where a plume may be traveling, there are no reliable systems available to determine the source location of that release. This would typically require the timely deployment of a remote sensing capability, a grid of expendable air samplers, or a surface sampling plan if the plume has dissipated. Each of these typical solutions has major drawbacks (e.g. excessive cost, technical feasibility, or the time required). This paper presents data to support the use of existing rapid-response meteorological modeling coupled with existing transport and diffusion modeling, along with a prototype cost-effective situational awareness monitor, which would reduce the sensor network requirements while still accomplishing the overall mission of achieving a 95% probability of converging on a source location to within 100 meters.
Combined ICA-LORETA analysis of mismatch negativity.
Marco-Pallarés, J; Grau, C; Ruffini, G
2005-04-01
A major challenge for neuroscience is to map accurately the spatiotemporal patterns of activity of the large neuronal populations that are believed to underlie computing in the human brain. To study a specific example, we selected the mismatch negativity (MMN) brain wave (an event-related potential, ERP) because it gives an electrophysiological index of a "primitive intelligence" capable of detecting changes, even abstract ones, in a regular auditory pattern. ERPs have a temporal resolution of milliseconds but appear to result from mixed neuronal contributions whose spatial location is not fully understood. Thus, it is important to separate these sources in space and time. To tackle this problem, a two-step approach was designed combining the independent component analysis (ICA) and low-resolution tomography (LORETA) algorithms. Here we implement this approach to analyze the subsecond spatiotemporal dynamics of MMN cerebral sources using trial-by-trial experimental data. We show evidence that a cerebral computation mechanism underlies MMN. This mechanism is mediated by the orchestrated activity of several spatially distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and are grouped in six main statistically independent components.
NASA Technical Reports Server (NTRS)
Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.
2011-01-01
Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong-signal conditions, their operation in many urban, indoor, and space applications is not robust or even impossible due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms which may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits ready access to algorithmic libraries and the possibility of integrating more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
Measurements of scalar released from point sources in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.
2017-04-01
Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements over a substantially long time so as to obtain converged statistics for the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982 J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
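The reflected Gaussian plume model that the abstract tests can be written down directly: an elevated point source at height z_s plus an image source at -z_s so that the ground acts as a perfect reflector. The parameter values below are arbitrary illustrations, not those of the experiment:

```python
import math

def reflected_gaussian(y, z, q=1.0, u=5.0, sig_y=10.0, sig_z=8.0, z_s=20.0):
    """Mean concentration of the reflected Gaussian plume model.

    q: source strength, u: mean advection speed, sig_y/sig_z: plume
    spreads, z_s: source height. The image term exp(-(z + z_s)^2 / ...)
    enforces zero vertical flux (dC/dz = 0) at the ground z = 0.
    """
    lateral = math.exp(-y**2 / (2 * sig_y**2))
    vertical = (math.exp(-(z - z_s)**2 / (2 * sig_z**2))
              + math.exp(-(z + z_s)**2 / (2 * sig_z**2)))
    return q / (2 * math.pi * u * sig_y * sig_z) * lateral * vertical
```

The profile is symmetric in y and, thanks to the image term, flat in z at the wall, which is why deviations from it in the intermittent outer region are diagnostically interesting.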
Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao
2018-06-13
Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameters of released mass are not dependent on prior information, which improves the identification efficiency. A hypothetical case study with a different number of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA. Copyright © 2018 Elsevier Ltd. All rights reserved.
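The inverse problem the abstract describes (find source location, release time and mass from downstream concentrations) can be sketched with a bare-bones differential evolution loop fitted to a 1-D instantaneous point-source advection-diffusion solution. This is a generic DE sketch under invented parameters, not the LR-BPM decoupling of the paper:

```python
import math, random

def plume(x, t, x0, t0, mass, u=1.0, d=0.5):
    """1-D instantaneous point-source advection-diffusion solution."""
    dt = t - t0
    if dt <= 0:
        return 0.0
    return (mass / math.sqrt(4 * math.pi * d * dt)
            * math.exp(-(x - x0 - u * dt) ** 2 / (4 * d * dt)))

def identify(obs, bounds, pop=30, gens=200, f=0.7, cr=0.9, seed=1):
    """Minimal differential evolution (rand/1/bin) for source identification.

    obs: list of (x, t, concentration); bounds: [(lo, hi)] for each of
    (x0, t0, mass). Minimizes the sum of squared misfits.
    """
    rng = random.Random(seed)
    def cost(p):
        return sum((plume(x, t, *p) - c) ** 2 for x, t, c in obs)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    costs = [cost(p) for p in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c3 = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            trial = [min(max(a[k] + f * (b[k] - c3[k]), bounds[k][0]), bounds[k][1])
                     if rng.random() < cr else P[i][k] for k in range(3)]
            tc = cost(trial)
            if tc <= costs[i]:
                P[i], costs[i] = trial, tc
    return min(zip(costs, P))[1]

true = (2.0, 1.0, 5.0)   # source location, release time, released mass
obs = [(x, t, plume(x, t, *true)) for x in (4, 6, 8) for t in (4, 6, 8, 10)]
est = identify(obs, bounds=[(0, 10), (0, 3), (1, 10)])
```

With noise-free synthetic observations the three parameters are recovered closely; the equifinality and averaging issues discussed in the abstract only bite once noise and sparse gauging sections are introduced.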
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
Probing Motion of Fast Radio Burst Sources by Timing Strongly Lensed Repeaters
NASA Astrophysics Data System (ADS)
Dai, Liang; Lu, Wenbin
2017-09-01
Given the possible repetitive nature of fast radio bursts (FRBs), their cosmological origin, and their high occurrence, detection of strongly lensed sources due to intervening galaxy lenses is possible with forthcoming radio surveys. We show that if multiple images of a repeating source are resolved with VLBI, using a method independent of lens modeling, accurate timing could reveal non-uniform motion, either physical or apparent, of the emission spot. This can probe the physical nature of FRBs and their surrounding environments, constraining scenarios including orbital motion around a stellar companion if FRBs require a compact star in a special system, and jet-medium interactions for which the location of the emission spot may randomly vary. The high timing precision possible for FRBs (~ms) compared with the typical time delays between images in galaxy lensing (≳10 days) enables the measurement of tiny fractional changes in the delays (~10^-9) and hence the detection of time-delay variations induced by relative motions between the source, the lens, and the Earth. We show that uniform cosmic peculiar velocities only cause the delay time to drift linearly, and that the effect from the Earth's orbital motion can be accurately subtracted, thus enabling a search for non-trivial source motion. For a timing accuracy of ~1 ms and a repetition rate (of detected bursts) of ~0.05 per day of a single FRB source, non-uniform displacement ≳0.1-1 au of the emission spot perpendicular to the line of sight is detectable if repetitions are seen over a period of hundreds of days.
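The ~10^-9 fractional sensitivity quoted follows directly from the two timescales in the abstract:

```python
# Millisecond burst timing against a ~10-day lensing delay gives the
# fractional delay sensitivity quoted in the abstract.
delay_s = 10 * 86400          # ~10-day image delay, in seconds
precision_s = 1e-3            # ~ms burst timing accuracy
frac = precision_s / delay_s
label = f"{frac:.1e}"         # -> "1.2e-09"
```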
2012-01-01
Background Robust demographic information is important to understanding the risk of introduction and spread of exotic diseases as well as the development of effective disease control strategies, but is often based on datasets collected for other purposes. Thus, it is important to validate, or at least cross-reference these datasets to other sources to assess whether they are being used appropriately. The aim of this study was to use horse location data collected from different contributing industry sectors ("Stakeholder horse data") to calibrate the spatial distribution of horses as indicated by owner locations registered in the National Equine Database (the NED). Results A conservative estimate for the accurately geo-located NED horse population within GB is approximately 840,000 horses. This is likely to be an underestimate because of the exclusion of horses due to age or location criteria. In both datasets, horse density was higher in England and Wales than in Scotland. The high density of horses located in urban areas as indicated in the NED is consistent with previous reports indicating that owner location cannot always be viewed as a direct substitute for horse location. Otherwise, at a regional resolution, there are few differences between the datasets. There are inevitable biases in the stakeholder data, and leisure horses that are unaffiliated to major stakeholders are not included in these data. Despite this, the similarity in distributions of these datasets is re-assuring, suggesting that there are few regional biases in the NED. Conclusions Our analyses suggest that stakeholder data could be used to monitor possible changes in horse demographics. Given such changes in horse demographics and the advantages of stakeholder data (which include annual updates and accurate horse location), it may be appropriate to use these data for future disease modelling in conjunction with, if not in place of the NED. PMID:22475060
Solutions to the inverse plume in a crosswind problem using a predictor-corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume from a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
Improved method for retinotopy constrained source estimation of visual evoked responses
Hagler, Donald J.; Dale, Anders M.
2011-01-01
Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418
Preliminary Design of a Lightning Optical Camera and ThundEr (LOCATE) Sensor
NASA Technical Reports Server (NTRS)
Phanord, Dieudonne D.; Koshak, William J.; Rybski, Paul M.; Arnold, James E. (Technical Monitor)
2001-01-01
The preliminary design of an optical/acoustical instrument is described for making highly accurate real-time determinations of the location of cloud-to-ground (CG) lightning. The instrument, named the Lightning Optical Camera And ThundEr (LOCATE) sensor, will also image the clear and cloud-obscured lightning channel produced from CGs and cloud flashes, and will record the transient optical waveforms produced from these discharges. The LOCATE sensor will consist of a full (360 degrees) field-of-view optical camera for obtaining CG channel image and azimuth, a sensitive thunder microphone for obtaining CG range, and a fast photodiode system for time-resolving the lightning optical waveform. The optical waveform data will be used to discriminate CGs from cloud flashes. Together, the optical azimuth and thunder range is used to locate CGs and it is anticipated that a network of LOCATE sensors would determine CG source location to well within 100 meters. All of this would be accomplished for a relatively inexpensive cost compared to present RF lightning location technologies, but of course the range detection is limited and will be quantified in the future. The LOCATE sensor technology would have practical applications for electric power utility companies, government (e.g. NASA Kennedy Space Center lightning safety and warning), golf resort lightning safety, telecommunications, and other industries.
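The azimuth-plus-thunder-range location principle is simple enough to sketch: the optical flash arrives essentially instantly, so the thunder delay fixes the range and the camera azimuth fixes the bearing. The function name and the 343 m/s sound speed (dry air near 20 °C) are my assumptions:

```python
import math

def locate_cg(azimuth_deg, thunder_delay_s, c_sound=343.0):
    """Locate a cloud-to-ground strike from optical azimuth + thunder delay.

    Range r = c_sound * dt (light travel time is negligible); the
    360-degree camera azimuth then fixes the position relative to the
    sensor. Returns (east_m, north_m).
    """
    r = c_sound * thunder_delay_s
    az = math.radians(azimuth_deg)
    return r * math.sin(az), r * math.cos(az)

east, north = locate_cg(90.0, 3.0)  # due east, 3 s of thunder delay -> ~1 km
```

At this range a 0.1 s error in the thunder pick already corresponds to ~34 m, consistent with the abstract's expectation that a network of sensors is needed to reach well under 100 m.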
Three-dimensional MR imaging in the assessment of physeal growth arrest.
Sailhan, Frédéric; Chotel, Franck; Guibal, Anne-Laure; Gollogly, Sohrab; Adam, Philippe; Bérard, Jérome; Guibaud, Laurent
2004-09-01
The purpose of this study is to describe an imaging method for identifying and characterising physeal growth arrest following injury to the physeal plate. The authors describe the use of three-dimensional MRI performed with fat-suppressed three-dimensional spoiled gradient-recalled echo sequences followed by manual image reconstruction to create a 3D model of the physeal plate. This retrospective series reports the analysis of 33 bony physeal bridges in 28 children (mean age 10.5 years) with the use of fat-suppressed three-dimensional spoiled gradient-recalled echo imaging and 3D reconstructions from the source images. 3D reconstructions were obtained after the outlining was done manually on each source image. Files of all patients were reviewed for clinical data at the time of MRI, type of injury, age at MRI and bone bridge characteristics on reconstructions. Twenty-one (63%) of the 33 bridges were post-traumatic and were mostly situated in the lower extremities (19/21). The distal tibia was involved in 66% (14/21) of the cases. Bridges due to causes other than trauma were located in the lower extremities in 10/12 cases, and the distal femur represented 60% of these cases. Of the 28 patients, five presented with two bridges involving two different growth plates, making a total of 33 physeal bone bars. The location and shape of each bridge were accurately identified in each patient, and in post-traumatic cases, 89% of bone bars were of Ogden type III (central) or I (peripheral). Reconstructions were obtained in 15 min and are easy to interpret. Volumes of the physeal bone bridge(s) and of the remaining normal physis were calculated. The bone bridging represented less than 1% to 47% of the total physeal plate volume. The precise shape and location of the bridge can be visualised on the 3D reconstructions.
This information is useful in the surgical management of these deformities: for the eight patients who underwent bone bar resection, the treating surgeon found an excellent correspondence between the MRI 3D model and the intraoperative findings. Accurate 3D mapping obtained after manual reconstruction can also visualise very small physeal plates and bridges, such as in cases of finger physeal disorders. MR imaging with fat-suppressed three-dimensional spoiled gradient-recalled echo sequences can be used to identify patterns of physeal growth arrest. 3D reconstructions can be obtained from the manual outlining of source images to provide an accurate representation of the bony bridge that can serve as a guide during surgical management.
Kendall, William L.; White, Gary C.
2009-01-01
1. Assessing the probability that a given site is occupied by a species of interest is important to resource managers, as well as metapopulation or landscape ecologists. Managers require accurate estimates of the state of the system, in order to make informed decisions. Models that yield estimates of occupancy, while accounting for imperfect detection, have proven useful by removing a potentially important source of bias. To account for detection probability, multiple independent searches per site for the species are required, under the assumption that the species is available for detection during each search of an occupied site. 2. We demonstrate that when multiple samples per site are defined by searching different locations within a site, absence of the species from a subset of these spatial subunits induces estimation bias when locations are exhaustively assessed or sampled without replacement. 3. We further demonstrate that this bias can be removed by choosing sampling locations with replacement, or if the species is highly mobile over a short period of time. 4. Resampling an existing data set does not mitigate bias due to exhaustive assessment of locations or sampling without replacement. 5. Synthesis and applications. Selecting sampling locations for presence/absence surveys with replacement is practical in most cases. Such an adjustment to field methods will prevent one source of bias, and therefore produce more robust statistical inferences about species occupancy. This will in turn permit managers to make resource decisions based on better knowledge of the state of the system.
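The with- versus without-replacement bias can be made concrete with a small closed-form example, under the hypothetical detection model that the species is found whenever an occupied subunit is searched:

```python
from math import comb

def p_all_missed(m, k, n, replacement):
    """Probability that n searches of an occupied site all miss the species,
    when it is present in only k of the site's m spatial subunits.

    With replacement, searches are independent with per-search detection
    probability k/m. Without replacement, all n searches must land in the
    m - k empty subunits (hypergeometric).
    """
    if replacement:
        return (1 - k / m) ** n
    return comb(m - k, n) / comb(m, n)

# 2 of 4 subunits occupied, 3 searches per site:
with_r = p_all_missed(4, 2, 3, True)       # independent searches: 0.125
without_r = p_all_missed(4, 2, 3, False)   # cannot miss 3 times: 0.0
```

An occupancy model that assumes independent searches expects all-miss histories with probability 0.125, but exhaustive assessment or sampling without replacement makes them impossible here, which is the mismatch that biases the detection-corrected occupancy estimate.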
Visuospatial working memory mediates inhibitory and facilitatory guidance in preview search.
Barrett, Doug J K; Shimozaki, Steven S; Jensen, Silke; Zobay, Oliver
2016-10-01
Visual search is faster and more accurate when a subset of distractors is presented before the display containing the target. This "preview benefit" has been attributed to separate inhibitory and facilitatory guidance mechanisms during search. In the preview task the temporal cues thought to elicit inhibition and facilitation provide complementary sources of information about the likely location of the target. In this study, we use a Bayesian observer model to compare sensitivity when the temporal cues eliciting inhibition and facilitation produce complementary, and competing, sources of information. Observers searched for T-shaped targets among L-shaped distractors in 2 standard and 2 preview conditions. In the standard conditions, all the objects in the display appeared at the same time. In the preview conditions, the initial subset of distractors either stayed on the screen or disappeared before the onset of the search display, which contained the target when present. In the latter, the synchronous onset of old and new objects negates the predictive utility of stimulus-driven capture during search. The results indicate observers combine memory-driven inhibition and sensory-driven capture to reduce spatial uncertainty about the target's likely location during search. In the absence of spatially predictive onsets, memory-driven inhibition at old locations persists despite irrelevant sensory change at previewed locations. This result is consistent with a bias toward unattended objects during search via the active suppression of irrelevant capture at previously attended locations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.
2017-03-01
Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal, then a numerical reconstruction algorithm is used to locate the sources and estimate their sizes. However, due to the strong light scattering properties of biological tissues, the resolution is very limited (around a few millimetres). Therefore obtaining accurate information about the pathology is complicated. We propose a combined ultrasound/optics approach to improve accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives. First, to obtain a pure acoustic image, which provides structural information of the sample. And second, to alter the light emission by the bioluminescent sources embedded inside the sample, which is monitored using a high speed optical detector (e.g. photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction as compared to the image generated using only BLI data.
Hybrid CFD/CAA Modeling for Liftoff Acoustic Predictions
NASA Technical Reports Server (NTRS)
Strutzenberg, Louise L.; Liever, Peter A.
2011-01-01
This paper presents development efforts at the NASA Marshall Space Flight Center to establish a hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) simulation system for launch vehicle liftoff acoustics environment analysis. Acoustic prediction engineering tools based on empirical jet acoustic strength and directivity models or scaled historical measurements are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. CFD-based modeling approaches are now able to capture the important details of the vehicle-specific plume flow environment, identify the noise generation sources, and allow assessment of the influence of launch pad geometric details and sound mitigation measures such as water injection. However, CFD methodologies are numerically too dissipative to accurately capture the propagation of the acoustic waves in the large CFD models. The hybrid CFD/CAA approach combines the high-fidelity CFD analysis capable of identifying the acoustic sources with a fast and efficient Boundary Element Method (BEM) that accurately propagates the acoustic field from the source locations. The BEM approach was chosen for its ability to properly account for reflections and scattering of acoustic waves from launch pad structures. The paper will present an overview of the technology components of the CFD/CAA framework and discuss plans for demonstration and validation against test data.
Strynar, Mark; Dagnino, Sonia; McMahen, Rebecca; Liang, Shuang; Lindstrom, Andrew; Andersen, Erik; McMillan, Larry; Thurman, Michael; Ferrer, Imma; Ball, Carol
2015-10-06
Recent scientific scrutiny and concerns over exposure, toxicity, and risk have led to international regulatory efforts resulting in the reduction or elimination of certain perfluorinated compounds from various products and waste streams. Some manufacturers have started producing shorter chain per- and polyfluorinated compounds to try to reduce the potential for bioaccumulation in humans and wildlife. Some of these new compounds contain central ether oxygens or other minor modifications of traditional perfluorinated structures. At present, there has been very limited information published on these "replacement chemistries" in the peer-reviewed literature. In this study we used a time-of-flight mass spectrometry detector (LC-ESI-TOFMS) to identify fluorinated compounds in natural waters collected from locations with historical perfluorinated compound contamination. Our workflow for discovery of chemicals included sequential sampling of surface water for identification of potential sources, nontargeted TOFMS analysis, molecular feature extraction (MFE) of samples, and evaluation of features unique to the sample with source inputs. Specifically, compounds were tentatively identified by (1) accurate mass determination of parent and/or related adducts and fragments from in-source collision-induced dissociation (CID), (2) in-depth evaluation of in-source adducts formed during analysis, and (3) confirmation with authentic standards when available. We observed groups of compounds in homologous series that differed by multiples of CF2 (m/z 49.9968) or CF2O (m/z 65.9917). Compounds in each series were chromatographically separated and had comparable fragments and adducts produced during analysis. We detected 12 novel perfluoroalkyl ether carboxylic and sulfonic acids in surface water in North Carolina, USA, using this approach. A key piece of evidence was the discovery of accurate mass in-source n-mer formation (H(+) and Na(+)) differing by m/z 21.9819, corresponding to the mass difference between the protonated and sodiated dimers.
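The homologous-series screening described above can be sketched in a few lines: candidate accurate masses are paired whenever their difference matches a CF2 or CF2O repeat unit within a tight tolerance. The repeat-unit masses are the ones quoted in the abstract; the mass list and tolerance below are hypothetical, for illustration only.

```python
# Sketch: flag candidate homologous-series members in a list of accurate
# masses by checking for repeat units of CF2 (49.9968 Da) or CF2O
# (65.9917 Da) within a tight mass tolerance. The input masses are
# invented examples, not detections from the study.

CF2 = 49.9968
CF2O = 65.9917
TOL = 0.0005  # 0.5 mDa tolerance (assumed)

def homologue_pairs(masses, repeat_unit, tol=TOL):
    """Return sorted pairs (m1, m2) whose difference matches the repeat unit."""
    pairs = []
    for i, m1 in enumerate(masses):
        for m2 in masses[i + 1:]:
            if abs(abs(m2 - m1) - repeat_unit) <= tol:
                pairs.append((min(m1, m2), max(m1, m2)))
    return pairs

masses = [328.9681, 378.9649, 428.9617, 444.9566]  # hypothetical m/z values
cf2_series = homologue_pairs(masses, CF2)    # members differing by CF2
cf2o_series = homologue_pairs(masses, CF2O)  # members differing by CF2O
```

The same pairing test, applied with the Na-versus-H mass difference of 21.9819 Da, would flag the protonated/sodiated n-mer pairs mentioned above.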
Approach to identifying pollutant source and matching flow field
NASA Astrophysics Data System (ADS)
Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang
2013-07-01
Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification, however, is a difficult inverse problem, and this paper carries out some studies on the issue. An approach using noisy single-sensor information was developed to identify a sudden continuous emission of a trace pollutant in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical concentration sequences at the sensor position, which are obtained from multiple hypotheses over the source's three parameters. Source identification is then achieved by globally searching for the optimal values with the objective function of maximum location probability. Given the large computational load of this global search, a local fine-mesh source search based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. Studies have shown that the flow field has a very important influence on source identification; therefore, we also discuss the impact of non-matching flow fields with estimation deviation on identification. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. To verify the practical application of the above method, an experimental system simulating a sudden pollution process in a steady flow field was set up and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters of the pollutant source (position, emission strength and initial emission time) in the experiment can be estimated by using the flow-field matching and source identification method.
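A minimal sketch can illustrate the multiple-hypothesis idea: a continuous source in a steady 1D flow is approximated as a train of Gaussian puffs, and the three source parameters are recovered by a coarse grid search minimizing the misfit between the measured and hypothetical sensor sequences. All numbers (velocity, diffusivity, grids, noise level) are invented for illustration and are not from the paper.

```python
# Sketch of single-sensor source identification in a steady 1D flow:
# a continuous point source is modelled as a superposition of Gaussian
# puffs, and (position, strength, start time) are recovered by a coarse
# grid search over hypothetical sources. All parameter values are assumed.
import math, random

U, D, DT = 0.5, 0.1, 1.0        # flow speed, diffusivity, time step (assumed)
X_SENSOR = 10.0                 # sensor position
TIMES = [DT * k for k in range(1, 41)]

def concentration(x0, q, t0, t):
    """Concentration at the sensor from puffs emitted between t0 and t."""
    c, tau = 0.0, t0
    while tau < t:
        age = t - tau
        c += (q * DT / math.sqrt(4 * math.pi * D * age)
              * math.exp(-(X_SENSOR - x0 - U * age) ** 2 / (4 * D * age)))
        tau += DT
    return c

# Synthetic "measured" sequence from a true source, with sensor noise.
random.seed(0)
truth = (2.0, 1.0, 3.0)  # (position, strength, start time)
measured = [concentration(*truth, t) + random.gauss(0.0, 0.01) for t in TIMES]

# Coarse grid search over the three hypothesized source parameters.
best, best_err = None, float("inf")
for x0 in [0.0, 1.0, 2.0, 3.0, 4.0]:
    for q in [0.5, 1.0, 1.5]:
        for t0 in [1.0, 3.0, 5.0]:
            err = sum((concentration(x0, q, t0, t) - m) ** 2
                      for t, m in zip(TIMES, measured))
            if err < best_err:
                best, best_err = (x0, q, t0), err
```

In the paper's terms, the sum-of-squares misfit plays the role of the characteristic distance between the measured and hypothetical sequences, and a finer local grid around `best` would correspond to the fine-mesh refinement step.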
Global Disease Monitoring and Forecasting with Wikipedia
Generous, Nicholas; Fairchild, Geoffrey; Deshpande, Alina; Del Valle, Sara Y.; Priedhorsky, Reid
2014-01-01
Infectious disease is a leading threat to public health, economic stability, and other key social structures. Efforts to mitigate these impacts depend on accurate and timely monitoring to measure the risk and progress of disease. Traditional, biologically-focused monitoring techniques are accurate but costly and slow; in response, new techniques based on social internet data, such as social media and search queries, are emerging. These efforts are promising, but important challenges in the areas of scientific peer review, breadth of diseases and countries, and forecasting hamper their operational usefulness. We examine a freely available, open data source for this use: access logs from the online encyclopedia Wikipedia. Using linear models, language as a proxy for location, and a systematic yet simple article selection procedure, we tested 14 location-disease combinations and demonstrate that these data feasibly support an approach that overcomes these challenges. Specifically, our proof-of-concept yields models with r2 up to 0.92, forecasting value up to the 28 days tested, and several pairs of models similar enough to suggest that transferring models from one location to another without re-training is feasible. Based on these preliminary results, we close with a research agenda designed to overcome these challenges and produce a disease monitoring and forecasting system that is significantly more effective, robust, and globally comprehensive than the current state of the art. PMID:25392913
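The modelling approach above, linear models relating article traffic to disease incidence and scored by r2, can be sketched with a simple least-squares fit. The weekly counts below are synthetic stand-ins, not real access-log or surveillance data.

```python
# Sketch of the paper's modelling idea: an ordinary least-squares fit of
# case counts against Wikipedia article traffic, scored by the
# coefficient of determination r^2. The numbers are invented examples.

def fit_line(x, y):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    """Coefficient of determination of the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

views = [120, 150, 210, 400, 650, 700, 520, 300]   # weekly page views (synthetic)
cases = [10, 14, 22, 41, 63, 70, 50, 31]           # weekly case counts (synthetic)
slope, intercept = fit_line(views, cases)
r2 = r_squared(views, cases, slope, intercept)
```

Shifting the traffic series forward before fitting would turn the same nowcasting sketch into the paper's forecasting setup.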
Using a motion capture system for spatial localization of EEG electrodes
Reis, Pedro M. R.; Lochmann, Matthias
2015-01-01
Electroencephalography (EEG) is often used in source analysis studies, in which the locations of the cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes on the scalp surface must be determined, otherwise errors in the source estimation will occur. Several methods for acquiring these positions exist today, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared-light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires the 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm, with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results show that, with this method, it is possible to acquire electrode positions with minimal error and little time effort for the study participants and investigators. PMID:25941468
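The accuracy metric reported above (average Euclidean distance, with its standard deviation, between two sets of electrode coordinates) can be computed as follows; the coordinates here are invented, not the paper's sphere-model data.

```python
# Sketch of the validation metric: per-electrode Euclidean distance
# between two coordinate sets (e.g. motion-capture vs. CT reference),
# summarized by mean and standard deviation. Coordinates in mm, made up.
import math

ct = [(0.0, 0.0, 95.0), (30.0, 0.0, 90.0), (0.0, 30.0, 90.0)]      # reference
mocap = [(0.8, 0.5, 95.4), (30.9, -0.4, 90.3), (-0.6, 31.0, 89.8)] # measured

dists = [math.dist(p, q) for p, q in zip(ct, mocap)]  # per-electrode error
mean_err = sum(dists) / len(dists)
std_err = math.sqrt(sum((d - mean_err) ** 2 for d in dists) / len(dists))
```

In practice the two point sets would first be brought into a common frame (e.g. by a rigid-body fit to fiducials) before the distances are taken; that registration step is omitted here.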
Semantic congruence enhances memory of episodic associations: role of theta oscillations.
Atienza, Mercedes; Crespo-Garcia, Maite; Cantero, Jose L
2011-01-01
Growing evidence suggests that theta oscillations play a crucial role in episodic encoding. The present study evaluates whether changes in electroencephalographic theta source dynamics mediate the positive influence of semantic congruence on incidental associative learning. Here we show that memory for episodic associations (face-location) is more accurate when studied under semantically congruent contexts. However, only participants showing an RT priming effect in a conceptual priming test (priming group) also gave faster responses when recollecting source information of semantically congruent faces as compared with semantically incongruent faces. This improved episodic retrieval was positively correlated with increases in theta power during the study phase, mainly in the bilateral parahippocampal gyrus, left superior temporal gyrus, and left lateral posterior parietal lobe. Reconstructed signals from the estimated sources showed higher theta power for congruent than incongruent faces, and also for the priming than the nonpriming group. These results are in agreement with the attention-to-memory model. Besides directing top-down attention to goal-relevant semantic information during encoding, the dorsal parietal lobe may also be involved in redirecting attention to bottom-up-driven memories thanks to connections between the medial temporal lobe and the left ventral parietal lobe. The latter function can either facilitate or interfere with encoding of face-location associations, depending on whether they are preceded by semantically congruent or incongruent contexts, respectively, because only in the former condition are the retrieved representations related to the cue and the face both coherent with the person identity and both associated with the same location.
NASA Astrophysics Data System (ADS)
Ogwari, P.; DeShon, H. R.; Hornbach, M.
2017-12-01
Post-2008 earthquake rate increases in the central United States have been associated with large-scale subsurface disposal of waste fluids from oil and gas operations. Various earthquake sequences in the Fort Worth and Permian basins began in the absence of seismic stations at local distances to record and accurately locate hypocenters. Typically, the initial earthquakes have been located using regional seismic network stations (>100 km epicentral distance) and global 1D velocity models, which usually results in large location uncertainty, especially in depth, does not resolve magnitude <2.5 events, and does not constrain the geometry of the activated fault(s). Here, we present a method to better resolve earthquake occurrence and location using matched filters and regional relative location when local data become available. We use the local-distance data for high-resolution earthquake location, identifying earthquake templates and accurate source-station raypath velocities for the Pg and Lg phases at regional stations. A matched-filter analysis is then applied to seismograms recorded at US network stations and at adopted TA stations that recorded the earthquakes before and during the local network deployment period. Positive detections are declared based on manual review of the associated P and S arrivals on local stations. We apply hierarchical clustering to distinguish earthquake groups that are spatially clustered from those that are spatially separated. Finally, we conduct relative earthquake and earthquake-cluster location using regional-station differential times. Initial analysis applied to the 2008-2009 DFW airport sequence in north Texas results in time-continuous imaging of epicenters extending into 2014. Seventeen earthquakes in the USGS earthquake catalog scattered across a 10 km2 area near DFW airport are relocated onto a single fault using these approaches. These techniques will also be applied toward imaging recent earthquakes in the Permian Basin near Pecos, TX.
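The matched-filter step can be illustrated with a toy example: a known template is slid along a continuous trace, the normalized cross-correlation is computed at every lag, and lags with a high coefficient are flagged as candidate detections. The waveforms below are synthetic, not seismograms, and the 0.8 threshold is an arbitrary illustrative choice.

```python
# Toy matched-filter detection: normalized cross-correlation of a known
# template against a continuous trace; high-coefficient lags are flagged.
import math, random

def norm_xcorr(trace, template):
    """Normalized cross-correlation coefficient at every lag."""
    n = len(template)
    et = math.sqrt(sum(t * t for t in template))
    out = []
    for lag in range(len(trace) - n + 1):
        seg = trace[lag:lag + n]
        es = math.sqrt(sum(s * s for s in seg))
        dot = sum(s * t for s, t in zip(seg, template))
        out.append(dot / (es * et) if es > 0 else 0.0)
    return out

random.seed(1)
template = [math.sin(0.7 * i) * math.exp(-0.05 * i) for i in range(40)]
trace = [random.gauss(0.0, 0.05) for _ in range(300)]   # background noise
for i, t in enumerate(template):                        # bury an event at 120
    trace[120 + i] += t

cc = norm_xcorr(trace, template)
detections = [lag for lag, c in enumerate(cc) if c > 0.8]
```

In the workflow above this scan would run on regional-station seismograms, with each detection then vetted manually against the local-network P and S arrivals.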
Noise Sources in Photometry and Radial Velocities
NASA Astrophysics Data System (ADS)
Oshagh, Mahmoudreza
The quest for Earth-like extrasolar planets (exoplanets), especially those located inside the habitable zone of their host stars, requires techniques sensitive enough to detect the faint signals produced by those planets. The radial velocity (RV) and photometric transit methods are the most widely used and most efficient methods for detecting and characterizing exoplanets. However, the presence of astrophysical "noise" makes it difficult to detect and accurately characterize exoplanets. Importantly, the amplitude of such astrophysical noise is larger than both the signal of an Earth-like exoplanet and the precision limit of state-of-the-art instrumentation, making this a pressing topic that needs to be addressed. In this chapter, I present a general review of the main sources of noise in photometric and RV observations, namely stellar oscillations, granulation, and magnetic activity. Moreover, for each noise source I discuss the techniques and observational strategies that allow us to mitigate its impact.
The density and location of the X-ray absorbing gas in AGN
NASA Astrophysics Data System (ADS)
Netzer, H.
Chandra and XMM-Newton have opened the era of real X-ray spectroscopy. The launch of these spacecraft was followed with great expectations to solve some of the mysteries of AGN, including the determination of the accurate profile of the iron Kα line, the ionization and location of the warm absorbing gas (the "warm absorber", hereafter WA), and the AGN-starburst connection. So far, only a small subset of those big questions has been answered. In particular, new observations of several type-I AGN are providing the first clues about the nature of the various soft X-ray spectral components and the location of the WA. Perhaps the best example regarding the WA location and the various spectral components is the new Chandra observation of NGC 3516, a well known Seyfert 1 galaxy that happened to be in a very low state at the time of observation (October 2000). The analysis by Netzer et al. (2002) gives the following new results: The X-ray continuum is extremely hard (Γ < 1) and is very different from a single power-law continuum. Its shape is entirely consistent with the fading of the source by a factor of ~8 between 1994 (ASCA observation) and 2000, assuming a 10²² cm⁻² line-of-sight WA which reacts to the flux of the central ionizing source. There is a correlation between the observed changes in the source spectral energy distribution (SED) and the hard (E > 5 keV) luminosity of the source, which can be used to set an upper limit of about 6 × 10¹⁷ cm on the WA location. There is a significant contribution, at high energies, from cold reflecting remote gas which is constant in time and produces most of the narrow component of the Kα line. The combination of a highly variable central source and a constant far-away "reflector" is required to explain the time-dependent SED of NGC 3516. The generalization of these results (assuming they are typical) to other type-I AGN is very important. They suggest that in many sources much (or even most) of the apparent change in soft X-ray continuum slope may be due to variations in the ionization of the obscuring gas. They also suggest that reflection by remote cold gas (the "torus"?) must be taken into account when studying the X-ray spectra of Seyfert galaxies and quasars. Finally, at least in one case, the WA gas is located well within the assumed torus, perhaps as close to the center as the BLR.
Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M
2017-10-01
Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.
Liu, X; Zhai, Z
2007-12-01
Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
NASA Astrophysics Data System (ADS)
Roostaee, M.; Deng, Z.
2017-12-01
State environmental agencies are required by the Clean Water Act to assess all waterbodies and evaluate potential sources of impairment. Spatial and temporal distributions of water quality parameters are critical in identifying Critical Source Areas (CSAs). However, due to limited monetary resources and the large number of waterbodies, available monitoring stations are typically sparse, with intermittent periods of data collection. Hence, scarcity of water quality data is a major obstacle in addressing sources of pollution through management strategies. In this study, the spatiotemporal Bayesian Maximum Entropy (BME) method is employed to model the inherent temporal and spatial variability of measured water quality indicators, such as dissolved oxygen (DO) concentration, for the Turkey Creek Watershed. Turkey Creek is located in northern Louisiana and has been on the 303(d) list for DO impairment since 2014 in Louisiana Water Quality Inventory Reports due to agricultural practices. The BME method has been shown to provide more accurate estimates than purely spatial analysis methods by incorporating the space/time distribution and the uncertainty in available measured soft and hard data. This model is used to estimate DO concentration at unmonitored locations and times and subsequently to identify CSAs. The USDA's crop-specific land cover data layers of the watershed were then used to determine the practices and changes that led to low DO concentration in the identified CSAs. Preliminary results revealed that the cultivation of corn and soybean, as well as urban runoff, are the main contributing sources of low dissolved oxygen in the Turkey Creek Watershed.
Klausner, Z; Klement, E; Fattal, E
2018-02-01
Viruses that affect the health of humans and farm animals can spread over long distances via atmospheric mechanisms. The phenomenon of atmospheric long-distance dispersal (LDD) is associated with severe consequences because it may introduce pathogens into new areas. The introduction of new pathogens to Israel has been attributed to LDD events numerous times. This provided the motivation for this study, which aims to identify all the locations in the eastern Mediterranean that may serve as sources for pathogen incursion into Israel via LDD. This aim was achieved by calculating source-receptor relationship probability maps. These maps describe the probability that an infected vector or viral aerosol, once airborne, will have an atmospheric route that can transport it to a distant location. The resultant probability maps demonstrate a seasonal tendency for specific areas to serve as sources of pathogen LDD into Israel. Specifically, Cyprus' season is the summer; southern Turkey and the Greek islands of Crete, Karpathos and Rhodes are associated with spring and summer; lower Egypt and Jordan may serve as sources all year round, except the summer months. The method used in this study can easily be applied to any other geographic region. The importance of this study is the ability to provide a climatologically valid and accurate risk assessment tool to support long-term decisions regarding preparatory actions for future outbreaks, long before a specific outbreak occurs. © 2017 Blackwell Verlag GmbH.
A reassessment of ground water flow conditions and specific yield at Borden and Cape Cod
Grimestad, Garry
2002-01-01
Recent widely accepted findings respecting the origin and nature of specific yield in unconfined aquifers rely heavily on water level changes observed during two pumping tests, one conducted at Borden, Ontario, Canada, and the other at Cape Cod, Massachusetts. The drawdown patterns observed during those tests have been taken as proof that unconfined specific yield estimates obtained from long-duration pumping tests should approach the laboratory-estimated effective porosity of representative aquifer formation samples. However, both of the original test reports included direct or referential descriptions of potential supplemental sources of pumped water that, if actually present, would have introduced intractable complications and errors into straightforward interpretations of the drawdown observations. Searches for evidence of previously neglected sources were performed by screening the original drawdown observations from both locations for signs of diagnostic skewing that should be present only if some of the extracted water was derived from sources other than main aquifer storage. The data screening was performed using error-guided, computer-assisted fitting techniques capable of accurately sensing and simulating the effects of a wide range of non-traditional and external sources. The drawdown curves from both tests proved to be inconsistent with traditional single-source pumped aquifer models but consistent with site-specific alternatives that included significant contributions of water from external sources. The corrected pumping responses shared several important features. Unsaturated drainage appears to have ceased effectively at both locations within the first day of pumping, and estimates of specific yield stabilized at levels considerably smaller than the corresponding laboratory-measured or probable effective porosity. Separate sequential analyses of progressively later field observations gave stable and nearly constant specific yield estimates for each location, with no evidence from either test that more prolonged pumping would have induced substantially greater levels of unconfined specific yield.
Achieving perceptually-accurate aural telepresence
NASA Astrophysics Data System (ADS)
Henderson, Paul D.
Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8 degrees for speech and less than 4 degrees with a pink noise burst. The results allow for the density of WFS systems to be selected from the required localization accuracy. Also, by exploiting the ventriloquist effect, the angular resolution of an audio rendering may be reduced when combined with spatially-accurate video.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
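The Monte Carlo half of such an error analysis is straightforward to sketch: perturb simulated arrival times with the assumed 50 ns rms timing error, re-run a least-squares retrieval, and accumulate miss distances. The sketch below is a deliberately simplified 2D illustration with a made-up station layout and a grid search confined to a window around the truth; it is not the NASA-MSFC or New Mexico Tech algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 2.998e8  # propagation speed (speed of light), m/s

# Made-up plan-view station layout (metres) and a true VHF source.
stations = np.array([[0.0, 0.0], [20e3, 0.0], [0.0, 20e3],
                     [20e3, 20e3], [10e3, -15e3]])
src_true = np.array([7e3, 12e3])

def arrival_times(points):
    """Times of arrival from candidate point(s) to every station."""
    pts = np.atleast_2d(points)
    return np.linalg.norm(pts[:, None, :] - stations[None, :, :], axis=2) / C

# Candidate grid: a 1 km window around the truth at 10 m spacing (a real
# retrieval searches a full volume, including altitude).
xs = np.arange(src_true[0] - 500.0, src_true[0] + 500.0, 10.0)
ys = np.arange(src_true[1] - 500.0, src_true[1] + 500.0, 10.0)
grid = np.array([[x, y] for x in xs for y in ys])
T_model = arrival_times(grid)

def retrieve(t_obs):
    """Grid-search least squares; removing the mean residual eliminates
    the unknown emission time."""
    r = T_model - t_obs
    r -= r.mean(axis=1, keepdims=True)
    return grid[(r ** 2).sum(axis=1).argmin()]

# Monte Carlo: perturb the true arrivals with 50 ns rms timing error.
miss = []
for _ in range(200):
    t_obs = arrival_times(src_true)[0] + rng.normal(0.0, 50e-9, len(stations))
    miss.append(np.linalg.norm(retrieve(t_obs) - src_true))
rms_error = float(np.sqrt(np.mean(np.square(miss))))
print(f"rms horizontal location error: {rms_error:.1f} m")
```

With 50 ns timing scatter (about 15 m of range uncertainty), the retrieved positions cluster within tens of metres of the truth for this well-spread layout.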
Solution of the three-dimensional Helmholtz equation with nonlocal boundary conditions
NASA Technical Reports Server (NTRS)
Hodge, Steve L.; Zorumski, William E.; Watson, Willie R.
1995-01-01
The Helmholtz equation is solved within a three-dimensional rectangular duct with a nonlocal radiation boundary condition at the duct exit plane. This condition accurately models the acoustic admittance at an arbitrarily-located computational boundary plane. A linear system of equations is constructed with second-order central differences for the Helmholtz operator and second-order backward differences for both local admittance conditions and the gradient term in the nonlocal radiation boundary condition. The resulting matrix equation is large, sparse, and non-Hermitian. The size and structure of the matrix makes direct solution techniques impractical; as a result, a nonstationary iterative technique is used for its solution. The theory behind the nonstationary technique is reviewed, and numerical results are presented for radiation from both a point source and a planar acoustic source. The solutions with the nonlocal boundary conditions are invariant to the location of the computational boundary, and the same nonlocal conditions are valid for all solutions. The nonlocal conditions thus provide a means of minimizing the size of three-dimensional computational domains.
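A 1D analogue conveys the discretization described above: second-order central differences for the Helmholtz operator and a second-order backward difference for the gradient in a local radiation condition yield a complex, non-Hermitian linear system. The geometry, wavenumber, and boundary treatment below are illustrative assumptions, and the tiny system is solved directly rather than iteratively.

```python
import numpy as np

# 1D analogue: u'' + k^2 u = f on [0, 1], with u(0) = 0 and a local
# radiation condition u'(1) - i k u(1) = 0 so waves leave the domain.
n, k = 400, 20.0
h = 1.0 / (n - 1)

A = np.zeros((n, n), dtype=complex)
f = np.zeros(n, dtype=complex)

# Interior rows: second-order central differences for u'' + k^2 u.
for j in range(1, n - 1):
    A[j, j - 1] = A[j, j + 1] = 1.0 / h**2
    A[j, j] = -2.0 / h**2 + k**2

f[n // 4] = -1.0 / h      # discrete point source in the interior
A[0, 0] = 1.0             # Dirichlet condition u(0) = 0

# Boundary row: second-order backward difference for u' in u' - i k u = 0.
A[-1, -3] = 1.0 / (2 * h)
A[-1, -2] = -4.0 / (2 * h)
A[-1, -1] = 3.0 / (2 * h) - 1j * k

# The matrix is sparse and non-Hermitian; at this tiny size a dense direct
# solve stands in for the iterative solver used in the paper.
u = np.linalg.solve(A, f)
print(f"|u| just inside the radiation boundary: {abs(u[-1]):.4f}")
```

To the right of the source the field is purely outgoing, so its magnitude is nearly constant there, a quick check that the radiation condition is absorbing rather than reflecting.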
González-Gómez, Paulina L.; Madrid-Lopez, Natalia; Salazar, Juan E.; Suárez, Rodrigo; Razeto-Barry, Pablo; Mpodozis, Jorge; Bozinovic, Francisco; Vásquez, Rodrigo A.
2014-01-01
Scatter-hoarding species possess several behavioral and neuroanatomical adaptations that allow them to store and retrieve thousands of food items per year. Nectarivorous animals face a similar scenario, having to remember the quality, location, and replenishment schedules of several nectar sources. In the green-backed firecrown hummingbird (Sephanoides sephanoides), males are territorial and have the ability to accurately keep track of the nectar characteristics of their defended food sources. In contrast, females display an opportunistic strategy, performing rapid intrusions into males' territories. In response, males behave aggressively during the non-reproductive season. In addition, females have higher energetic demands due to higher thermoregulatory costs and travel times. The natural scenario of this species led us to compare cognitive abilities and hippocampal size between males and females. Males were able to remember nectar location and renewal rates significantly better than females. However, the hippocampal formation was significantly larger in females than in males. We discuss these findings in terms of sexually dimorphic use of spatial resources and variable patterns of brain dimorphism in birds. PMID:24599049
NASA Astrophysics Data System (ADS)
Ainalis, Daniel; Kaufmann, Olivier; Tshibangu, Jean-Pierre; Verlinden, Olivier; Kouroussis, Georges
2017-01-01
The mining and construction industries have long drawn considerable attention and criticism in regard to the effects of blasting. The generation of ground vibrations is one of the most significant factors associated with blasting and is becoming increasingly important as mining sites are now regularly located near urban areas. This is of concern not only to the operators of the mine but also to residents. Mining sites are subject to an inevitable compromise: a production blast is designed to fragment the utmost amount of rock possible; however, any increase in the blast can generate ground vibrations which can propagate great distances and cause structural damage or discomfort to residents in surrounding urban areas. To accurately predict the propagation of ground vibrations near these sensitive areas, the blasting process and surrounding environment must be characterised and understood. As an initial step, an accurate model of the source of blast-induced vibrations is required. This paper presents a comprehensive review of approaches to modelling the blasting source in order to critically evaluate developments in the field. An overview of the blasting process and a description of the various factors which influence blast performance and the subsequent ground vibrations are also presented. Several approaches to analytically model explosives are discussed. Ground vibration prediction methods focused on seed waveform and charge weight scaling techniques are presented. Finally, numerical simulations of the blasting source are discussed, including methods to estimate the blasthole wall pressure time-history, and hydrodynamic codes.
Elaina, Nor Safira; Malik, Aamir Saeed; Shams, Wafaa Khazaal; Badruddin, Nasreen; Abdullah, Jafri Malin; Reza, Mohammad Faruque
2018-06-01
To localize sensorimotor cortical activation in 10 patients with frontoparietal tumors using quantitative magnetoencephalography (MEG) with noise-normalized approaches. Somatosensory evoked magnetic fields (SEFs) were elicited in 10 patients with somatosensory tumors and in 10 control participants using electrical stimulation of the median nerve via the right and left wrists. We localized the N20m component of the SEFs using dynamic statistical parametric mapping (dSPM) and standardized low-resolution brain electromagnetic tomography (sLORETA) combined with 3D magnetic resonance imaging (MRI). The obtained coordinates were compared between groups. Finally, we statistically evaluated the N20m parameters across hemispheres using non-parametric statistical tests. The N20m sources were accurately localized to Brodmann area 3b in all members of the control group and in seven of the patients; in the remaining three patients, however, the sources were shifted to locations outside the primary somatosensory cortex (SI). N20m amplitudes and current-source strengths in the affected (tumor) hemispheres of the patient group were significantly higher than in the unaffected hemispheres and in both hemispheres of the control group. These results were consistent for both dSPM and sLORETA approaches. Tumors in the sensorimotor cortex lead to cortical functional reorganization and an increase in N20m amplitude and current-source strengths. Noise-normalized approaches for MEG analysis that are integrated with MRI show accurate and reliable localization of sensorimotor function.
Freitag, L E; Tyack, P L
1993-04-01
A method for localization and tracking of calling marine mammals was tested under realistic field conditions that include noise, multipath, and arbitrarily located sensors. Experiments were performed in two locations using four and six hydrophones with captive Atlantic bottlenose dolphins (Tursiops truncatus). Acoustic signals from the animals were collected in the field using a digital acoustic data acquisition system. The data were then processed off-line to determine relative hydrophone positions and the animal locations. Accurate hydrophone position estimates are achieved by pinging sequentially from each hydrophone to all the others. A two-step least-squares algorithm is then used to determine sensor locations from the calibration data. Animal locations are determined by estimating the time differences of arrival of the dolphin signals at the different sensors. The peak of a matched filter output or the first cycle of the observed waveform is used to determine arrival time of an echolocation click. Cross correlation between hydrophones is used to determine inter-sensor time delays of whistles. Calculation of source location using the time difference of arrival measurements is done using a least-squares solution to minimize error. These preliminary experimental results based on a small set of data show that realistic trajectories for moving animals may be generated from consecutive location estimates.
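The final step described above, a least-squares fit to the time differences of arrival, can be sketched with a Gauss-Newton iteration. The hydrophone layout, sound speed, and noiseless TDOAs below are hypothetical; real data would start from cross-correlation peak lags.

```python
import numpy as np

C = 1500.0  # assumed sound speed in water, m/s
hydros = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0], [30.0, 40.0]])
src = np.array([12.0, 25.0])          # position to recover (metres)

# Time differences of arrival relative to hydrophone 0; with real data
# these would come from cross-correlation lags, not geometry.
d = np.linalg.norm(hydros - src, axis=1)
tdoa = (d - d[0]) / C

def locate(tdoa, guess, iters=30):
    """Gauss-Newton least squares on the TDOA residuals."""
    p = np.asarray(guess, dtype=float)
    for _ in range(iters):
        r = np.linalg.norm(hydros - p, axis=1)
        res = (r - r[0]) / C - tdoa
        u = (p - hydros) / r[:, None]     # d r_i / d p
        J = (u - u[0]) / C                # Jacobian of the residuals
        p = p - np.linalg.lstsq(J, res, rcond=None)[0]
    return p

est = locate(tdoa, guess=[15.0, 20.0])
print(est)
```

With noiseless delays and four sensors for two unknowns, the iteration recovers the source essentially exactly; with noisy delays the same residual form is minimized in the least-squares sense.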
Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann
2014-01-01
To increase the reliability of the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference between the two modalities, especially in depth localization, emphasizing its importance for combining EEG and MEG source analysis. On the other hand, localization differences which are due to the distinct sensitivity profiles of EEG and MEG persist. In case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine the location, orientation, and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and can reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence molecular tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem: all involve reconstructing the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Different from conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the new method improved the average error of preset markers by 0.3 and 0.2 pixels, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
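The ICP component of such a registration pipeline can be illustrated in 2D: alternately match nearest neighbours and solve for the best rigid transform via the SVD (Kabsch) solution. The point cloud and displacement below are synthetic stand-ins for the reconstructed surface data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2D stand-in for the 3D surface-registration step: align a
# rotated, shifted copy of a point cloud back onto the original with ICP.
src = rng.uniform(-1.0, 1.0, size=(200, 2))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([0.05, -0.02])

R, t = np.eye(2), np.zeros(2)
for _ in range(20):
    moved = src @ R.T + t
    # Nearest-neighbour correspondences (brute force, for clarity).
    d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    match = dst[d2.argmin(axis=1)]
    # Best rigid step via the SVD (Kabsch) solution.
    mu_m, mu_d = moved.mean(axis=0), match.mean(axis=0)
    Hk = (moved - mu_m).T @ (match - mu_d)
    U, _, Vt = np.linalg.svd(Hk)
    R_step = Vt.T @ U.T
    if np.linalg.det(R_step) < 0:          # guard against reflections
        Vt[-1] *= -1.0
        R_step = Vt.T @ U.T
    t = R_step @ t + (mu_d - R_step @ mu_m)
    R = R_step @ R

err0 = np.linalg.norm(src - dst, axis=1).mean()
err = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
print(f"mean alignment error: {err0:.3f} -> {err:.3f}")
```

Because the displacement is small relative to the cloud, most initial correspondences are already correct and the iteration contracts rapidly toward the true transform.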
NASA Technical Reports Server (NTRS)
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high-fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low-speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated against a variety of experimental data sets, such as UH60-A data, DNW test data, and HART II test data.
Time-Frequency Analysis of the Dispersion of Lamb Modes
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Seale, Michael D.; Smith, Barry T.
1999-01-01
Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo-Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied for the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along and perpendicular to the fiber direction. In this case, the signals contained only the lowest order symmetric and antisymmetric modes. A least squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with experimental results.
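The pseudo-Wigner-Ville distribution can be sketched directly from its definition: Fourier-transform the windowed instantaneous autocorrelation x(t+τ)x*(t−τ) at each time. In the toy below, a linear chirp stands in for a dispersive Lamb-mode arrival; the rectangular lag window and bin sizes are arbitrary choices.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (a minimal Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    w = np.zeros(n)
    w[0] = 1.0
    w[1:n // 2] = 2.0
    w[n // 2] = 1.0           # n assumed even
    return np.fft.ifft(X * w)

def pseudo_wvd(x, win_len=64):
    """Discrete pseudo-Wigner-Ville distribution with a rectangular lag
    window (a practical implementation would use a smoother window)."""
    z = analytic(x)
    n, half = len(z), win_len // 2
    W = np.zeros((n, win_len))
    m = np.arange(-half, half)
    for t0 in range(half, n - half):
        kern = z[t0 + m] * np.conj(z[t0 - m])   # instantaneous autocorrelation
        W[t0] = np.abs(np.fft.fft(kern))
    return W

# Linear chirp standing in for a dispersive arrival: instantaneous
# frequency rises with time, so the ridge of the distribution should too.
t = np.arange(512.0)
f0, f1 = 0.05, 0.20
sig = np.cos(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * len(t))))

W = pseudo_wvd(sig)
ridge = W[:, :32].argmax(axis=1)   # strongest frequency bin at each time
print(ridge[100], ridge[400])
```

Note the lag-doubling of the Wigner-Ville kernel: on this frequency axis the ridge appears at twice the instantaneous frequency, which is accounted for when converting bins back to physical frequency.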
NASA Astrophysics Data System (ADS)
Ding, R.; He, T.
2017-12-01
With the increased popularity of mobile applications and services, there has been a growing demand for more advanced mobile technologies that utilize real-time Location Based Services (LBS) data to support natural hazard response efforts. Compared to traditional sources like the census bureau, which often can only provide historical and static data, an LBS service can provide more current data to drive a real-time natural hazard response system that more accurately processes and assesses issues such as population density in areas impacted by a hazard. However, manually preparing or preprocessing the data to suit the needs of a particular application would be time-consuming. This research aims to implement a population heatmap visual analytics system based on real-time data for natural disaster emergency management. The system comprises a three-layer architecture, consisting of data collection, data processing, and visual analysis layers. Real-time, location-based data meeting certain aggregation conditions are collected from multiple sources across the Internet, then processed and stored in a cloud-based data store. Parallel computing is utilized to provide fast and accurate access to the pre-processed population data based on criteria such as the disaster event, and to generate a location-based population heatmap as well as other types of visual digital outputs using auxiliary analysis tools. At present, a prototype system has been developed that geographically covers the entire region of China and combines the population heatmap with data from an earthquake catalog database. Preliminary results indicate that the generation of dynamic population density heatmaps based on the prototype system has effectively supported rapid earthquake emergency rescue and evacuation efforts, as well as helping responders and decision makers to evaluate and assess earthquake damage.
Correlation analyses revealed that the aggregation and movement of people depended on various factors, including the time of earthquake occurrence and the location of the epicenter. This research hopes to build upon the success of the prototype system in order to improve and extend the system to support the analysis of earthquakes and other types of natural hazard events.
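The core of such a heatmap layer, binning location pings into density cells, reduces to a 2D histogram. The ping clusters and coordinates below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical location-based-service pings (lon, lat), clustered around
# two population centres; the coordinates are illustrative only.
pings = np.vstack([
    rng.normal([116.40, 39.90], 0.05, size=(500, 2)),   # cluster near Beijing
    rng.normal([121.47, 31.23], 0.05, size=(300, 2)),   # cluster near Shanghai
])

# Bin the pings into density cells: the core computation of a heatmap layer.
heat, lon_edges, lat_edges = np.histogram2d(pings[:, 0], pings[:, 1], bins=50)

i, j = np.unravel_index(heat.argmax(), heat.shape)
peak_lon = 0.5 * (lon_edges[i] + lon_edges[i + 1])
peak_lat = 0.5 * (lat_edges[j] + lat_edges[j + 1])
print(f"densest cell centre: ({peak_lon:.2f}, {peak_lat:.2f})")
```

A production system would stream pings into such grids incrementally and render the counts as colour intensity; the densest cell here falls in the larger cluster, as expected.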
NASA Astrophysics Data System (ADS)
Madankan, Reza
All across the world, toxic material clouds are emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions, and can contain chemical, biological, or radiological material. With the growing fear of natural, accidental, or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined together to perform the task of source characterization and forecasting: uncertainty quantification, optimal information collection, and data assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to guarantee accurate source parameter estimation. The performance of source characterization is strongly affected by the sensor configuration used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors, to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that the measurement of better data is guaranteed.
This is achieved by maximizing the mutual information between model predictions and observed data, given a set of kinetic constraints on the mobile sensors. A dynamic programming method has been utilized to solve the resulting optimal control problem. To complete the loop of the source characterization process, two different estimation techniques, a minimum variance estimation framework and a Bayesian inference method, have been developed to fuse model forecasts with measurement data. Incomplete information regarding the distribution of the noise in measurement data is another major challenge in the source characterization of plume dispersion incidents. This frequently happens in the assimilation of atmospheric data obtained from satellite imagery, because satellite imagery can be polluted with noise depending on weather conditions, clouds, humidity, etc. Unfortunately, there is no accurate procedure to quantify the error in recorded satellite data; hence, using classical data assimilation methods in this situation is not straightforward. In this dissertation, the basic idea of a novel approach has been proposed to tackle these types of real-world problems with more accuracy and robustness. A simple example demonstrating the real-world scenario is presented to validate the developed methodology.
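The sensor-selection criterion, maximizing mutual information between model predictions and observations, has a closed form in the linear-Gaussian case, which makes for a compact sketch. The prior covariance, noise level, and candidate "sites" below are invented, and the dissertation's dynamic-programming treatment of moving sensors is not reproduced.

```python
import numpy as np

# Linear-Gaussian toy: a 2-parameter source with Gaussian prior; each
# candidate sensor site provides one scalar measurement y = h . x + noise.
P = np.diag([4.0, 1.0])     # prior covariance of the source parameters
R = 0.5                     # measurement noise variance
candidates = {              # hypothetical sites -> measurement vectors h
    "site_A": np.array([1.0, 0.1]),
    "site_B": np.array([0.1, 1.0]),
}

def mutual_info(h):
    """I(x; y) = 0.5 ln(1 + h P h^T / R) for one Gaussian measurement."""
    return 0.5 * np.log(1.0 + h @ P @ h / R)

best = max(candidates, key=lambda s: mutual_info(candidates[s]))
print(best, {s: round(float(mutual_info(h)), 3) for s, h in candidates.items()})
```

Site A wins because it mainly observes the parameter with the largest prior uncertainty, which is exactly the intuition behind information-driven sensor placement.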
Addante, Richard, J.; Ranganath, Charan; Yonelinas, Andrew, P.
2012-01-01
Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event-related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early-onsetting FN400 effect and a later parietal old-new effect [the Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high-confidence item recognition response; accurate source judgments for items that were less confidently recognized did not exhibit the typical ERP correlates of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808
NASA Astrophysics Data System (ADS)
Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.
2013-05-01
We introduce a novel approach for imaging earthquake rupture dynamics from ground motion records based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanism, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data.
Some parameters found for the Zumpango earthquake are: Δτ = 30.2 ± 6.2 MPa, Er = (0.68 ± 0.36) × 10^15 J, G = (1.74 ± 0.44) × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09, and Mw = 6.64 ± 0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity ratio, and moment magnitude, respectively. Figure: Mw6.5 intraslab Zumpango earthquake location, station locations, and tectonic setting in central Mexico.
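The genetic-algorithm inversion strategy can be miniaturized: evolve a population of parameter vectors to minimize waveform misfit against an "observed" record. Below, a two-parameter Gaussian pulse stands in for the dynamic-rupture forward model; the population size, operators, and rates are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0.0, 1.0, 200)

def synthetic(params):
    """Toy forward model standing in for the dynamic-rupture solver:
    a Gaussian pulse with unknown centre and width."""
    c, w = params
    return np.exp(-((t - c) / w) ** 2)

target = synthetic([0.4, 0.1])                      # "observed" waveform
bounds = np.array([[0.0, 1.0], [0.02, 0.5]])        # parameter ranges

def misfit(params):
    return np.sum((synthetic(params) - target) ** 2)

# Plain genetic algorithm: tournament selection, blend crossover, mutation.
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
for gen in range(60):
    fit = np.array([misfit(p) for p in pop])
    new = [pop[fit.argmin()].copy()]                # elitism: keep the best
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit[i] < fit[j] else pop[j]   # tournament parent 1
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if fit[i] < fit[j] else pop[j]   # tournament parent 2
        alpha = rng.uniform(0.0, 1.0, 2)
        child = alpha * a + (1 - alpha) * b         # blend crossover
        child += rng.normal(0.0, 0.02, 2)           # mutation
        new.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(new)

best = pop[np.array([misfit(p) for p in pop]).argmin()]
print(best)
```

The evolved parameters land close to the true (0.4, 0.1); the statistical ensemble described in the abstract would repeat such runs to build an envelope of acceptable models.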
Evaluation of substitution monopole models for tire noise sound synthesis
NASA Astrophysics Data System (ADS)
Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.
2010-01-01
Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
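Deriving monopole volume velocities from transfer functions and indicator pressures is, per frequency line, a regularized linear inversion. The sketch below uses Tikhonov regularization on random complex data; the matrix sizes, noise level, and regularization parameter are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-frequency set-up: 6 substitution monopoles observed
# by 10 indicator microphones; H[i, j] is the (measured) transfer function
# from monopole j to microphone i, q the monopole volume velocities.
n_mics, n_mono = 10, 6
H = rng.normal(size=(n_mics, n_mono)) + 1j * rng.normal(size=(n_mics, n_mono))
q_true = rng.normal(size=n_mono) + 1j * rng.normal(size=n_mono)
p = H @ q_true + 0.01 * (rng.normal(size=n_mics) + 1j * rng.normal(size=n_mics))

def tikhonov(H, p, lam):
    """Regularised inverse q = (H^H H + lam I)^{-1} H^H p."""
    A = H.conj().T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.conj().T @ p)

q_est = tikhonov(H, p, lam=1e-3)
err = float(np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true))
print(f"relative error in recovered volume velocities: {err:.3f}")
```

With real tire data the transfer matrix is ill-conditioned at many frequencies, which is why the choice of regularization technique, evaluated in the paper, matters far more than in this well-conditioned toy.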
Evaluating the Effectiveness of DART® Buoy Networks Based on Forecast Accuracy
NASA Astrophysics Data System (ADS)
Percival, Donald B.; Denbo, Donald W.; Gica, Edison; Huang, Paul Y.; Mofjeld, Harold O.; Spillane, Michael C.; Titov, Vasily V.
2018-04-01
A performance measure for a DART® tsunami buoy network has been developed. DART® buoys are used to detect tsunamis, but the full potential of the data they collect is realized through accurate forecasts of inundations caused by the tsunamis. The performance measure assesses how well the network achieves its full potential through a statistical analysis of simulated forecasts of wave amplitudes outside an impact site and a consideration of how much the forecasts are degraded in accuracy when one or more buoys are inoperative. The analysis uses simulated tsunami amplitude time series collected at each buoy from selected source segments in the Short-term Inundation Forecast for Tsunamis database and involves a set of 1000 forecasts for each buoy/segment pair at sites just offshore of selected impact communities. Random error-producing scatter in the time series is induced by uncertainties in the source location, addition of real oceanic noise, and imperfect tidal removal. Comparison with an error-free standard leads to root-mean-square errors (RMSEs) for DART® buoys located near a subduction zone. The RMSEs indicate which buoy provides the best forecast (lowest RMSE) for sections of the zone, under a warning-time constraint for the forecasts of 3 h. The analysis also shows how the forecasts are degraded (larger minimum RMSE among the remaining buoys) when one or more buoys become inoperative. The RMSEs provide a way to assess array augmentation or redesign such as moving buoys to more optimal locations. Examples are shown for buoys off the Aleutian Islands and off the West Coast of South America for impact sites at Hilo, HI and along the US West Coast (Crescent City, CA and Port San Luis, CA, USA). A simple measure (coded green, yellow or red) of the current status of the network's ability to deliver accurate forecasts is proposed to flag the urgency of buoy repair.
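For one source segment and impact site, the measure reduces to per-buoy RMSEs against an error-free standard, plus the minimum RMSE among whichever buoys remain operative. The buoy noise levels below are fabricated simply to illustrate the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the simulated forecasts: for one source segment and one
# impact site, each buoy yields 1000 forecasts of offshore wave amplitude
# scattered around the error-free value by buoy-dependent noise (invented).
true_amp = 0.80                                   # metres, error-free standard
noise_sd = {"b1": 0.05, "b2": 0.12, "b3": 0.20}
forecasts = {b: true_amp + rng.normal(0.0, sd, 1000)
             for b, sd in noise_sd.items()}

def rmse(x):
    return float(np.sqrt(np.mean((x - true_amp) ** 2)))

scores = {b: rmse(f) for b, f in forecasts.items()}
best = min(scores, key=scores.get)                # buoy giving the best forecast

# Degradation when the best buoy is inoperative: network performance falls
# back to the minimum RMSE among the remaining buoys.
degraded = min(v for b, v in scores.items() if b != best)
print(best, round(scores[best], 3), "->", round(degraded, 3))
```

The green/yellow/red status flag described in the abstract would then follow from thresholds on this fallback RMSE.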
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Shackley, M. Steven; Morgan, Leah; Pyle, Douglas
2017-01-01
Solving issues of intersource discrimination in archaeological obsidian is a recurring problem in geoarchaeological investigation, particularly since the number of known sources of archaeological obsidian worldwide has grown nearly exponentially in the last few decades, and the complexity of archaeological questions asked has grown equally so. These two parallel aspects of archaeological investigation have required more exacting understanding of the geological relationship between sources and the more accurate analysis of these sources of archaeological obsidian. This is particularly the case in the North American Southwest where the frequency of archaeological investigation is some of the highest in the world, and the theory and method used to interpret that record has become increasingly nuanced. Here, we attempt to unravel the elemental similarity of archaeological obsidian in the Mogollon-Datil volcanic province of southwestern New Mexico where some of the most important and extensively distributed sources are located and the elemental similarity between the sources is great even though the distance between the sources is large. Uniting elemental, isotopic, and geochronological analyses as an intensive pilot study, we unpack this complexity to provide greater understanding of these important sources of archaeological obsidian.
Source localization of temporal lobe epilepsy using PCA-LORETA analysis on ictal EEG recordings.
Stern, Yaki; Neufeld, Miriam Y; Kipervasser, Svetlana; Zilberstein, Amir; Fried, Itzhak; Teicher, Mina; Adi-Japha, Esther
2009-04-01
Localizing the source of an epileptic seizure using noninvasive EEG suffers from inaccuracies produced by other generators not related to the epileptic source. The authors isolated the ictal epileptic activity and applied a source localization algorithm to identify its estimated location. Ten ictal EEG scalp recordings from five different patients were analyzed. The patients were known to have temporal lobe epilepsy with a single epileptic focus that had a concordant MRI lesion. The patients had become seizure-free following partial temporal lobectomy. A midinterval (approximately 5 seconds) period of ictal activity, starting at ictal onset, was used for principal component analysis. The level of epileptic activity at each electrode (i.e., the eigenvector of the component that manifested epileptic characteristics) was used as input for low-resolution tomography analysis for the EEG inverse solution (Zilberstain et al., 2004). The algorithm accurately and robustly identified the epileptic focus in these patients. Principal component analysis and source localization methods can be used in the future to monitor the progression of an epileptic seizure and its expansion to other areas.
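The PCA step described above, extracting a per-electrode weighting for the dominant ictal rhythm, can be sketched as follows. The channel count, sampling rate, 5 Hz rhythm, and scalp topography are invented for illustration; the LORETA inverse step that consumes these weights is not reproduced.

```python
import numpy as np

def dominant_ictal_component(eeg):
    """eeg: [n_channels, n_samples] ictal EEG segment. Returns the leading
    principal component's eigenvector (the per-electrode weighting of the
    strongest coherent activity) and its explained-variance fraction."""
    x = eeg - eeg.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return evecs[:, -1], evals[-1] / evals.sum()

# Synthetic example: a 5 Hz "ictal" rhythm projected onto 8 channels.
rng = np.random.default_rng(1)
fs, dur = 256, 5.0
t = np.arange(int(fs * dur)) / fs
topography = np.array([0.9, 0.7, 0.4, 0.2, 0.1, 0.05, 0.0, 0.0])
eeg = (np.outer(topography, np.sin(2 * np.pi * 5 * t))
       + 0.05 * rng.normal(size=(8, t.size)))
weights, explained = dominant_ictal_component(eeg)
weights = weights * np.sign(weights[np.argmax(np.abs(weights))])  # fix sign
```

The recovered `weights` approximate the injected topography, which is what a LORETA-style inverse solver would then take as input.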
NASA Astrophysics Data System (ADS)
Zimmermann, Bernhard B.; Fang, Qianqian; Boas, David A.; Carp, Stefan A.
2016-01-01
Frequency domain near-infrared spectroscopy (FD-NIRS) has proven to be a reliable method for quantification of tissue absolute optical properties. We present a full-sampling direct analog-to-digital conversion FD-NIR imager. While we developed this instrument with a focus on high-speed optical breast tomographic imaging, the proposed design is suitable for a wide range of biophotonic applications where fast, accurate quantification of absolute optical properties is needed. Simultaneous dual wavelength operation at 685 and 830 nm is achieved by concurrent 67.5 and 75 MHz frequency modulation of each laser source, respectively, followed by digitization using a high-speed (180 MS/s) 16-bit A/D converter and hybrid FPGA-assisted demodulation. The instrument supports 25 source locations and features 20 concurrently operating detectors. The noise floor of the instrument was measured at <1.4 pW/√Hz, and a dynamic range of 115+ dB, corresponding to nearly six orders of magnitude, has been demonstrated. Titration experiments consisting of 200 different absorption and scattering values were conducted to demonstrate accurate optical property quantification over the entire range of physiologically expected values.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault-cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas not only makes fault-model construction more efficient but also reduces manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
A draft map of the mouse pluripotent stem cell spatial proteome
Christoforou, Andy; Mulvey, Claire M.; Breckels, Lisa M.; Geladaki, Aikaterini; Hurrell, Tracey; Hayward, Penelope C.; Naake, Thomas; Gatto, Laurent; Viner, Rosa; Arias, Alfonso Martinez; Lilley, Kathryn S.
2016-01-01
Knowledge of the subcellular distribution of proteins is vital for understanding cellular mechanisms. Capturing the subcellular proteome in a single experiment has proven challenging, with studies focusing on specific compartments or assigning proteins to subcellular niches with low resolution and/or accuracy. Here we introduce hyperLOPIT, a method that couples extensive fractionation and quantitative high-resolution accurate-mass spectrometry with multivariate data analysis. We apply hyperLOPIT to a pluripotent stem cell population whose subcellular proteome has not been extensively studied. We provide localization data on over 5,000 proteins with unprecedented spatial resolution to reveal the organization of organelles, sub-organellar compartments, protein complexes, functional networks and steady-state dynamics of proteins and unexpected subcellular locations. The method paves the way for characterizing the impact of post-transcriptional and post-translational modification on protein location and studies involving proteome-level locational changes on cellular perturbation. An interactive open-source resource is presented that enables exploration of these data. PMID:26754106
Electric field mill network products to improve detection of the lightning hazard
NASA Technical Reports Server (NTRS)
Maier, Launa M.
1987-01-01
An electric field mill network has been used at Kennedy Space Center for over 10 years as part of the thunderstorm detection system. Several algorithms are currently available to improve the informational output of the electric field mill data. The charge distributions of roughly 50 percent of all lightning can be modeled as if the flash removed a point charge or a point dipole from the charged cloud. Using these models, the spatial differences in the lightning-induced electric field changes, and a least squares algorithm to obtain an optimum solution, the three-dimensional locations of the lightning charge centers can be determined. During the lifetime of a thunderstorm, dynamically induced charging, modeled as a current source, can be located spatially with measurements of Maxwell current density. The electric field mills can be used to calculate the Maxwell current density at times when it is equal to the displacement current density. These improvements will produce more accurate assessments of the potential electrical activity, identify active cells, and forecast thunderstorm termination.
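The point-charge retrieval described above can be sketched as a least-squares fit of the field changes observed across the mill network. The 3 x 3 mill grid, charge value, and solver settings are illustrative assumptions, not KSC values; the forward model is the standard image-charge expression for the vertical field change at ground level.

```python
import numpy as np
from scipy.optimize import least_squares

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def field_change(params, sites):
    """Vertical field change at ground-level sites when a point charge Q (C)
    at (x0, y0, height h) is neutralized; image charge in the conducting
    ground is included, giving E = Q*h / (2*pi*eps0*(d^2 + h^2)^(3/2))."""
    x0, y0, h, q = params
    d2 = (sites[:, 0] - x0) ** 2 + (sites[:, 1] - y0) ** 2
    return q * h / (2 * np.pi * EPS0 * (d2 + h ** 2) ** 1.5)

# Hypothetical network: 9 field mills on a 3x3 grid (metres).
sites = np.array([(x, y) for x in (-5e3, 0.0, 5e3) for y in (-5e3, 0.0, 5e3)])
true = (1.2e3, -0.8e3, 7e3, -20.0)          # x0, y0, height, Q for a -20 C flash
measured = field_change(true, sites)

fit = least_squares(lambda p: field_change(p, sites) - measured,
                    x0=(0.0, 0.0, 5e3, -10.0),
                    x_scale=(1e3, 1e3, 1e3, 10.0))
```

With noiseless synthetic data the fit recovers the charge location and magnitude; with real data the residuals would feed the quality checks the abstract alludes to.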
G2S: a web-service for annotating genomic variants on 3D protein structures.
Wang, Juexin; Sheridan, Robert; Sumer, S Onur; Schultz, Nikolaus; Xu, Dong; Gao, Jianjiong
2018-06-01
Accurately mapping and annotating genomic locations on 3D protein structures is a key step in structure-based analysis of genomic variants detected by recent large-scale sequencing efforts. There are several mapping resources currently available, but none of them provides a web API (Application Programming Interface) that supports programmatic access. We present G2S, a real-time web API that provides automated mapping of genomic variants on 3D protein structures. G2S can align genomic locations of variants, protein locations, or protein sequences to protein structures and retrieve the mapped residues from structures. The G2S API uses a REST-inspired design and can be used by various clients such as web browsers, command terminals, programming languages and other bioinformatics tools for bringing 3D structures into genomic variant analysis. The webserver and source code are freely available at https://g2s.genomenexus.org. Contact: g2s@genomenexus.org. Supplementary data are available at Bioinformatics online.
Requirements for Coregistration Accuracy in On-Scalp MEG.
Zetter, Rasmus; Iivanainen, Joonas; Stenroos, Matti; Parkkonen, Lauri
2018-06-22
Recent advances in magnetic sensing have made on-scalp magnetoencephalography (MEG) possible. In particular, optically-pumped magnetometers (OPMs) have reached sensitivity levels that enable their use in MEG. In contrast to the SQUID sensors used in current MEG systems, OPMs do not require cryogenic cooling and can thus be placed within millimetres of the head, enabling the construction of sensor arrays that conform to the shape of an individual's head. To properly estimate the location of neural sources within the brain, one must accurately know the position and orientation of sensors in relation to the head. With adaptable on-scalp MEG sensor arrays, this coregistration becomes more challenging than in current SQUID-based MEG systems that use rigid sensor arrays. Here, we used simulations to quantify how accurately one needs to know the position and orientation of sensors in an on-scalp MEG system. The effects that different types of localisation errors have on forward modelling and on source estimates obtained by minimum-norm estimation, dipole fitting, and beamforming are detailed. We found that sensor position errors generally have a larger effect than orientation errors and that these errors affect the localisation accuracy of superficial sources the most. To obtain similar or higher accuracy than with current SQUID-based MEG systems, RMS sensor position and orientation errors should be [Formula: see text] and [Formula: see text], respectively.
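The flavour of the simulation described above can be caricatured with a point magnetic dipole as a crude stand-in for a full MEG forward model (the paper's actual forward model, sensor layout, and error magnitudes are not reproduced): perturb the sensor positions by about 1 mm and compare the forward fields for a superficial and a deeper source.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7

def dipole_field(m, r_dip, sensors):
    """Point magnetic dipole field B(r) = mu0/4pi * (3*rhat*(m.rhat) - m)/r^3,
    sampled at an array of sensor positions. A toy forward model only."""
    r = sensors - r_dip
    d = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / d
    return MU0 / (4 * np.pi) * (3 * rhat * (rhat @ m)[:, None] - m) / d ** 3

rng = np.random.default_rng(2)
# 64 sensors scattered on a 10 cm "on-scalp" hemisphere (invented layout).
phi = rng.uniform(0, 2 * np.pi, 64)
cth = rng.uniform(0.1, 1.0, 64)
sensors = 0.10 * np.column_stack([np.sqrt(1 - cth ** 2) * np.cos(phi),
                                  np.sqrt(1 - cth ** 2) * np.sin(phi),
                                  cth])
m = np.array([0.0, 1e-8, 0.0])                 # dipole moment, arbitrary scale
pert = rng.normal(0.0, 1e-3, sensors.shape)    # ~1 mm RMS sensor position error

def rel_forward_error(src):
    b = dipole_field(m, src, sensors)
    b_p = dipole_field(m, src, sensors + pert)
    return np.linalg.norm(b_p - b) / np.linalg.norm(b)

err_superficial = rel_forward_error(np.array([0.0, 0.0, 0.08]))
err_deep = rel_forward_error(np.array([0.0, 0.0, 0.04]))
```

Even this toy model reproduces the qualitative finding that position errors corrupt the forward field of superficial sources more than that of deep ones.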
Location-assured, multifactor authentication on smartphones via LTE communication
NASA Astrophysics Data System (ADS)
Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham
2013-05-01
With the added security provided by LTE, geographical location has become an important factor for authentication to enhance the security of remote client authentication during mCommerce applications using Smartphones. Tight combination of geographical location with classic authentication factors like PINs/Biometrics in a real-time, remote verification scheme over the LTE layer connection assures the authenticator about the client itself (via PIN/biometric) as well as the client's current location, thus defining the important aspects of "who", "when", and "where" of the authentication attempt without eavesdropping or man-in-the-middle attacks. To securely integrate location as an authentication factor into the remote authentication scheme, the client's location must be verified independently, i.e. the authenticator should not solely rely on the location determined on and reported by the client's Smartphone. The latest wireless data communication technology for mobile phones (4G LTE, Long-Term Evolution), recently being rolled out in various networks, can be employed to meet this requirement of independent location verification. LTE's Control Plane LBS provisions, when integrated with user-based authentication and an independent source of localisation factors, ensure secure, efficient, continuous location tracking of the Smartphone. This tracking can be performed during normal operation of the LTE-based communication between client and network operator, enabling the authenticator to verify the client's claimed location more securely and accurately. Trials and experiments show that such an algorithm implementation is viable for present-day Smartphone-based banking via LTE communication.
NASA Astrophysics Data System (ADS)
Anderson, F. S. B.; Middleton, F.; Colchin, R. J.; Million, D.
1989-04-01
A method of accurately supporting and positioning an electron source inside a large cross-sectional area magnetic field which provides very low electron beam occlusion is reported. The application of electrical discharge machining to the fabrication of a 1-m truss support structure has provided an extremely long, rigid and mechanically strong electron gun support. Reproducible electron gun positioning to within 1 mm has been achieved at any location within a 1 × 0.6-m² area. The extremely thin sections of the support truss (≤1.5 mm) have kept the electron beam occlusion to less than 3 mm. The support and drive mechanism have been designed and fabricated at the University of Wisconsin for application to the mapping of the magnetic surface structure of the Advanced Toroidal Facility torsatron at the Oak Ridge National Laboratory.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique to achieve coarse frequency estimation (locating the peak of the FFT amplitude spectrum) is more efficient than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
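A generic two-step coarse/fine estimate in the spirit of the algorithm described above can be sketched as follows. The paper's modified zero-crossing technique is simplified here to a plain FFT peak for the coarse step followed by zero-crossing refinement; the signal parameters are invented.

```python
import numpy as np

def coarse_fine_freq(x, fs):
    """Two-step frequency estimate: coarse via the FFT amplitude peak,
    fine via linearly interpolated zero-crossing times. A simplified
    stand-in for the paper's modified zero-crossing technique."""
    # Coarse step: locate the peak of the FFT amplitude spectrum.
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0                              # ignore any DC component
    f_coarse = np.argmax(spec) * fs / len(x)

    # Fine step: average period between interpolated zero crossings.
    idx = np.flatnonzero(np.signbit(x[:-1]) != np.signbit(x[1:]))
    t_cross = (idx - x[idx] / (x[idx + 1] - x[idx])) / fs
    f_fine = (len(t_cross) - 1) / (2 * (t_cross[-1] - t_cross[0]))
    return f_coarse, f_fine

# Synthetic sinusoid with a weak 3rd harmonic, as in the paper's setting.
fs, f0 = 1000.0, 50.3
t = np.arange(int(2 * fs)) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 3 * f0 * t)
f_coarse, f_fine = coarse_fine_freq(x, fs)
```

The coarse estimate is limited to the FFT bin spacing (here 0.5 Hz), while the zero-crossing refinement resolves the fractional part, which is the point of a coarse-plus-fine design.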
Test to Extract Soil Properties Using the Seismic Hammer™ Active Seismic Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Rebekah F.; Abbott, Robert E.
Geologic material properties are necessary parameters for ground motion modeling and are difficult and expensive to obtain via traditional methods. Alternative methods to estimate soil properties require a measurement of the ground's response to a force. A possible method of obtaining these measurements is active-source seismic surveys, but measurements of the ground response at the source must also be available. The potential of seismic sources to obtain soil properties is limited, however, by the repeatability of the source. Explosive and hammer surveys are not repeatable because of variable ground coupling or swing strength. On the other hand, the Seismic Hammer™ (SH) is consistent in the amount of energy it inputs into the ground. In addition, it leaves large physical depressions as a result of ground compaction. The volume of ground compaction varies by location. Here, we hypothesize that the physical depressions left in the earth by the SH correlate to the energy recorded by nearby geophones, and are therefore a measurement of soil physical properties. Using measurements of the volume of shot holes, we compare the spatial distribution of the volume of ground compacted between the different shot locations. We then examine energy recorded by the nearest 50 geophones and compare the change in amplitude across hits at the same location. Finally, we use the percent difference between the energy recorded by the first and later hits at a location to test for a correlation to the volume of the shot depressions. We find that: * Ground compaction at the shot depression does cluster geographically, but does not correlate to known surface features. * Energy recorded by nearby geophones reflects ground refusal after several hits. * There is no correlation between shot volume and changes in energy at particular shot locations. Deeper material properties (i.e., below the depth of surface compaction) may be contributing to the changes in energy propagation.
* Without further processing of the data, shot-depression volumes are insufficient for understanding the ground response to the SH. Without an accurate understanding of the ground response, we cannot extract material properties in conjunction with the SH survey. Additional processing, including picking direct arrivals and applying static corrections, may yield positive results.
Effect of Blast Injury on Auditory Localization in Military Service Members.
Kubli, Lina R; Brungart, Douglas; Northern, Jerry
Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. 
However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.
Development of Vertical Cable Seismic System (2)
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Tsukahara, H.; Ishikawa, K.
2012-12-01
The vertical cable seismic is one of the reflection seismic methods. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean bottom sources. Analyzing the reflections from the sub-seabed, we can look into the subsurface structure. This type of survey is generally called VCS (Vertical Cable Seismic). Because VCS is an efficient high-resolution 3D seismic survey method for a spatially-bounded area, we proposed the method for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. Our first VCS survey was carried out in Lake Biwa, Japan in November 2009 as a feasibility study. Prestack depth migration was applied to the 3D VCS data to obtain a high-quality 3D depth volume. Based on the results from the feasibility study, we developed two autonomous recording VCS systems. After a trial experiment in the open ocean at a water depth of about 400 m, we carried out the second VCS survey at Iheya Knoll with a deep-towed source. In this survey, we established the procedures for the deployment/recovery of the system and examined the locations and the fluctuations of the vertical cables at a water depth of around 1000 m. The acquired VCS data clearly show the reflections from the sub-seafloor. Through the experiment, we confirmed that our VCS system works well even in the severe circumstances around the locations of seafloor hydrothermal deposits. We carried out two field surveys in 2011: one, a 3D survey with a boomer as a high-resolution surface source; the other, an actual field survey in the Izena Cauldron, an active hydrothermal area in the Okinawa Trough.
Through these surveys, we have confirmed that uncertainty in the locations of the source and of the hydrophones in the water can lower the quality of the subsurface image. It is therefore necessary to develop a total survey system that assures accurate positioning and reliable deployment techniques. When shooting at the sea surface, GPS navigation is available, but for a deep-towed or ocean-bottom source, the accuracy of shot positioning with SSBL/USBL is not sufficient for the very high-resolution imaging requested for the SMS survey. We will incorporate an accurate LBL navigation system with VCS. The LBL navigation system has been developed by IIS of the University of Tokyo; the error is estimated to be less than 10 cm at a water depth of 3000 m. Another approach is that the shot points can be calculated using the first breaks of the VCS data after the VCS locations are estimated by slant-ranging from the sea surface. Our VCS system has been designed as a survey tool for hydrothermal deposits, but it will also be applicable to deep-water site surveys or geohazard assessments such as active faults.
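The first-break approach mentioned above, computing shot points from first-break times once the hydrophone positions are known, can be sketched as a least-squares fit. The cable geometry, sound speed, and solver settings below are illustrative assumptions, not the actual survey configuration.

```python
import numpy as np
from scipy.optimize import least_squares

C_WATER = 1500.0  # m/s, nominal sound speed in seawater (an assumption)

def locate_shot(hydrophones, t_first_break, t0_guess=0.0):
    """Least-squares shot location (x, y, z) and shot time t0 from
    first-break arrival times, given known hydrophone positions
    (e.g. from slant-ranging)."""
    def residual(p):
        x, y, z, t0 = p
        dist = np.linalg.norm(hydrophones - np.array([x, y, z]), axis=1)
        return t0 + dist / C_WATER - t_first_break
    p0 = (*hydrophones.mean(axis=0)[:2], 0.0, t0_guess)
    return least_squares(residual, p0, x_scale=(100.0, 100.0, 100.0, 0.1)).x

# Synthetic geometry: four vertical cables, four hydrophones each (metres).
cables = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
depths = np.linspace(-1000.0, -700.0, 4)
hyd = np.array([(cx, cy, z) for cx, cy in cables for z in depths])

true_shot = np.array([-300.0, 400.0, -20.0])   # near-surface towed source
t_obs = 0.5 + np.linalg.norm(hyd - true_shot, axis=1) / C_WATER
est = locate_shot(hyd, t_obs, t0_guess=0.4)
```

A single cable alone would leave an azimuthal ambiguity around its axis, which is one reason multiple cables (or independent slant-range positioning) are needed.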
NASA Astrophysics Data System (ADS)
Dannemann, F. K.; Park, J.; Marcillo, O. E.; Blom, P. S.; Stump, B. W.; Hayward, C.
2016-12-01
Data from five infrasound arrays in the western US jointly operated by the University of Utah Seismograph Stations and Southern Methodist University are used to test a database-centric processing pipeline, InfraPy, for automated event detection, association and location. Infrasonic array data from a one-year period (January 1, 2012 to December 31, 2012) are used. This study focuses on the identification and location of 53 ground-truth verified events produced by near-surface military explosions at the Utah Test and Training Range (UTTR). Signals are detected using an adaptive F-detector, which accounts for correlated and uncorrelated time-varying noise in order to reduce false detections due to the presence of coherent noise. Variations in detection azimuth and correlation are found to be consistent with seasonal changes in atmospheric winds. The Bayesian infrasonic source location (BISL) method is used to produce source location and time credibility contours based on posterior probability density functions. Updates to the previous BISL methodology include the application of celerity range and azimuth deviation distributions in order to accurately account for the spatial and temporal variability of infrasound propagation through the atmosphere. These priors are estimated by ray tracing through Ground-to-Space (G2S) atmospheric models as a function of season and time of day, using historic atmospheric characterizations from 2007 to 2013. Of the 53 events, 31 are successfully located using the InfraPy pipeline. Confidence contour areas for maximum a posteriori event locations produce error estimates that are reduced by a maximum of 98% and an average of 25% relative to location estimates utilizing a simple time-independent uniform atmosphere.
We compare real-time ray tracing results with the statistical atmospheric priors used in this study to examine large differences between known and estimated origin times that might be due to the misidentification of infrasonic phases. This work provides an opportunity to improve atmospheric model predictions by understanding atmospheric variability at the station level.
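The BISL-style location step described above can be caricatured as a grid posterior over source location, with origin time maximized analytically. A single nominal celerity and a Gaussian timing error stand in for the paper's seasonal celerity-range and azimuth-deviation priors; the array coordinates are invented.

```python
import numpy as np

def location_posterior(array_xy, t_arrivals, grid_x, grid_y,
                       celerity=0.30, sigma_t=15.0):
    """Unnormalized-then-normalized posterior over source location on a grid,
    from infrasound arrival times (s) at several arrays (km coordinates).
    celerity in km/s; sigma_t is the assumed Gaussian timing error (s)."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    # Predicted travel time from each grid node to each array.
    tt = np.stack([np.hypot(gx - ax, gy - ay) / celerity for ax, ay in array_xy])
    resid = t_arrivals[:, None, None] - tt
    t0 = resid.mean(axis=0)                 # analytic maximum over origin time
    post = np.exp(-0.5 * np.sum((resid - t0) ** 2, axis=0) / sigma_t ** 2)
    return post / post.sum()

# Synthetic example: five arrays (km), source at (120, -40), origin time 100 s.
arrays = np.array([(0, 0), (300, 50), (150, 250), (-100, 120), (200, -200)], float)
src = np.array([120.0, -40.0])
t_arr = 100.0 + np.hypot(*(arrays - src).T) / 0.30
grid_x = np.linspace(-200, 400, 121)
grid_y = np.linspace(-300, 300, 121)
post = location_posterior(arrays, t_arr, grid_x, grid_y)
ix, iy = np.unravel_index(post.argmax(), post.shape)
```

Credibility contours like those in the abstract would be level sets of `post`; replacing the single celerity with a celerity distribution is what the updated BISL priors do.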
Advanced computer techniques for inverse modeling of electric current in cardiac tissue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.
1996-08-01
For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as those found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac-generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.
NASA Technical Reports Server (NTRS)
Pierce, E. T.
1969-01-01
The properties of sferics (the electric and magnetic fields generated by electrified clouds and lightning flashes) are briefly surveyed, with both the source disturbance and the influence of propagation examined. Methods of observing sferics and their meteorological implications are discussed. It is concluded that close observations of electrostatic and radiation fields are very informative, respectively, about the charge distribution and spark processes in a cloud; that ground-level sferics stations can accurately locate the positions of individual lightning flashes and furnish valuable knowledge on the properties of the discharges; but that satellite measurements only provide general information on the level of thundery activity over large geographical regions.
Zinc sulfide in intestinal cell granules of Ancylostoma caninum adults
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gianotti, A.J.; Clark, D.T.; Dash, J.
1991-04-01
A source of confusion has existed since the turn of the century about the reddish brown, weakly birefringent 'sphaerocrystals' located in the intestines of strongyle nematodes, Strongylus and Ancylostoma. X-ray diffraction and energy dispersive spectrometric analyses were used for accurate determination of the crystalline order and elemental composition of the granules in the canine hookworm Ancylostoma caninum. The composition of the intestinal pigmented granules was identified unequivocally as zinc sulfide. It seems most probable that the granules serve to detoxify high levels of metallic ions (specifically zinc) present due to the large intake of host blood.
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
1997-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify which portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
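One ingredient of such a retrieval, triangulating a source from magnetic bearings alone via linear least squares, can be sketched as below. The station layout is invented, and the paper's full bearing-plus-arrival-time solution is not reproduced; each bearing simply constrains the source to a line through the measuring station.

```python
import numpy as np

def triangulate(stations, bearings_deg):
    """Linear least-squares source location from magnetic bearings alone.
    Bearings are azimuths from north (the +y axis), in degrees. Each bearing
    gives the line constraint (x - xi)*cos(th) - (y - yi)*sin(th) = 0,
    which is linear in the unknown source position (x, y)."""
    th = np.radians(bearings_deg)
    a = np.column_stack([np.cos(th), -np.sin(th)])
    b = stations[:, 0] * np.cos(th) - stations[:, 1] * np.sin(th)
    loc, *_ = np.linalg.lstsq(a, b, rcond=None)
    return loc

# Synthetic network of four direction finders (km) and a known source.
stations = np.array([(0.0, 0.0), (100.0, 0.0), (0.0, 80.0), (120.0, 90.0)])
src = np.array([40.0, 30.0])
bearings = np.degrees(np.arctan2(src[0] - stations[:, 0],
                                 src[1] - stations[:, 1]))
est = triangulate(stations, bearings)
```

With noisy bearings the same least-squares system yields the optimum-fit intersection of the bearing lines; adding arrival times to the design matrix is what the paper's combined solution does.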
Drilling and Testing the DOI041A Coalbed Methane Well, Fort Yukon, Alaska
Clark, Arthur; Barker, Charles E.; Weeks, Edwin P.
2009-01-01
The need for affordable energy sources is acute in rural communities of Alaska, where costly diesel fuel must be delivered by barge or plane for power generation. Additionally, the transport, transfer, and storage of fuel pose great difficulty in these regions. Although small-scale energy development in remote Arctic locations presents unique challenges, identifying and developing economic, local sources of energy remains a high priority for state and local government. Many areas in rural Alaska contain widespread coal resources that may contain significant amounts of coalbed methane (CBM) that, when extracted, could be used for power generation. However, in many of these areas, little is known concerning the properties that control CBM occurrence and production, including coal bed geometry, coalbed gas content and saturation, reservoir permeability and pressure, and water chemistry. Therefore, drilling and testing to collect these data are required to accurately assess the viability of CBM as a potential energy source in most locations. In 2004, the U.S. Geological Survey (USGS) and Bureau of Land Management (BLM), in cooperation with the U.S. Department of Energy (DOE), the Alaska Department of Geological and Geophysical Surveys (DGGS), the University of Alaska Fairbanks (UAF), the Doyon Native Corporation, and the village of Fort Yukon, organized and funded the drilling of a well at Fort Yukon, Alaska, to test coal beds for CBM developmental potential. Fort Yukon is a town of about 600 people, composed mostly of Gwich'in Athabascan Native Americans. It is located near the center of the Yukon Flats Basin, approximately 145 mi northeast of Fairbanks.
NASA Astrophysics Data System (ADS)
Torosean, Sason; Flynn, Brendan; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason; Axelsson, Johan; Pogue, Brian W.
2012-02-01
An ultrasound-coupled, handheld-probe-based optical fluorescence molecular tomography (FMT) system has been in development for the purpose of quantifying the production of protoporphyrin IX (PPIX) in aminolevulinic acid (ALA)-treated basal cell carcinoma (BCC) in vivo. The design couples fiber-based spectral sampling of PPIX fluorescence emission with a high-frequency ultrasound imaging system, allowing regionally localized fluorescence intensities to be quantified [1]. The optical data are obtained by sequential excitation of the tissue with a 633 nm laser at four source locations, with parallel detection at each of five interspersed detection locations. This method of acquisition permits fluorescence detection at both superficial and deep locations in the ultrasound field. The optical boundary data, tissue layers segmented from the ultrasound image, and diffusion theory are used to estimate the fluorescence in tissue layers. To improve the recovery of the fluorescence signal of PPIX, eliminating tissue autofluorescence is of great importance. Here the approach was to utilize measurements that straddled the steep Q-band excitation peak of PPIX, via the integration of an additional laser source exciting at 637 nm, a wavelength with a twofold lower PPIX excitation value than 633 nm. The autofluorescence spectrum acquired with the 637 nm laser is then used to spectrally decouple the fluorescence data and produce an accurate fluorescence emission signal, because the two wavelengths have very similar autofluorescence but substantially different PPIX excitation levels. The accuracy of this method, using a single source-detector pair setup, is verified through animal tumor model experiments, and the result is compared to different methods of fluorescence signal recovery.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the Delayed Rejection Adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
Resolved Star Formation in Galaxies Using Slitless Spectroscopy
NASA Astrophysics Data System (ADS)
Pirzkal, Norbert; Finkelstein, Steven L.; Larson, Rebecca L.; Malhotra, Sangeeta; Rhoads, James E.; Ryan, Russell E.; Tilvi, Vithal; FIGS Team
2018-06-01
The ability to spatially resolve individual star-formation regions in distant galaxies and simultaneously extract their physical properties via emission lines is a critical step forward in studying the evolution of galaxies. While efficient, deep slitless spectroscopic observations offer a blurry view of the summed properties of galaxies. We present our studies of resolved star formation over a wide range of redshifts, including high-redshift Ly-a sources. The unique capabilities of the WFC3 IR grism and our two-dimensional emission line method (EM2D) allow us to accurately identify the specific spatial origin of emission lines in galaxies, thus creating a spatial map of star-formation sites in any given galaxy. This method requires the use of multiple position angles on the sky to accurately derive both the location and the observed wavelengths of these emission lines. This has the added benefit of producing better-defined redshifts for these sources. Building on our success in applying the EM2D method to galaxies with [OII], [OIII], and Ha emission lines, we have also applied EM2D to high-redshift (z>6) Ly-a emitting galaxies. We are also able to produce accurate 2D emission line maps (MAP2D) of the Ly-a emission in WFC3 IR grism observations, looking for evidence that a significant amount of resonant scattering is taking place in high-redshift galaxies such as in a newly identified z=7.5 Faint Infrared Grism Survey (FIGS) Ly-a galaxy.
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E.; Shelly, D. R.
2013-12-01
Observations of non-volcanic tremor have become ubiquitous in recent years. In spite of the abundance of observations, locating tremor remains a difficult task because of the lack of distinctive phase arrivals. Here we use time-reverse-imaging techniques that do not require identifying phase arrivals to locate individual low-frequency earthquakes (LFEs) within tremor episodes on the San Andreas fault near Cholame, California. Time windows of 1.5-second duration containing LFEs are selected from continuously recorded waveforms of the local seismic network filtered between 1 and 5 Hz. We propagate the time-reversed seismic signal back through the subsurface using a staggered-grid finite-difference code. Assuming all rebroadcasted waveforms result from similar wave fields at the source origin, we search for wave field coherence in time and space to obtain the source location and origin time where the constructive interference is a maximum. We use an interpolated velocity model with a grid spacing of 100 m and a 5 ms time step to calculate the relative curl field energy amplitudes for each rebroadcasted seismogram every 50 ms for each grid point in the model. Finally, we perform a grid search for coherency in the curl field using a sliding time window, taking the absolute value of the correlation coefficient to account for differences in radiation pattern. The highest median cross-correlation coefficient value at a given grid point indicates the source location for the rebroadcasted event. Horizontal location errors based on the spatial extent of the highest 10% of cross-correlation coefficients are on the order of 4 km, and vertical errors on the order of 3 km. Furthermore, a test of the method using earthquake data shows that the method produces an identical hypocentral location (within errors) to that obtained by standard ray-tracing methods.
We also compare the event locations to an LFE catalog that locates LFEs from stacked waveforms of repeated LFEs identified by cross-correlation techniques [Shelly and Hardebeck, 2010]. The LFE catalog uses stacks of at least several hundred templates to identify phase arrivals used to estimate the location. We find epicentral locations for individual LFEs based on the time-reverse-imaging technique are within ~4 km of the LFE catalog locations [Shelly and Hardebeck, 2010]. LFEs locate between 15 and 25 km depth, similar to the focal depths found in previous studies of the region. Overall, the method can provide robust locations of individual LFEs without identifying and stacking hundreds of LFE templates; the locations are also more accurate than envelope location methods, which have errors on the order of tens of km [Horstmann et al., 2013].
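The coherence scan at the heart of the method can be illustrated with a much simpler stand-in: instead of rebroadcasting through a 3-D finite-difference model, delay each trace by the travel time from a trial source in a homogeneous 1-D medium and measure the median absolute inter-trace correlation. All numbers below (geometry, velocity, wavelet) are hypothetical:

```python
import numpy as np

fs = 200.0                                       # sample rate (Hz)
v = 3.0                                          # assumed S-wave speed (km/s)
stations = np.array([-20.0, -5.0, 8.0, 25.0])    # 1-D station coordinates (km)
true_x = 4.0                                     # hypothetical LFE location (km)

t = np.arange(0.0, 12.0, 1.0 / fs)

def wavelet(t0):
    """Ricker-like pulse arriving at time t0."""
    a = (t - t0) * 20.0
    return (1.0 - 2.0 * a**2) * np.exp(-a**2)

# Synthetic records: the same pulse with station-dependent moveout.
traces = np.array([wavelet(1.0 + abs(s - true_x) / v) for s in stations])

def coherence(x):
    """Median |correlation| of traces after undoing moveout for trial source x."""
    shifted = []
    for s, tr in zip(stations, traces):
        n = int(round(abs(s - x) / v * fs))
        shifted.append(np.roll(tr, -n))          # align pulses if x is correct
    cc = np.abs(np.corrcoef(np.array(shifted)))  # |r| ignores polarity flips
    iu = np.triu_indices(len(stations), k=1)
    return np.median(cc[iu])

grid = np.arange(-10.0, 15.0, 0.25)
best_x = grid[np.argmax([coherence(x) for x in grid])]
```

Taking the absolute value of the correlation, as in the abstract, makes the statistic insensitive to radiation-pattern polarity differences between stations.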
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
NASA Astrophysics Data System (ADS)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.
2015-10-01
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
NASA Astrophysics Data System (ADS)
Gately, Conor; Hutyra, Lucy
2016-04-01
In 2013, on-road mobile sources were responsible for over 26% of U.S. fossil fuel carbon dioxide (ffCO2) emissions, and over 34% of both CO and NOx emissions. However, accurately representing these emissions at the scale of urban areas remains a difficult challenge. Quantifying emissions at the scale of local streets and highways is critical to provide policymakers with the information needed to develop appropriate mitigation strategies and to guide research into the underlying processes that drive mobile emissions. Quantification of vehicle ffCO2 emissions at high spatial and temporal resolutions requires a detailed synthesis of data on traffic activity, roadway attributes, fleet characteristics, and vehicle speeds. To accurately characterize criteria air pollutant emissions, information on local meteorology is also critical, as temperature and relative humidity can affect emission rates of these pollutants by as much as 400%. As the health impacts of air pollutants are more severe for residents living in close proximity (<500 m) to road sources, it is critical that inventories of these emissions rely on highly resolved source data to locate potential hot-spots of exposure. In this study we utilize real-time GPS estimates of vehicle speeds to estimate ffCO2 and criteria air pollutant emissions at multiple spatial and temporal scales across a large metropolitan area. We observe large variations in emissions associated with diurnal activity patterns, congestion, sporting and civic events, and weather anomalies. We discuss the advantages and challenges of using highly resolved source data to quantify emissions at a roadway scale, and the potential of this methodology for forecasting the air quality impacts of changes in infrastructure, urban planning policies, and regional climate.
NASA Astrophysics Data System (ADS)
Gately, C.; Hutyra, L.; Sue Wing, I.; Peterson, S.; Janetos, A.
2015-12-01
In 2013, on-road mobile sources were responsible for over 26% of U.S. fossil fuel carbon dioxide (ffCO2) emissions, and over 34% of both CO and NOx emissions. However, accurately representing these emissions at the scale of urban areas remains a difficult challenge. Quantifying emissions at the scale of local streets and highways is critical to provide policymakers with the information needed to develop appropriate mitigation strategies and to guide research into the underlying processes that drive mobile emissions. Quantification of vehicle ffCO2 emissions at high spatial and temporal resolutions requires a detailed synthesis of data on traffic activity, roadway attributes, fleet characteristics, and vehicle speeds. To accurately characterize criteria air pollutant emissions, information on local meteorology is also critical, as temperature and relative humidity can affect emission rates of these pollutants by as much as 400%. As the health impacts of air pollutants are more severe for residents living in close proximity (<500 m) to road sources, it is critical that inventories of these emissions rely on highly resolved source data to locate potential hot-spots of exposure. In this study we utilize real-time GPS estimates of vehicle speeds to estimate ffCO2 and criteria air pollutant emissions at multiple spatial and temporal scales across a large metropolitan area. We observe large variations in emissions associated with diurnal activity patterns, congestion, sporting and civic events, and weather anomalies. We discuss the advantages and challenges of using highly resolved source data to quantify emissions at a roadway scale, and the potential of this methodology for forecasting the air quality impacts of changes in infrastructure, urban planning policies, and regional climate.
Magma-Tectonic Interactions in the Main Ethiopian Rift; Insights into Rifting Processes
NASA Astrophysics Data System (ADS)
Greenfield, T.; Keir, D.; Tessema, T.; Lloyd, R.; Biggs, J.; Ayele, A.; Kendall, J. M.
2017-12-01
We report observations made around the Bora-Tulu Moye volcanic field in the Main Ethiopian Rift (MER). A network of seismometers deployed around the volcano for one and a half years reveals the recent state of the volcano. Accurate earthquake locations and focal mechanisms are combined with surface deformation and mapping of faults, fissures, and geothermally active areas to reveal the interaction between magmatism and intra-rift faulting. More than 1000 earthquakes are detected and located, making the Bora-Tulu Moye volcanic field one of the most seismically active regions of the MER. Earthquakes are located at depths of less than 5 km below the surface and range in magnitude from 1.5 to 3.5. Surface deformation of Bora-Tulu Moye is observed using satellite-based radar interferometry (InSAR) recorded before and during the seismic deployment. Since 2004, deformation has oscillated between uplift and subsidence centered at the same spatial location but different depths. We constrain the source of the uplift to be at 7 km depth, while the source of the subsidence is shallower. Micro-earthquake locations reveal that earthquakes are located around the edge of the observed deformation and record the activation of normal faults orientated at 025°. The spatial link between surface deformation and brittle failure suggests that significant hydrothermal circulation driven by an inflating shallow heat source is inducing brittle failure. Elsewhere, seismicity is focused in areas of significant surface alteration from hydrothermal processes. We use shear-wave splitting of local earthquakes to image the stress state of the volcano. A combination of rift-parallel and rift-oblique fast directions is observed, indicating the volcano has a significant influence on the crustal stresses. Volcanic activity around Bora-Tulu Moye has migrated eastwards over time, closer to the intra-rift fault system, the Wonji Fault Belt.
How and why this occurs relates to changes in the melt supply to the upper crust from depth and has implications for the early stages of rift evolution and for volcanic and tectonic hazard in Ethiopia and rifts generally.
NASA Astrophysics Data System (ADS)
Bao, X.; Shen, Y.; Wang, N.
2017-12-01
Accurate estimation of the source moment is important for discriminating underground explosions from earthquakes and other seismic sources. In this study, we invert for the full moment tensors of the recent seismic events (since 2016) at the Democratic People's Republic of Korea (DPRK) Punggye-ri test site. We use waveform data from broadband seismic stations located in China, Korea, and Japan in the inversion. Using a non-staggered-grid finite-difference algorithm, we calculate the strain Green's tensors (SGT) based on one-dimensional (1D) and three-dimensional (3D) Earth models. Taking advantage of source-receiver reciprocity, a SGT database pre-calculated and stored for the Punggye-ri test site is used in the inversion for the source mechanism of each event. With the source locations estimated from cross-correlation using regional Pn and Pn-coda waveforms, we obtain the optimal source mechanism that best fits synthetics to the observed waveforms of both body and surface waves. The moment solutions of the first three events (2016-01-06, 2016-09-09, and 2017-09-03) show dominant isotropic components, as expected from explosions, though there are also notable non-isotropic components. The last event (8 minutes after the mb 6.3 explosion in 2017) contained a mainly implosive component, suggesting a collapse following the explosion. The solutions from the 3D model fit the observed waveforms better than the corresponding solutions from the 1D model. The uncertainty in the resulting moment solution is influenced by heterogeneities not resolved by the Earth model, as indicated by the waveform misfit. Using the moment solutions, we predict the peak ground acceleration at the Punggye-ri test site and compare the prediction with corresponding InSAR and other satellite images.
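Once strain Green's tensors are in hand, the moment-tensor inversion itself is linear least squares in the six independent components. A toy sketch with a random kernel standing in for real SGT-derived waveform kernels (everything here is synthetic); the isotropic/deviatoric split at the end mirrors the explosion-discrimination step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Green's-function kernel: n_samples x 6, one column per
# independent moment-tensor component (Mxx, Myy, Mzz, Mxy, Mxz, Myz).
G = rng.standard_normal((500, 6))

# A dominantly isotropic (explosion-like) source plus a small deviatoric part.
m_true = np.array([1.0, 1.0, 1.0, 0.1, -0.05, 0.02])
d = G @ m_true + 0.01 * rng.standard_normal(500)   # synthetic data + noise

# Linear least-squares moment-tensor inversion.
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)

# Decompose into isotropic and deviatoric parts.
iso = m_hat[:3].mean()             # isotropic moment = trace / 3
deviatoric = m_hat[:3] - iso       # diagonal deviatoric residual
```

In the study itself the kernel columns come from the pre-computed, reciprocity-based SGT database, and the misfit is evaluated against both body- and surface-wave windows; the algebra of the inversion is unchanged.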
LLNL Location and Detection Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, S C; Harris, D B; Anderson, M L
2003-07-16
We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) develop network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions, with known locations, to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic.
Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.
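A subspace detector of the kind described can be sketched in a few lines: build an orthonormal basis from the SVD of a set of aligned templates, then score each incoming window by the fraction of its energy captured by that basis. The templates below are synthetic, with a randomly varying "source time function" to mimic the shot-to-shot variability the abstract highlights:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400                                   # samples per detection window

# Hypothetical "mine blast" templates: a common path waveform convolved with
# a short, randomly varying source time function for each shot.
base = np.sin(2 * np.pi * 8 * np.arange(n) / 200.0) * np.hanning(n)
templates = np.array([np.convolve(base, rng.uniform(0.5, 1.5, 5), 'same')
                      for _ in range(12)])

# Signal subspace: leading left singular vectors of the template matrix.
U, s, _ = np.linalg.svd(templates.T, full_matrices=False)
basis = U[:, :3]                          # rank-3 basis spans the shot family

def detection_stat(x):
    """Fraction of window energy captured by the signal subspace (0..1)."""
    proj = basis @ (basis.T @ x)
    return np.dot(proj, proj) / np.dot(x, x)

# A new shot from the same family vs. a pure-noise window.
event = (np.convolve(base, rng.uniform(0.5, 1.5, 5), 'same')
         + 0.1 * rng.standard_normal(n))
noise = rng.standard_normal(n)
```

Because the basis spans a family of source time functions rather than a single waveform, the statistic stays high for shots that would correlate poorly with any one template; the "lagged" variant additionally slides the window to absorb alignment uncertainty.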
Systems and methods for optically measuring properties of hydrocarbon fuel gases
Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.
1998-10-13
A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining the energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution. 14 figs.
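The processing chain rests on the Beer-Lambert law: at low optical depth, absorbance is linear in component concentrations, so composition (and hence heating value) follows from linear least squares against reference spectra. The spectra below are invented Gaussian bands, not real methane/ethane/propane cross-sections, and the heating values are approximate published figures:

```python
import numpy as np

rng = np.random.default_rng(3)
wavelengths = np.linspace(700, 1000, 120)   # hypothetical near-IR band (nm)

def band(center, width):
    """Gaussian absorption band (stand-in for a measured reference spectrum)."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Invented reference spectra for three fuel-gas components.
spectra = np.column_stack([band(760, 15) + 0.4 * band(910, 20),   # "methane"
                           band(800, 18) + 0.3 * band(950, 25),   # "ethane"
                           band(860, 22)])                        # "propane"
heating_value = np.array([1010.0, 1770.0, 2516.0])  # Btu/scf (approximate)

x_true = np.array([0.90, 0.07, 0.03])               # mole fractions
absorbance = spectra @ x_true + 0.002 * rng.standard_normal(len(wavelengths))

# Beer-Lambert: absorbance is linear in composition, so least squares
# recovers the mole fractions, and the BTU content follows directly.
x_hat, *_ = np.linalg.lstsq(spectra, absorbance, rcond=None)
btu = heating_value @ x_hat
```

The calibration simplicity claimed in the abstract follows from this linearity: only the reference spectra need to be characterized once per instrument.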
NASA Technical Reports Server (NTRS)
Chen, Chien-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
NASA Technical Reports Server (NTRS)
Herrero, F. A.; Mayr, H. G.; Harris, I.; Varosi, F.; Meriwether, J. W., Jr.
1984-01-01
Theoretical predictions of thermospheric gravity wave oscillations are compared with observed neutral temperatures and velocities. The data were taken in February 1983 using a Fabry-Perot interferometer located on Greenland, close to impulse heat sources in the auroral oval. The phenomenon was modeled in terms of linearized equations of motion of the atmosphere on a slowly rotating sphere. Legendre polynomials were used as eigenfunctions, and the transfer function amplitude surface was characterized by maxima in the wavenumber-frequency plane. Good agreement between predicted and observed velocities and temperatures was attained in the 250-300 km altitude range. The amplitude of the vertical velocity, however, was not accurately predicted, nor was the temperature variability. The vertical velocity did exhibit maxima and minima in response to corresponding temperature changes.
NASA Astrophysics Data System (ADS)
Herrero, F. A.; Mayr, H. G.; Harris, I.; Varosi, F.; Meriwether, J. W., Jr.
1984-09-01
Theoretical predictions of thermospheric gravity wave oscillations are compared with observed neutral temperatures and velocities. The data were taken in February 1983 using a Fabry-Perot interferometer located on Greenland, close to impulse heat sources in the auroral oval. The phenomenon was modeled in terms of linearized equations of motion of the atmosphere on a slowly rotating sphere. Legendre polynomials were used as eigenfunctions, and the transfer function amplitude surface was characterized by maxima in the wavenumber-frequency plane. Good agreement between predicted and observed velocities and temperatures was attained in the 250-300 km altitude range. The amplitude of the vertical velocity, however, was not accurately predicted, nor was the temperature variability. The vertical velocity did exhibit maxima and minima in response to corresponding temperature changes.
Systems and methods for optically measuring properties of hydrocarbon fuel gases
Adler-Golden, Steven; Bernstein, Lawrence S.; Bien, Fritz; Gersh, Michael E.; Goldstein, Neil
1998-10-13
A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining the energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution.
Comparison of Model Prediction with Measurements of Galactic Background Noise at L-Band
NASA Technical Reports Server (NTRS)
LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, Willam J.; Skou, Niels; Sobjaerg, S.
2004-01-01
The spectral window at L-band (1.413 GHz) is important for passive remote sensing of surface parameters such as soil moisture and sea surface salinity that are needed to understand the hydrological cycle and ocean circulation. Radiation from celestial (mostly galactic) sources is strong in this window, and an accurate accounting for this background radiation is often needed for calibration. Modern radio astronomy measurements in this spectral window have been converted into a brightness temperature map of the celestial sky at L-band suitable for use in correcting passive measurements. This paper presents a comparison of the background radiation predicted by this map with measurements made with several modern L-band remote sensing radiometers. The agreement validates the map and the procedure for locating the source of down-welling radiation.
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Zhang, J. L.; Reid, J. S.; Curtis, C. A.; Westphal, D. L.
2007-12-01
Quantitative models of the transport and evolution of atmospheric pollution have graduated from the laboratory to become a part of the operational activity of forecast centers. Scientists studying the composition and variability of the atmosphere put great efforts into developing methods for accurately specifying sources of pollution, including natural and anthropogenic biomass burning. These methods must be adapted for use in operational contexts, which impose additional strictures on input data and methods. First, only input data sources available in near real-time are suitable for use in operational applications. Second, operational applications must make use of redundant data sources whenever possible. This is a shift in philosophy: in a research context, the most accurate and complete data set will be used, whereas in an operational context, the system must be designed with maximum redundancy. The goal in an operational context is to produce, to the extent possible, consistent and timely output, given sometimes inconsistent inputs. The Naval Aerosol Analysis and Prediction System (NAAPS), a global operational aerosol analysis and forecast system, recently began incorporating assimilation of satellite-derived aerosol optical depth. Assimilation of satellite AOD retrievals has dramatically improved aerosol analyses and forecasts from this system. The use of aerosol data assimilation also changes the strategy for improving the smoke source function. The absolute magnitude of emissions events can be refined through feedback from the data assimilation system, both in real-time operations and in post-processing analysis of data assimilation results. In terms of the aerosol source functions, the largest gains in model performance are now to be gained by reducing data latency and minimizing missed detections.
In this presentation, recent model development work on the Fire Locating and Monitoring of Burning Emissions (FLAMBE) system that provides smoke aerosol boundary conditions for NAAPS is described, including redundant integration of multiple satellite platforms and development of feedback loops between the data assimilation system and the smoke source function.
Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.
2017-01-01
Superparamagnetic relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early-stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION signal using superconducting quantum interference device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources, without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions under poor-SNR conditions comparable to the SPMR detection sensitivity, on the order of 1000 cells.
We believe such algorithms will help establishing the industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
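The model-order step described above (adding dipoles until the fit is statistically adequate) can be sketched with a reduced chi-square criterion. This is an illustrative reconstruction, not the authors' code; the function names, the threshold of 1.5, and the assumption of six fitted parameters per dipole are all choices made here for the demo.

```python
import numpy as np

def reduced_chi_square(resid, sigma, n_params):
    """chi^2 per degree of freedom. Values near 1 mean the model explains
    the data to within the noise level sigma, so adding further dipoles
    is not statistically justified."""
    dof = len(resid) - n_params
    return np.sum((resid / sigma) ** 2) / dof

def choose_n_dipoles(fit_residuals_by_n, sigma, params_per_dipole=6):
    """Pick the smallest dipole count whose reduced chi^2 drops below a
    threshold; fit_residuals_by_n[k] holds the residual vector of the
    (k+1)-dipole fit."""
    for k, resid in enumerate(fit_residuals_by_n):
        if reduced_chi_square(resid, sigma, (k + 1) * params_per_dipole) < 1.5:
            return k + 1
    return len(fit_residuals_by_n)
```

In use, each candidate model (1 dipole, 2 dipoles, ...) is fitted in turn and only its residual vector is needed by the selection step.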
Modeling the Meteoroid Input Function at Mid-Latitude Using Meteor Observations by the MU Radar
NASA Technical Reports Server (NTRS)
Pifko, Steven; Janches, Diego; Close, Sigrid; Sparks, Jonathan; Nakamura, Takuji; Nesvorny, David
2012-01-01
The Meteoroid Input Function (MIF) model has been developed with the purpose of understanding the temporal and spatial variability of the meteoroid impact in the atmosphere. This model includes the assessment of potential observational biases, namely through the use of empirical measurements to characterize the minimum detectable radar cross-section (RCS) for the particular High Power Large Aperture (HPLA) radar utilized. This RCS sensitivity threshold allows for the characterization of the radar system's ability to detect particles at a given mass and velocity. The MIF has been shown to accurately predict the meteor detection rate of several HPLA radar systems, including the Arecibo Observatory (AO) and the Poker Flat Incoherent Scatter Radar (PFISR), as well as the seasonal and diurnal variations of the meteor flux at various geographic locations. In this paper, the MIF model is used to predict several properties of the meteors observed by the Middle and Upper atmosphere (MU) radar, including the distributions of meteor areal density, speed, and radiant location. This study offers new insight into the accuracy of the MIF, as it addresses the ability of the model to predict meteor observations at middle geographic latitudes and for a radar operating frequency in the low VHF band. Furthermore, the interferometry capability of the MU radar allows for the assessment of the model's ability to capture information about the fundamental input parameters of meteoroid source and speed. This paper demonstrates that the MIF is applicable to a wide range of HPLA radar instruments and increases the confidence of using the MIF as a global model, and it shows that the model accurately considers the speed and sporadic source distributions for the portion of the meteoroid population observable by MU.
Time-Frequency Analysis of the Dispersion of Lamb Modes
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Seale, Michael D.; Smith, Barry T.
1999-01-01
Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied for the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along and perpendicular to the fiber direction. In this case, the signals contained only the lowest order symmetric and antisymmetric modes. A least squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with experimental results.
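A minimal discrete pseudo Wigner-Ville distribution, of the kind used above, can be sketched as follows. This is a generic textbook implementation, not the authors' code; it assumes the input is an analytic (complex) signal, and the window length is an arbitrary choice. Note the factor of two on the frequency axis: the instantaneous autocorrelation x(t+τ)x*(t−τ) spans 2τ samples.

```python
import numpy as np

def pseudo_wvd(x, fs, win_len=127):
    """Pseudo Wigner-Ville distribution of an analytic signal x.

    Returns (freqs, W) with W[k, t] the energy density at freqs[k] and
    sample t. The sliding window (the 'pseudo' part) suppresses the
    cross-terms of the plain Wigner-Ville distribution."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    half = win_len // 2
    win = np.hanning(win_len)
    W = np.zeros((win_len, n))
    for t in range(n):
        # Largest lag usable without running off either end of the signal.
        tmax = min(t, n - 1 - t, half)
        tau = np.arange(-tmax, tmax + 1)
        kern = np.zeros(win_len, dtype=complex)
        kern[half + tau] = win[half + tau] * x[t + tau] * np.conj(x[t - tau])
        # FFT over the lag variable; ifftshift puts lag 0 at index 0.
        W[:, t] = np.fft.fft(np.fft.ifftshift(kern)).real
    # The lag product advances 2 samples per unit tau, hence the halving.
    freqs = np.fft.fftfreq(win_len, d=1.0 / fs) / 2.0
    return freqs, W
```

For a linear chirp the ridge of W tracks the instantaneous frequency, which is the property exploited above to read off group velocity dispersion from a single broadband waveform.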
The Scaling of Broadband Shock-Associated Noise with Increasing Temperature
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2013-01-01
A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.
Robust and Rapid Air-Borne Odor Tracking without Casting
Bhattacharyya, Urvashi
2015-01-01
Casting behavior (zigzagging across an odor stream) is common in air/liquid-borne odor tracking in open fields; however, terrestrial odor localization often involves path selection in a familiar environment. To study this, we trained rats to run toward an odor source in a multi-choice olfactory arena with near-laminar airflow. We find that rather than casting, rats run directly toward an odor port, and if this is incorrect, they serially sample other sources. This behavior is consistent and accurate in the presence of perturbations, such as novel odors, background odor, unilateral nostril stitching, and turbulence. We developed a model that predicts that this run-and-scan tracking of air-borne odors is faster than casting, provided there are a small number of targets at known locations. Thus, the combination of best-guess target selection with fallback serial sampling provides a rapid and robust strategy for finding odor sources in familiar surroundings. PMID:26665165
Sams, James I.; Veloski, Garret; Ackman, T.E.
2003-01-01
Nighttime high-resolution airborne thermal infrared imagery (TIR) data were collected in the predawn hours during Feb 5-8 and March 11-12, 1999, from a helicopter platform for 72.4 km of the Youghiogheny River, from Connellsville to McKeesport, in southwestern Pennsylvania. The TIR data were used to identify sources of mine drainage from abandoned mines that discharge directly into the Youghiogheny River. Image-processing and geographic information systems (GIS) techniques were used to identify 70 sites within the study area as possible mine drainage sources. The combination of GIS datasets and the airborne TIR data provided a fast and accurate method to target the possible sources. After field reconnaissance, it was determined that 24 of the 70 sites were mine drainage. This paper summarizes: the procedures used to process the TIR data and extract potential mine-drainage sites; methods used for verification of the TIR data; a discussion of factors affecting the TIR data; and a brief summary of water quality.
Enhancing source location protection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Chen, Juan; Lin, Zhengkui; Wu, Di; Wang, Bailing
2015-12-01
Wireless sensor networks are widely deployed in the internet of things to monitor valuable objects. Once an object is monitored, the sensor nearest to it, known as the source, periodically informs the base station about the object's information. Clearly, attackers can capture the object by localizing the source, and many protocols have therefore been proposed to secure the source location. In this paper, however, we show that typical source location protection protocols generate phantom locations that are not only near the source but also highly localized. As a result, attackers can easily trace the source from these phantom locations. To address these limitations, we propose a protocol to enhance source location protection (SLE). With phantom locations far away from the source and widely distributed, SLE improves source location anonymity significantly. Theoretical analysis and simulation results show that SLE provides strong source location privacy preservation, and the average safety period increases by nearly one order of magnitude compared with existing work, at low communication cost.
McPherson, Malcolm J.; Bellman, Robert A.
1984-01-01
A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.
McPherson, M.J.; Bellman, R.A.
1982-09-27
A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.
A Neighboring Dwarf Irregular Galaxy Hidden by the Milky Way
NASA Astrophysics Data System (ADS)
Massey, Philip; Henning, P. A.; Kraan-Korteweg, R. C.
2003-11-01
We have obtained VLA and optical follow-up observations of the low-velocity H I source HIZSS 3 discovered by Henning et al. and Rivers in a survey for nearby galaxies hidden by the disk of the Milky Way. Its radio characteristics are consistent with this being a nearby (~1.8 Mpc) low-mass dwarf irregular galaxy (dIm). Our optical imaging failed to reveal a resolved stellar population but did detect an extended Hα emission region. The location of the Hα source is coincident with a partially resolved H I cloud in the 21 cm map. Spectroscopy confirms that the Hα source has a similar radial velocity to that of the H I emission at this location, and thus we have identified an optical counterpart. The Hα emission (100 pc in diameter and with a luminosity of 1.4×10^38 ergs s^-1) is characteristic of a single H II region containing a modest population of OB stars. The galaxy's radial velocity and distance from the solar apex suggest that it is not a Local Group member, although a more accurate distance is needed to be certain. The properties of HIZSS 3 are comparable to those of GR 8, a nearby dIm with a modest amount of current star formation. Further observations are needed to characterize its stellar population, determine the chemical abundances, and obtain a more reliable distance estimate.
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. 
The modified velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
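The core geometric step above, relating differential arrival times to an inter-event offset through horizontal slowness vectors, can be sketched as a small least-squares problem. This is an illustrative plane-wave sketch under assumptions made here (distant stations, horizontal offset only, invented azimuths and slownesses), not the authors' double-difference code.

```python
import numpy as np

def relative_location(az_deg, slowness, dts):
    """Plane-wave relative location: at a distant station k the differential
    arrival time between two co-located events is dt_k = -s_k . dx, where
    s_k is the horizontal slowness vector (s/km) pointing toward the
    station and dx is the offset of event 2 from event 1. Solves for
    dx = (east, north) in km by least squares."""
    az = np.radians(az_deg)
    # Unit vectors toward each station (east, north), scaled by slowness.
    G = np.column_stack([np.sin(az), np.cos(az)]) * np.asarray(slowness)[:, None]
    dx, *_ = np.linalg.lstsq(-G, np.asarray(dts), rcond=None)
    return dx
```

Because G is linear in the slownesses, inverting the same differential times with slownesses overestimated by 25 per cent shrinks the recovered offsets by exactly a factor 1/1.25, which is the sense of the regional/teleseismic distance discrepancy that the slowness scaling factors are introduced to calibrate.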
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Humphreys, William M.
2006-01-01
Current processing of acoustic array data is burdened with considerable uncertainty. This study reports an original methodology that serves to demystify array results, reduce misinterpretation, and accurately quantify position and strength of acoustic sources. Traditional array results represent noise sources that are convolved with array beamform response functions, which depend on array geometry, size (with respect to source position and distributions), and frequency. The Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) method removes beamforming characteristics from output presentations. A unique linear system of equations accounts for reciprocal influence at different locations over the array survey region. It makes no assumption beyond the traditional processing assumption of statistically independent noise sources. The full rank equations are solved with a new robust iterative method. DAMAS is quantitatively validated using archival data from a variety of prior high-lift airframe component noise studies, including flap edge/cove, trailing edge, leading edge, slat, and calibration sources. Presentations are explicit and straightforward, as the noise radiated from a region of interest is determined by simply summing the mean-squared values over that region. DAMAS can fully replace existing array processing and presentation methodology in most applications. It appears to dramatically increase the value of arrays to the field of experimental acoustics.
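The iterative solution of the DAMAS linear system is commonly described as a Gauss-Seidel sweep with a non-negativity constraint. A minimal 1-D sketch follows; the Gaussian point-spread matrix here is invented for the demo and is not an actual array response.

```python
import numpy as np

def damas(Y, A, n_sweeps=500):
    """Solve Y = A @ X for source strengths X >= 0 by Gauss-Seidel sweeps
    with non-negativity enforced after each update (the DAMAS scheme).
    A[i, j] is the beamformer's point-spread response at grid point i to
    a unit-strength source at grid point j; Y is the beamform map."""
    n = len(Y)
    X = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            # Residual at i with the current estimate of X[i] removed.
            r = Y[i] - A[i] @ X + A[i, i] * X[i]
            X[i] = max(r / A[i, i], 0.0)
    return X
```

Summing the recovered X over a region of interest then gives the mean-squared level radiated from that region, as described above.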
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of a pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model comprise the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data.
Erroneous data was generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking of ANN model with proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence complexity of optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that mean values as predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
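The inverse problem described above can be illustrated in its simplest form: a 1-D analytic plume and a brute-force least-squares search over candidate source locations and release times. This toy sketch stands in for the paper's linked ANN-optimization scheme; the velocity, dispersion coefficient, and source mass are assumed known here, and the lag-time ANN is omitted.

```python
import numpy as np

def conc(x, t, x0, t0, v=1.0, D=0.5, M=1.0):
    """1-D instantaneous point-source advection-dispersion solution:
    concentration at position x and time t for a release of mass M at
    location x0 and time t0 (velocity v, dispersion D)."""
    tt = t - t0
    if tt <= 0:
        return 0.0
    return M / np.sqrt(4 * np.pi * D * tt) * np.exp(-(x - x0 - v * tt)**2 / (4 * D * tt))

def identify_source(x_obs, t_obs, c_obs, x0_grid, t0_grid):
    """Grid search minimizing the sum of squared differences between
    observed and simulated concentrations; returns (x0, t0)."""
    best, best_err = None, np.inf
    for x0 in x0_grid:
        for t0 in t0_grid:
            c = np.array([conc(x, t, x0, t0) for x, t in zip(x_obs, t_obs)])
            err = np.sum((c - c_obs)**2)
            if err < best_err:
                best, best_err = (x0, t0), err
    return best
```

With noise-free data and the true parameters on the grid, the objective is exactly zero at the true source, which is the idealized version of the error-free results reported above.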
NASA Astrophysics Data System (ADS)
Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian
2011-11-01
Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for proper design that will minimize this undesired phenomenon. Cavitation onset is more accurately determined acoustically than visually. However, if other parts of the model begin to cavitate before the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims at developing an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located with the use of acoustic data collected with hydrophones and analyzed using signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with a high-speed camera. Once refined, the technique will be tested in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).
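The basic building block of such a localization scheme, estimating a time difference of arrival from the cross-correlation of two hydrophone records, can be sketched as below. This is a generic free-field, single-path illustration (the reverberation that makes water tunnels hard is exactly what it ignores), with an idealized 1-D two-sensor geometry.

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Arrival-time difference t_A - t_B in seconds from the peak of the
    cross-correlation; positive when the event reaches sensor A later."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = corr.argmax() - (len(sig_b) - 1)
    return lag / fs

def locate_between(dt, L, c):
    """Source position x on the line between sensor A at x=0 and sensor B
    at x=L (m), from dt = t_A - t_B and sound speed c (m/s)."""
    return (L + c * dt) / 2.0
```

With more than two hydrophones, the same pairwise delays feed a least-squares position solve; cross-correlating against a known laser-induced bubble location, as in the static-tank tests above, calibrates the scheme.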
Collaborative mining of graph patterns from multiple sources
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Colonna-Romanoa, John
2016-05-01
Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in the intelligence community, because text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use-cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and the availability of accurate data segmentation. In this paper we present a model to learn graph patterns from multiple relational data sources, when each source might have only a fragment (or subgraph) of the knowledge that needs to be discovered, and segmentation of data into training or testing instances is not available. Our model is based on distributed collaborative graph learning, and is effective in situations when the data is kept locally and cannot be moved to a centralized location. Our experiments show that the proposed collaborative learning achieves learning quality better than aggregated centralized graph learning, and has learning time comparable to traditional distributed learning in which knowledge of data segmentation is needed.
A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout.
Shao, Yiping; Yao, Rutao; Ma, Tianyu
2008-12-01
The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small-bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout, which uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimates DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish an accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition shifts due to drift of sensor gain, bias variations, degraded optical coupling, etc. However, the current calibration method that uses coincident events to locate interaction positions inside a single scintillator crystal has severe drawbacks, such as a complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable for calibrating multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array, without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions, without knowledge or assumption of detector responses. Simulation and experiment have been performed to validate the new method, and the results show that the new method, with a simple setup and one single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for PET detectors based on the design of dual-ended-scintillator readout.
In addition, the new method can be generally applied to calibrating other types of detectors that use a similar dual-ended readout to acquire the radiation interaction position.
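The flood-calibration idea, that uniform interactions in depth make the empirical CDF of the signal ratio itself the ratio-to-DOI map, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the monotone depth-to-ratio curve used in the demo is invented, and a monotonically increasing ratio is assumed.

```python
import numpy as np

def calibrate_doi(ratios, crystal_len):
    """Build a ratio-to-depth look-up from flood data alone. If
    interactions are uniform in depth, the quantile q of the measured
    ratio distribution corresponds to depth q * crystal_len (assuming
    the ratio increases monotonically with depth)."""
    r = np.sort(np.asarray(ratios))
    depth = (np.arange(r.size) + 0.5) / r.size * crystal_len
    return r, depth

def doi(r_meas, r_cal, d_cal):
    """Interpolate measured ratios onto the calibrated depth curve."""
    return np.interp(r_meas, r_cal, d_cal)
```

No knowledge of the detector response enters: whatever nonlinear shape the ratio curve has, sorting the flood events recovers it, which is the point of the single-measurement calibration described above.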
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data separately, the algorithm not only exploits the advantages of frequency-domain analysis to determine the most effective data group more accurately, but also meets the requirement of real-time monitoring. Supplemented with a short-time energy calculation on the grouped signals, the most effective data group can be picked out quickly. Finally, the accurate location of the climbing point is obtained through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out interference from non-climbing behavior.
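The short-time average zero-crossing rate used to flag the effective data group can be sketched in a few lines. This is a generic implementation under assumptions made here; the frame length and hop are arbitrary demo choices, and the subsequent cross-correlation step is the same delay-peak technique used elsewhere in this collection.

```python
import numpy as np

def short_time_zcr(x, frame_len, hop):
    """Short-time average zero-crossing rate: the fraction of
    adjacent-sample sign changes in each frame. High-frequency content
    (e.g. a climbing disturbance) raises the rate; slow drift does not."""
    s = np.signbit(x)
    cross = (s[1:] != s[:-1]).astype(float)
    n_frames = 1 + (cross.size - frame_len) // hop
    return np.array([cross[k * hop : k * hop + frame_len].mean()
                     for k in range(n_frames)])
```

Frames whose rate jumps above the quiescent baseline mark the group worth passing on to the cross-correlation locator.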
Ship localization in Santa Barbara Channel using machine learning classifiers.
Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter
2017-11-01
Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.
Balance Velocities of the Greenland Ice Sheet
NASA Technical Reports Server (NTRS)
Joughin, Ian; Fahnestock, Mark; Ekholm, Simon; Kwok, Ron
1997-01-01
We present a map of balance velocities for the Greenland ice sheet. The resolution of the underlying DEM, which was derived primarily from radar altimetry data, yields far greater detail than earlier balance velocity estimates for Greenland. The velocity contours reveal in striking detail the location of an ice stream in northeastern Greenland, which was only recently discovered using satellite imagery. Enhanced flow associated with all of the major outlets is clearly visible, although small errors in the source data result in less accurate estimates of the absolute flow speeds. Nevertheless, the balance map is useful for ice-sheet modelling, mass balance studies, and field planning.
Xu, Yan; Zhu, Quing
2015-01-01
A new two-step estimation and imaging method is developed for a two-layer breast tissue structure consisting of a breast tissue layer and a chest wall underneath. First, a smaller probe with shorter distance source-detector pairs was used to collect the reflected light mainly from the breast tissue layer. Then, a larger probe with 9×14 source-detector pairs and a centrally located ultrasound transducer was used to collect reflected light from the two-layer tissue structure. The data collected from the smaller probe were used to estimate breast tissue optical properties. With more accurate estimation of the average breast tissue properties, the second layer properties can be assessed from data obtained from the larger probe. Using this approach, the unknown variables have been reduced from four to two and the estimated bulk tissue optical properties are more accurate and robust. In addition, a two-step reconstruction using a genetic algorithm and conjugate gradient method is implemented to simultaneously reconstruct the absorption and reduced scattering maps of targets inside a two-layer tissue structure. Simulations and phantom experiments have been performed to validate the new reconstruction method, and a clinical example is given to demonstrate the feasibility of this approach. PMID:26046722
Pelvic orientation for total hip arthroplasty in lateral decubitus: can it be accurately measured?
Sykes, Alice M; Hill, Janet C; Orr, John F; Gill, Harinderjit S; Salazar, Jose J; Humphreys, Lee D; Beverland, David E
2016-05-16
During total hip arthroplasty (THA), accurately predicting acetabular cup orientation remains a key challenge, in great part because of uncertainty about pelvic orientation. This pilot study aimed to develop and validate a technique to measure pelvic orientation; to establish its accuracy in locating anatomical landmarks; and subsequently to investigate whether limb movement during a simulated surgical procedure alters pelvic orientation. The developed technique measured 3-D orientation of an isolated Sawbone pelvis and was then implemented to measure pelvic orientation in lateral decubitus with post-THA patients (n = 20) using a motion capture system. Orientation of the isolated Sawbone pelvis was accurately measured, demonstrated by high correlations with angular data from a coordinate measurement machine; R-squared values were close to 1 for all pelvic axes. When applied to volunteer subjects, the largest movements occurred about the longitudinal pelvic axis: internal and external pelvic rotation. Rotations about the anteroposterior axis, which directly affect inclination angles, showed >75% of participants had movement within ±5° of neutral, 0°. The technique accurately measured orientation of the isolated bony pelvis. This was not the case in a simulated theatre environment, where soft tissue landmarks were difficult to palpate repeatedly. These findings have direct clinical relevance: landmark registration in lateral decubitus is a potential source of error, contributing here to large ranges in measured movement. Surgeons must be aware that present techniques using bony landmarks to reference pelvic orientation for cup implantation, both computer-based and mechanical, may not be sufficiently accurate.
A novel endoscopic fluorescent band ligation method for tumor localization.
Hyun, Jong Hee; Kim, Seok-Ki; Kim, Kwang Gi; Kim, Hong Rae; Lee, Hyun Min; Park, Sunup; Kim, Sung Chun; Choi, Yongdoo; Sohn, Dae Kyung
2016-10-01
Accurate tumor localization is essential for minimally invasive surgery. This study describes the development of a novel endoscopic fluorescent band ligation method for the rapid and accurate identification of tumor sites during surgery. The method utilized a fluorescent rubber band, made of indocyanine green (ICG) and a liquid rubber solution mixture, as well as a near-infrared fluorescence laparoscopic system with a dual light source using a high-powered light-emitting diode (LED) and a 785-nm laser diode. The fluorescent rubber bands were endoscopically placed on the mucosae of porcine stomachs and colons. During subsequent conventional laparoscopic stomach and colon surgery, the fluorescent bands were assayed using the near-infrared fluorescence laparoscopy system. The locations of the fluorescent clips were clearly identified on the fluorescence images in real time. The system was able to distinguish the two or three bands marked on the mucosal surfaces of the stomach and colon. Resection margins around the fluorescent bands were sufficient in the resected specimens obtained during stomach and colon surgery. These novel endoscopic fluorescent bands could be rapidly and accurately localized during stomach and colon surgery. Use of these bands may make possible the excision of exact target sites during minimally invasive gastrointestinal surgery.
Wave-equation migration velocity inversion using passive seismic sources
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, wastewater disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that are often not available for the low-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial estimate of source location or origin time nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems of all scales, from geologic core samples to global seismology.
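The constraint this method exploits, that P- and S-waves share an origin time and location, is the same one behind the classic S-minus-P distance estimate. A minimal sketch with illustrative straight-ray velocities (not values from the paper):

```python
def sp_distance(t_p, t_s, vp=6.0, vs=3.5):
    """Distance to an event from the S-P arrival-time difference (km).

    Because P and S leave the source at the same instant, the lag
    t_s - t_p grows linearly with distance: d = (t_s - t_p) / (1/vs - 1/vp).
    vp, vs are assumed homogeneous velocities in km/s.
    """
    return (t_s - t_p) / (1.0 / vs - 1.0 / vp)

d_km = sp_distance(t_p=2.0, t_s=3.5)  # ~12.6 km for vp=6, vs=3.5 km/s
```

The wave-equation approach generalizes this scalar relation to extended image volumes, which is why no arrival picking is needed.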
Measuring and monitoring KIPT Neutron Source Facility Reactivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry; Zhong, Zhaopeng
2015-08-01
Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on developing and constructing a neutron source facility at Kharkov, Ukraine. The facility consists of an accelerator-driven subcritical system. The accelerator has a 100 kW electron beam using 100 MeV electrons. The subcritical assembly has keff less than 0.98. To ensure the safe operation of this neutron source facility, the reactivity of the subcritical core has to be accurately determined and continuously monitored. A technique that combines the area-ratio method and the flux-to-current ratio method is proposed to determine the reactivity of the KIPT subcritical assembly under various conditions. In particular, the area-ratio method can determine the absolute reactivity of the subcritical assembly in units of dollars by performing pulsed-neutron experiments. It provides reference reactivities for the flux-to-current ratio method to track and monitor reactivity deviations from the reference state while the facility is in other operation modes. Monte Carlo simulations are performed to simulate both methods using the numerical model of the KIPT subcritical assembly. It is found that the reactivities obtained from both the area-ratio method and the flux-to-current ratio method depend spatially on the neutron detector locations and types. Numerical simulations also suggest optimal neutron detector locations to minimize the spatial effects in the flux-to-current ratio method. The spatial correction factors are calculated using Monte Carlo methods for both measuring methods at the selected neutron detector locations. Monte Carlo simulations are also performed to verify the accuracy of the flux-to-current ratio method in monitoring the reactivity swing during a fuel burnup cycle.
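As a rough numerical illustration of the area-ratio (Sjöstrand) idea, not the KIPT implementation: the reactivity in dollars is minus the ratio of the prompt-decay area to the delayed-neutron area of the pulsed-neutron time histogram. The exponential prompt decay and flat delayed background below are assumed toy values:

```python
import numpy as np

def area_ratio_reactivity(counts, dt, delayed_level):
    """Area-ratio method (sketch): rho($) = -A_prompt / A_delayed.

    counts: time histogram following one neutron pulse
    dt: bin width in seconds
    delayed_level: assumed-constant delayed-neutron background (counts/bin)
    """
    total_area = counts.sum() * dt
    delayed_area = delayed_level * counts.size * dt
    prompt_area = total_area - delayed_area
    return -prompt_area / delayed_area

# synthetic pulse: prompt exponential decay on a flat delayed background
dt = 1e-4
t = np.arange(1000) * dt
counts = 5000.0 * np.exp(-t / 5e-3) + 20.0
rho_dollars = area_ratio_reactivity(counts, dt, delayed_level=20.0)
```

In practice the delayed background is estimated from the tail of the histogram, and the spatial correction factors discussed above account for where the detector sits.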
NASA Astrophysics Data System (ADS)
Darrh, A.; Downs, C. M.; Poppeliers, C.
2017-12-01
Born Scattering Inversion (BSI) of electromagnetic (EM) data is a geophysical imaging methodology for mapping weak conductivity, permeability, and/or permittivity contrasts in the subsurface. The high computational cost of full waveform inversion is reduced by adopting the First Born Approximation for scattered EM fields. This linearizes the inverse problem in terms of Born scattering amplitudes for a set of effective EM body sources within a 3D imaging volume. Estimation of scatterer amplitudes is subsequently achieved by solving the normal equations. Our present BSI numerical experiments entail Fourier transforming real-valued synthetic EM data to the frequency domain, and minimizing the L2 residual between complex-valued observed and predicted data. We are testing the ability of BSI to resolve simple scattering models. For our initial experiments, synthetic data are acquired by three-component (3C) electric-field receivers distributed on a plane above a single point electric dipole within a homogeneous and isotropic wholespace. To suppress artifacts, candidate Born scatterer locations are confined to a volume beneath the receiver array. We also explore two different numerical linear algebra algorithms for solving the normal equations: Damped Least Squares (DLS) and Non-Negative Least Squares (NNLS). Results from NNLS accurately recover the source location only for a large, dense 3C receiver array, but fail when the array is decimated or restricted to horizontal-component data. Using all receiver stations and all components per station, NNLS results are relatively insensitive to a sub-sampled frequency spectrum, suggesting that coarse frequency-domain sampling may be adequate for good target resolution. Results from DLS are insensitive to diminishing array density, but contain spatially oscillatory structure. DLS-generated images are consistently centered at the known point-source location, despite an abundance of surrounding structure.
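The two normal-equation solvers compared above can be prototyped on a toy problem; `scipy.optimize.nnls` and a damped solve stand in for the full BSI machinery, and the random sensitivity matrix is purely illustrative:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
# toy linearized forward operator: 40 data samples x 10 candidate cells
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[3] = 2.0                     # a single point scatterer in cell 3
b = A @ x_true

# Damped Least Squares: solve (A^T A + mu I) x = A^T b
mu = 1e-2
x_dls = np.linalg.solve(A.T @ A + mu * np.eye(10), A.T @ b)

# Non-Negative Least Squares: argmin ||Ax - b|| subject to x >= 0
x_nnls, _ = nnls(A, b)
```

With noise-free, well-posed data both solvers put the peak amplitude in the correct cell; the contrast reported in the abstract only emerges as the array is decimated or restricted in components.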
Nagata, Jun; Fukunaga, Yosuke; Akiyoshi, Takashi; Konishi, Tsuyoshi; Fujimoto, Yoshiya; Nagayama, Satoshi; Yamamoto, Noriko; Ueno, Masashi
2016-02-01
Accurate identification of the location of colorectal lesions is crucial during laparoscopic surgery. Endoscopic marking has been used as an effective preoperative marker for tumor identification. We investigated the feasibility and safety of an imaging method using near-infrared, light-emitting, diode-activated indocyanine green fluorescence in colorectal laparoscopic surgery. This was a single-institution, prospective study. This study was conducted in a tertiary referral hospital. We enrolled 24 patients who underwent laparoscopic surgery. Indocyanine green and India ink were injected into the same patients undergoing preoperative colonoscopy for colon cancer. During subsequent laparoscopic resection of colorectal tumors, the colon was first observed with white light. Then, indocyanine green was activated with a light-emitting diode at 760 nm as the light source. Near-infrared-induced fluorescence showed tumor location clearly and accurately in all 24 of the patients. All of the patients who underwent laparoscopic surgery after marking had positive indocyanine green staining at the time of surgery. Perioperative complications attributed to dye use were not observed. This study is limited by the cost of indocyanine green detection, the timing of the colonoscopy and tattooing in relation to the operation and identification with indocyanine green, and the small size of the series. These data suggest that our novel method for colonic marking with fluorescence imaging of near-infrared, light-emitting, diode-activated indocyanine green is feasible and safe. This method is useful, has no adverse effects, and can be used for perioperative identification of tumor location. Near-infrared, light-emitting, diode-activated indocyanine green has potential use as a colonic marking agent.
This is a provisional dataset that contains point locations for all grants given out by the US EPA going back to the 1960s through today. There are many limitations to the data, so it is advised that these metadata be read carefully before use. Although the records for these grant locations are drawn directly from the official EPA grants repository (IGMS, the Integrated Grants Management System), it is important to know that the IGMS was designed for purposes that did not include accurately portraying the grant's place of performance on a map. Instead, the IGMS grant recipient's mailing address is the primary source for grant locations. Particularly for statewide grants that are administered via State and Regional headquarters, the grant location data should not be interpreted as the grant's place of performance. In 2012, a policy was established to start to collect the place of performance as a pilot for newly awarded grants deemed "community-based" in nature, and for these the grant location depicted in this database will be a more reliable indicator of the actual place of performance. As for the locational accuracy of these points, there is no programmatic certification process; however, they are being entered by the Grant Project Officers, who are most familiar with the details of the grants, apart from the grantees themselves. Limitations notwithstanding, this is a first-of-breed attempt to map all of the Agency's grants, using the best internal geocoding algorithms available.
US EPA EJ Grants/IGD: PERF_EJ_GRANTS_INT_MV
This is a provisional dataset that contains point locations for all Environmental Justice (EJ) grants given out by the US EPA. There are many limitations to the data, so it is advised that these metadata be read carefully before use. Although the records for these grant locations are drawn directly from the official EPA grants repository (IGMS, the Integrated Grants Management System), it is important to know that the IGMS was designed for purposes that did not include accurately portraying the grant's place of performance on a map. Instead, the IGMS grant recipient's mailing address is the primary source for grant locations. Particularly for statewide grants that are administered via State and Regional headquarters, the grant location data should not be interpreted as the grant's place of performance. In 2012, a policy was established to start to collect the place of performance as a pilot for newly awarded grants deemed "community-based" in nature, and for these the grant location depicted in this database will be a more reliable indicator of the actual place of performance. As for the locational accuracy of these points, there is no programmatic certification process; however, they are being entered by the Grant Project Officers, who are most familiar with the details of the grants, apart from the grantees themselves. Limitations notwithstanding, this is a first-of-breed attempt to map all of the Agency's grants, using the best internal geocoding algorithms available.
Guo, Lili; Qi, Junwei; Xue, Wei
2018-01-01
This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal target or an insulator target in the underwater environment using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by means of a matrix equation. In this method, an electric dipole source, part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data, and there is no need to obtain the field component in each direction as in the conventional fields-based localization method, so the approach can be easily implemented in practical engineering applications. A simulation model and a physical experiment are constructed. The simulation and experimental results show accurate positioning performance, verifying the effectiveness of the proposed localization method for underwater target location.
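Setting aside the mixed-polarization and BEM specifics, the subspace step at the heart of any MUSIC variant can be sketched generically. The uniform linear array and single far-field narrowband source below are simplifying assumptions (the paper uses a circular antenna and near-field induction data):

```python
import numpy as np

def music_spectrum(R, n_sources, angles_rad, d=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array (sketch).

    R: sample covariance (M x M, Hermitian); d: element spacing in wavelengths.
    """
    M = R.shape[0]
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : M - n_sources]          # noise-subspace eigenvectors
    p = []
    for th in angles_rad:
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# one narrowband source at 20 degrees, 8 elements, 200 noisy snapshots
rng = np.random.default_rng(1)
M, N, theta = 8, 200, np.deg2rad(20.0)
a_true = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a_true, s) + noise
R = X @ X.conj().T / N
grid = np.deg2rad(np.linspace(-90.0, 90.0, 361))
est_deg = np.rad2deg(grid[int(np.argmax(music_spectrum(R, 1, grid)))])
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace; MP-MUSIC applies the same idea to electrode-potential data.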
NASA Astrophysics Data System (ADS)
Chtcheprov, Pavel; Inscoe, Christina; Burk, Laurel; Ger, Rachel; Yuan, Hong; Lu, Jianping; Chang, Sha; Zhou, Otto
2014-03-01
Microbeam radiation therapy (MRT) uses an array of high-dose, narrow (~100 μm) beams separated by a fraction of a millimeter to treat various radio-resistant, deep-seated tumors. MRT has been shown to spare normal tissue up to 1000 Gy of entrance dose while still being highly tumoricidal. Current methods of tumor localization for our MRT treatments require MRI and X-ray imaging, with subject motion and image registration that contribute to the measurement error. The purpose of this study is to develop a novel form of imaging to quickly and accurately assist in high-resolution target positioning for MRT treatments using X-ray fluorescence (XRF). The key to this method is using the microbeam to both treat and image. A high-Z contrast medium is injected into the phantom or blood pool of the subject prior to imaging. Using a collimated spectrum analyzer, the region of interest is scanned through the MRT beam and the fluorescence signal is recorded for each slice. The signal can be processed to show vascular differences in the tissue and isolate tumor regions. Because the radiation therapy source is also the imaging source, repositioning and registration errors are eliminated. A phantom study showed that a spatial resolution of a fraction of the microbeam width can be achieved by precision translation of the mouse stage. Preliminary results from an animal study showed accurate iodine perfusion, confirmed by CT. The proposed image guidance method, using XRF to locate and ablate tumors, can be used as a fast and accurate MRT treatment planning system.
Evaluation of a head-repositioner and Z-plate system for improved accuracy of dose delivery.
Charney, Sarah C; Lutz, Wendell R; Klein, Mary K; Jones, Pamela D
2009-01-01
Radiation therapy requires accurate dose delivery to targets often identifiable only on computed tomography (CT) images. Translation between the isocenter localized on CT and laser setup for radiation treatment, and interfractional head repositioning are frequent sources of positioning error. The objective was to design a simple, accurate apparatus to eliminate these sources of error. System accuracy was confirmed with phantom and in vivo measurements. A head repositioner that fixates the maxilla via dental mold with fiducial marker Z-plates attached was fabricated to facilitate the connection between the isocenter on CT and laser treatment setup. A phantom study targeting steel balls randomly located within the head repositioner was performed. The center of each ball was marked on a transverse CT slice on which six points of the Z-plate were also visible. Based on the relative position of the six Z-plate points and the ball center, the laser setup position on each Z-plate and a top plate was calculated. Based on these setup marks, orthogonal port films, directed toward each target, were evaluated for accuracy without regard to visual setup. A similar procedure was followed to confirm accuracy of in vivo treatment setups in four dogs using implanted gold seeds. Sequential port films of three dogs were made to confirm interfractional accuracy. Phantom and in vivo measurements confirmed accuracy of 2 mm between isocenter on CT and the center of the treatment dose distribution. Port films confirmed similar accuracy for interfractional treatments. The system reliably connects CT target localization to accurate initial and interfractional radiation treatment setup.
Assessment of radio frequency exposures in schools, homes, and public places in Belgium.
Verloock, Leen; Joseph, Wout; Goeminne, Francis; Martens, Luc; Verlaek, Mart; Constandt, Kim
2014-12-01
Characterization of exposure from emerging radio frequency (RF) technologies in areas where children are present is important. Exposure to RF electromagnetic fields (EMF) was assessed in three "sensitive" microenvironments; namely, schools, homes, and public places located in urban environments and compared to exposure in offices. In situ assessment was conducted by performing spatial broadband and accurate narrowband measurements, providing 6-min averaged electric-field strengths. A distinction between internal (transmitters that are located indoors) and external (outdoor sources from broadcasting and telecommunication) sources was made. Ninety-four percent of the broadband measurements were below 1 V m(-1). The average and maximal total electric-field values in schools, homes, and public places were 0.2 and 3.2 V m(-1) (WiFi), 0.1 and 1.1 V m(-1) (telecommunication), and 0.6 and 2.4 V m(-1) (telecommunication), respectively, while for offices, average and maximal exposure were 0.9 and 3.3 V m(-1) (telecommunication), satisfying the ICNIRP reference levels. In the schools considered, the highest maximal and average field values were due to internal signals (WiFi). In the homes, public places, and offices considered, the highest maximal and average field values originated from telecommunication signals. Lowest exposures were obtained in homes. Internal sources contributed on average more indoors (31.2%) than outdoors (2.3%), while the average contributions of external sources (broadcast and telecommunication sources) were higher outdoors (97.7%) than at indoor positions (68.8%). FM, GSM, and UMTS dominate the total downlink exposure in the outdoor measurements. In indoor measurements, FM, GSM, and WiFi dominate the total exposure. The average contribution of the emerging technology LTE was only 0.6%.
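When per-service narrowband results such as these are combined into a total field, incoherent contributions are conventionally assumed to add in power (root-sum-of-squares) before comparison against ICNIRP reference levels. A one-line sketch with made-up field values:

```python
import math

def total_field(fields_v_per_m):
    """Root-sum-of-squares combination of incoherent E-field contributions."""
    return math.sqrt(sum(e * e for e in fields_v_per_m))

# hypothetical WiFi, GSM and FM contributions at one measurement spot (V/m)
e_total = total_field([0.2, 0.1, 0.6])  # ~0.64 V/m
```

This is why a single dominant service (WiFi indoors, FM/GSM outdoors) largely sets the total exposure.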
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently available instrumentation.
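The inverse problem described here can be given a minimal grid-search flavor: pick the candidate cell whose predicted, attenuation-corrected count pattern best matches the passive measurements. This sketch is not the authors' algorithm; the `atten` matrix plays the role of transmission factors read off a radiograph, and a simple strength/r² count model is assumed:

```python
import numpy as np

def locate_source(detectors, counts, atten, grid_pts):
    """Grid search (sketch): for each candidate cell, fit a least-squares
    source strength and keep the cell with the smallest misfit.

    atten[i, k]: hypothetical transmission factor from cell k to detector i;
    counts are modeled as strength * atten / r^2.
    """
    best_k, best_s, best_err = -1, 0.0, np.inf
    for k, p in enumerate(grid_pts):
        r2 = ((detectors - p) ** 2).sum(axis=1)
        shape = atten[:, k] / r2
        s = counts @ shape / (shape @ shape)   # least-squares strength
        err = ((counts - s * shape) ** 2).sum()
        if err < best_err:
            best_k, best_s, best_err = k, s, err
    return best_k, best_s

# four detectors around a row of nine candidate cells, no shielding
dets = np.array([[0.0, 2.0], [4.0, 2.0], [0.0, -2.0], [4.0, -2.0]])
cells = np.column_stack([np.linspace(0.0, 4.0, 9), np.zeros(9)])
atten = np.ones((4, 9))
true_k, strength = 6, 50.0
r2_true = ((dets - cells[true_k]) ** 2).sum(axis=1)
counts = strength * atten[:, true_k] / r2_true
k_hat, s_hat = locate_source(dets, counts, atten, cells)
```

Replacing the all-ones `atten` with radiograph-derived transmission factors is what lets the fusion approach see through shielding that defeats triangulation alone.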
Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.
Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd
2018-05-06
The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest that children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution. Existing knowledge: Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information.
New knowledge added by this article: Preschool children track sources' accuracy across communication mediums, from verbal to text-based modalities and vice versa. Children link the reliability of text-based sources to the reliability of the author. © 2018 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien
2011-06-01
A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. 
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).
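The chemotaxis-inspired search can be illustrated with a toy run-and-tumble agent climbing a signal-intensity field. The 1/r field and all parameters below are assumptions for illustration, not the fielded fusion algorithm:

```python
import numpy as np

def chemotaxis_locate(field, start, step=0.5, iters=800, seed=0):
    """Run-and-tumble search (sketch): keep the current heading while the
    measured intensity improves; tumble to a random heading otherwise."""
    rng = np.random.default_rng(seed)
    pos = np.array(start, dtype=float)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    best = field(pos)
    for _ in range(iters):
        cand = pos + step * np.array([np.cos(heading), np.sin(heading)])
        val = field(cand)
        if val > best:
            pos, best = cand, val                    # run: signal improving
        else:
            heading = rng.uniform(0.0, 2.0 * np.pi)  # tumble
    return pos

# toy acoustic event at (30, -10); intensity falls off with distance
src = np.array([30.0, -10.0])
intensity = lambda p: 1.0 / (np.linalg.norm(p - src) + 1e-6)
est = chemotaxis_locate(intensity, start=[0.0, 0.0])
```

In the fielded system the "intensity" comes from fusing sparse UGS measurements rather than a known analytic field, and the UAV flight path supplies the sampling.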
FRB as products of accretion disc funnels
NASA Astrophysics Data System (ADS)
Katz, J. I.
2017-10-01
The repeating FRB 121102, the only fast radio burst (FRB) with an accurately determined position, is associated with a variable persistent radio source. I suggest that an FRB originates in the accretion disc funnels of black holes. Narrowly collimated radiation is emitted along the wandering instantaneous angular momentum axis of accreted matter. This emission is observed as a fast radio burst when it sweeps across the direction to the observer. In this model, in contrast to neutron star (pulsar, RRAT or SGR) models, repeating FRBs do not have underlying periodicity and are co-located with persistent radio sources resulting from their off-axis emission. The model is analogous, on smaller spatial, lower mass and accretion rate and shorter temporal scales, to an active galactic nucleus (AGN), with FRB corresponding to blazars in which the jets point towards us. The small inferred black hole masses imply that FRBs are not associated with galactic nuclei.
NASA Technical Reports Server (NTRS)
Lee, Harry
1994-01-01
A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. This fault locator system consists of traveling-wave detectors located at key substations which detect and time-tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element to the success of this system. This fault locator system derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
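The arithmetic behind a two-ended traveling-wave locator is compact enough to sketch; GPS time-tagging is what makes the two arrival times comparable. The propagation speed below (roughly 97% of the speed of light) is a typical assumed value, not B.C. Hydro's calibration:

```python
def fault_location_km(t_a, t_b, line_len_km, v_km_s=2.9e5):
    """Two-ended traveling-wave fault location (sketch).

    t_a, t_b: GPS-synchronized arrival times (s) of the wavefront at the
    substations terminating the line. The wave reaches A after d/v and B
    after (L - d)/v, so d = (L + v*(t_a - t_b)) / 2.
    """
    return 0.5 * (line_len_km + v_km_s * (t_a - t_b))

# fault 40 km from substation A on a 100 km line
v = 2.9e5  # assumed wave speed in km/s
d_km = fault_location_km(40.0 / v, 60.0 / v, 100.0, v)  # ~40.0 km
```

A time-tagging error of a microsecond at each end shifts the estimate by on the order of a few hundred meters, which is consistent with the ±300 m design objective quoted above.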
NASA Astrophysics Data System (ADS)
Huang, W.; Jiang, J.; Zha, Z.; Zhang, H.; Wang, C.; Zhang, J.
2014-04-01
Geospatial data resources are the foundation of the construction of a geo portal designed to provide online geoinformation services for government, enterprise, and the public. It is vital to keep geospatial data fresh, accurate, and comprehensive in order to satisfy the requirements of applications such as geographic location, route navigation, and geo search. One of the major problems we face is data acquisition; for us, integrating multi-source geospatial data is the main means of data acquisition. This paper introduces a practical approach to integrating multi-source geospatial data with different data models, structures, and formats, which provided the construction of the National Geospatial Information Service Platform of China (NGISP) with effective technical support. NGISP is China's official geo portal, providing online geoinformation services based on the internet, the e-government network, and the classified network. Within the NGISP architecture there are three kinds of nodes: national, provincial, and municipal. The geospatial data therefore come from these nodes, and the different datasets are heterogeneous. Based on the analysis of the heterogeneous datasets, we first define the basic principles of data fusion, covering the following aspects: 1. location precision; 2. geometric representation; 3. up-to-date state; 4. attribute values; and 5. spatial relationships. We then develop the technical procedure and propose a method, based on these principles, for processing different categories of features such as roads, railways, boundaries, rivers, settlements, and buildings. A case study in Jiangsu province demonstrated the applicability of the principles, procedure, and method of multi-source geospatial data integration.
3D Monte Carlo model with direct photon flux recording for optimal optogenetic light delivery
NASA Astrophysics Data System (ADS)
Shin, Younghoon; Kim, Dongmok; Lee, Jihoon; Kwon, Hyuk-Sang
2017-02-01
Configuring the light power emitted from the optical fiber is an essential first step in planning in-vivo optogenetic experiments. However, diffusion theory, which was adopted for optogenetic research, precluded accurate estimates of light intensity in the semi-diffusive region where the primary locus of the stimulation is located. We present a 3D Monte Carlo model that provides an accurate and direct solution for light distribution in this region. Our method directly records the photon trajectory in the separate volumetric grid planes for the near-source recording efficiency gain, and it incorporates a 3D brain mesh to support both homogeneous and heterogeneous brain tissue. We investigated the light emitted from optical fibers in brain tissue in 3D, and we applied the results to design optimal light delivery parameters for precise optogenetic manipulation by considering the fiber output power, wavelength, fiber-to-target distance, and the area of neural tissue activation.
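The direct-recording idea can be caricatured in a few lines: launch photons from a point source, scatter them isotropically with exponential free paths, and accumulate surviving weight in the voxel visited after each step. Everything here (isotropic phase function, homogeneous medium, toy optical coefficients) is a simplification of the authors' heterogeneous 3D mesh model:

```python
import numpy as np

def mc_fluence(n_photons=2000, mu_s=10.0, mu_a=1.0, grid=21, extent=1.0, seed=0):
    """Toy 3D photon random walk with direct flux recording (sketch).

    Photons start at the grid center, take exponential free paths with
    isotropic scattering, lose weight to absorption, and deposit their
    surviving weight in the voxel reached after each step.
    """
    rng = np.random.default_rng(seed)
    flux = np.zeros((grid, grid, grid))
    half, vox = extent / 2.0, extent / grid
    for _ in range(n_photons):
        pos, w = np.zeros(3), 1.0
        for _ in range(30):
            u = rng.standard_normal(3)
            u /= np.linalg.norm(u)                 # isotropic direction
            step = rng.exponential(1.0 / mu_s)     # exponential free path
            pos = pos + step * u
            w *= np.exp(-mu_a * step)              # absorption attenuation
            idx = np.floor((pos + half) / vox).astype(int)
            if np.any(idx < 0) or np.any(idx >= grid):
                break                              # photon left the volume
            flux[tuple(idx)] += w
    return flux

flux = mc_fluence()
```

A tissue-grade model would add a Henyey-Greenstein phase function, refractive-index boundaries, and the fiber's emission cone, but the recorded-flux bookkeeping is the same.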
An overview of DANCE: a 4π BaF2 detector for neutron capture measurements at LANSCE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullmann, J. L.
2004-01-01
The Detector for Advanced Neutron Capture Experiments (DANCE) is a 162-element, 4π BaF2 array designed to make neutron capture cross-section measurements on rare or radioactive targets with masses as small as 1 mg. Accurate capture cross sections are needed in many research areas, including stellar nucleosynthesis, advanced nuclear fuel cycles, waste transmutation, and other applied programs. These cross sections are difficult to calculate accurately and must be measured. Up to now, except for a few long-lived nuclides, there are essentially no differential capture measurements on radioactive nuclei. The DANCE array is located at the Lujan Neutron Scattering Center at LANSCE, which is a continuous-spectrum neutron source with usable energies from below thermal to about 100 keV. Data acquisition is done with 320 fast waveform digitizers. The design and initial performance results, including background minimization, will be discussed.
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations, and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.
Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios
2016-09-15
Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.
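The geometry behind the angle-of-arrival measurement can be sketched with the textbook one-dimensional lateral-effect PSD relation combined with an ideal pin-hole. The study's calibrated model adds the nonlinearity, distortion, and gain-imbalance terms that this sketch omits, and the function names and numbers below are illustrative:

```python
import math

def psd_position(i1, i2, length_mm):
    """1D lateral-effect PSD: spot position from the two electrode currents.
    Equal currents put the spot at the centre (position 0)."""
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

def angle_of_arrival(i1, i2, length_mm, focal_mm):
    """Ideal pin-hole model: the arrival angle follows from the spot
    position and the optic-to-sensor distance (no distortion terms)."""
    x = psd_position(i1, i2, length_mm)
    return math.degrees(math.atan2(x, focal_mm))

# Equal nA-scale currents -> spot at centre -> 0 degrees
print(angle_of_arrival(50e-9, 50e-9, 10.0, 20.0))  # 0.0
```

A real calibration, as the abstract notes, must also estimate and remove sensor distortion and channel imbalances before this relation yields accurate angles.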
Concurrent and Accurate Short Read Mapping on Multicore Processors.
Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S
2015-01-01
We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), as well as leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA, on RNA reads of 100-400 nucleotides, which excels in execution time/sensitivity compared with state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.
Pre-fire warning system and method using a perfluorocarbon tracer
Dietz, R.N.; Senum, G.I.
1994-11-08
A composition and method are disclosed for detecting thermal overheating of an apparatus or system and for quickly and accurately locating the portions of the apparatus or system that experience a predetermined degree of such overheating. A composition made according to the invention includes perfluorocarbon tracers (PFTs) mixed with certain non-reactive carrier compounds that are effective to trap or block the PFTs within the composition at normal room temperature or at normal operating temperature of the coated apparatus or system. When a predetermined degree of overheating occurs in any of the coated components of the apparatus or system, PFTs are emitted from the compositions at a rate corresponding to the degree of overheating of the component. An associated PFT detector (or detectors) is provided and monitored to quickly identify the type of PFTs emitted so that the PFTs can be correlated with the respective PFT in the coating compositions applied on respective components in the system, thereby to quickly and accurately localize the source of the overheating of such components. 4 figs.
Pre-fire warning system and method using a perfluorocarbon tracer
Dietz, Russell N.; Senum, Gunnar I.
1994-01-01
A composition and method for detecting thermal overheating of an apparatus or system and for quickly and accurately locating the portions of the apparatus or system that experience a predetermined degree of such overheating. A composition made according to the invention includes perfluorocarbon tracers (PFTs) mixed with certain non-reactive carrier compounds that are effective to trap or block the PFTs within the composition at normal room temperature or at normal operating temperature of the coated apparatus or system. When a predetermined degree of overheating occurs in any of the coated components of the apparatus or system, PFTs are emitted from the compositions at a rate corresponding to the degree of overheating of the component. An associated PFT detector (or detectors) is provided and monitored to quickly identify the type of PFTs emitted so that the PFTs can be correlated with the respective PFT in the coating compositions applied on respective components in the system, thereby to quickly and accurately localize the source of the overheating of such components.
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. 
We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square-corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. [Figure: three sample photographs of the square with the created planform and control points.]
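The radial-distortion step mentioned above can be sketched as a fixed-point inversion of Brown's model. The coefficients and image centre below are made up purely for the round-trip check; the authors' full pipeline also handles perspective correction and orientation, which this sketch omits:

```python
import numpy as np

def undistort(points, k1, k2, center, iters=10):
    """Invert Brown's radial model  x_d = x_u * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration: repeatedly divide the distorted offset by
    the distortion factor evaluated at the current undistorted estimate."""
    p = np.asarray(points, float) - center   # work in centred coordinates
    u = p.copy()
    for _ in range(iters):
        r2 = (u ** 2).sum(axis=1, keepdims=True)
        u = p / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return u + center

# Round-trip check with small, made-up coefficients
center = np.array([320.0, 240.0])
true_pt = np.array([[400.0, 300.0]])
k1, k2 = 1e-7, 1e-14
r2 = ((true_pt - center) ** 2).sum()
distorted = center + (true_pt - center) * (1 + k1 * r2 + k2 * r2 ** 2)
recovered = undistort(distorted, k1, k2, center)   # approximately (400, 300)
```

For mild lens distortion the iteration converges in a handful of steps, which is why this simple scheme is common in calibration code.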
Scalar Dispersion from Point Sources in a Realistic Urban Environment
NASA Astrophysics Data System (ADS)
Salesky, S.; Giometto, M. G.; Christen, A.; Parlange, M. B.
2016-12-01
Accurate modeling of scalar dispersion within and above urban canopies is critical to properly predict air quality and dispersion (e.g. accidental contaminant release) in urban environments. We perform large eddy simulations (LES) of scalar dispersion from point sources in a typical North American neighborhood using topography and foliage density derived from airborne LIDAR scans with 1 m resolution in Vancouver, BC, Canada. The added drag force due to trees is parameterized in the LES as a function of the leaf area density (LAD) profile. Conversely, drag from buildings is accounted for using a direct forcing approach immersed-boundary method. The scalar advection-diffusion equation is discretized in a finite-volume framework, and accurate mass conservation is enforced through a recently developed Cartesian cut cell method. Simulations are performed with trees for different values of LAD, representative of summer and winter conditions, as well as a case without trees. The effects of varying mean wind direction (derived from observed wind climatologies) on dispersion patterns are also considered. Scalar release locations in the LES are informed by spatially distributed measurements of carbon dioxide concentration; CO2 is used as a tracer for fossil fuel emissions, since source strengths are well-known and the contribution from biological processes in this setting is small (<10%). The effects of leaf area density, source height, and wind direction on scalar statistics including the growth of the mean concentration plume and the fraction that escapes the urban canopy layer will be considered. In a companion study, the presence of trees was found to strongly modify sweep and ejection patterns for the momentum flux; here we consider the related issue of how vegetation influences coherent structures responsible for scalar transport.
Swift Burst Alert Telescope (BAT) Instrument Response
NASA Technical Reports Server (NTRS)
Parsons, A.; Hullinger, D.; Markwardt, C.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Krimm, H.; Tueller, J.; Fenimore, E.; Palmer, D.
2004-01-01
The Burst Alert Telescope (BAT), a large coded aperture instrument with a wide field-of-view (FOV), provides the gamma-ray burst triggers and locations for the Swift Gamma-Ray Burst Explorer. In addition to providing this imaging information, BAT will perform a 15 keV - 150 keV all-sky hard x-ray survey based on the serendipitous pointings resulting from the study of gamma-ray bursts and will also monitor the sky for transient hard x-ray sources. For BAT to provide spectral and photometric information for the gamma-ray bursts, the transient sources and the all-sky survey, the BAT instrument response must be determined to increasingly greater accuracy. In this talk, we describe the BAT instrument response as determined to an accuracy suitable for gamma-ray burst studies. We will also discuss the public data analysis tools developed to calculate the BAT response to sources at different energies and locations in the FOV. The level of accuracy required for the BAT instrument response used for the hard x-ray survey is significantly higher because this response must be used in the iterative clean algorithm for finding fainter sources. Because the bright sources add a lot of coding noise to the BAT sky image, fainter sources can be seen only after the counts due to the bright sources are removed. The better we know the BAT response, the lower the noise in the cleaned spectrum and thus the more sensitive the survey. Since the BAT detector plane consists of 32768 individual, 4 mm square CZT gamma-ray detectors, the most accurate BAT response would include 32768 individual detector response functions to separate mask modulation effects from differences in detector efficiencies! We describe our continuing work to improve the accuracy of the BAT instrument response and will present the current results of Monte Carlo simulations as well as BAT ground calibration data.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. 
We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
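The refinement loop described in this abstract (estimate local error, refine where the error corrupts the quantity of interest, repeat until a tolerance is met) can be illustrated with a deliberately simple one-dimensional interpolation analogue. This is not the authors' electromagnetic finite element code; the function, tolerance, and starting mesh are arbitrary stand-ins:

```python
import numpy as np

def adaptive_refine(f, a, b, tol, max_iter=50):
    """Toy analogue of error-driven adaptive refinement: split every
    element whose local (midpoint) interpolation error exceeds `tol`,
    leaving smooth regions coarse."""
    nodes = np.linspace(a, b, 5)
    for _ in range(max_iter):
        mid = 0.5 * (nodes[:-1] + nodes[1:])
        # local error estimate: |f(mid) - piecewise-linear interpolant(mid)|
        err = np.abs(f(mid) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        bad = err > tol
        if not bad.any():
            break
        nodes = np.sort(np.concatenate([nodes, mid[bad]]))
    return nodes

# Refinement clusters nodes where f varies rapidly (the sharp transition),
# much as the EM mesh clusters elements that matter for the receivers.
f = lambda x: np.tanh(20 * (x - 0.5))
mesh = adaptive_refine(f, 0.0, 1.0, 1e-3)
```

A goal-oriented estimator, as in the paper, additionally weights the local error by its influence on the receiver responses, so distant but influential elements are refined too.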
Richardson, Claire; Rutherford, Shannon; Agranovski, Igor
2018-06-01
Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM 10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available relating to the PM 2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM 10 and PM 2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions. 
Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.
Mapping air quality zones for coastal urban centers.
Freeman, Brian; Gharabaghi, Bahram; Thé, Jesse; Munshed, Mohammad; Faisal, Shah; Abdullah, Meshal; Al Aseed, Athari
2017-05-01
This study presents a new method that incorporates modern air dispersion models, allowing local terrain and land-sea breeze effects to be considered along with political and natural boundaries for more accurate mapping of air quality zones (AQZs) for coastal urban centers. This method uses local coastal wind patterns and key urban air pollution sources in each zone to more accurately calculate air pollutant concentration statistics. The new approach distributes virtual air pollution sources within each small grid cell of an area of interest and analyzes a puff dispersion model for a full year's worth of 1-hr prognostic weather data. The difference in wind patterns between coastal and inland areas creates significantly different skewness (S) and kurtosis (K) statistics for the annually averaged pollutant concentrations at ground-level receptor points for each grid cell. Plotting the S-K data highlights grouping of sources predominantly impacted by coastal winds versus inland winds. The application of the new method is demonstrated through a case study for the nation of Kuwait by developing new AQZs to support local air management programs. The zone boundaries established by the S-K method were validated by comparing the MM5 and WRF prognostic meteorological data used in the air dispersion modeling; a support vector machine classifier was trained to compare results with the graphical classification method, and the final zones were compared with data collected from Earth observation satellites to confirm locations of high-exposure-risk areas. The resulting AQZs are more accurate and support efficient management strategies for air quality compliance targets affected by local coastal microclimates. A novel method to determine air quality zones in coastal urban areas is introduced, using skewness (S) and kurtosis (K) statistics calculated from grid concentration results of air dispersion models.
The method identifies land-sea breeze effects that can be used to manage local air quality in areas of similar microclimates.
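The S-K statistics at the heart of the method are straightforward to compute from an hourly concentration series. The sketch below uses synthetic lognormal stand-ins for a steadier "inland" cell and a more intermittent "coastal" cell (all distribution parameters are invented, not from the study) to show how the two regimes separate in the S-K plane:

```python
import numpy as np

def skew_kurt(series):
    """Sample skewness and excess kurtosis of a concentration series."""
    x = np.asarray(series, float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0

rng = np.random.default_rng(1)
# Illustrative stand-ins for one year of hourly concentrations (8760 h):
# the 'coastal' cell is more intermittent (land-sea breeze -> heavier tail)
inland = rng.lognormal(0.0, 0.4, 8760)
coastal = rng.lognormal(0.0, 1.0, 8760)

s_i, k_i = skew_kurt(inland)
s_c, k_c = skew_kurt(coastal)
# Cells cluster in the S-K plane; a simple threshold (or an SVM classifier,
# as in the study) then separates coastal-wind from inland-wind regimes.
```

In the actual method these statistics come from dispersion-model grid concentrations driven by a full year of prognostic weather, not from synthetic draws.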
Microearthquake sequences along the Irpinia normal fault system in Southern Apennines, Italy
NASA Astrophysics Data System (ADS)
Orefice, Antonella; Festa, Gaetano; Alfredo Stabile, Tony; Vassallo, Maurizio; Zollo, Aldo
2013-04-01
Microearthquakes reflect a continuous readjustment of tectonic structures, such as faults, under the action of local and regional stress fields. Low magnitude seismicity in the vicinity of active fault zones may reveal insights into the mechanics of the fault systems during the inter-seismic period and shine a light on the role of fluids and other physical parameters in promoting or disfavoring the nucleation of larger size events in the same area. Here we analyzed several earthquake sequences concentrated in very limited regions along the 1980 Irpinia earthquake fault zone (Southern Italy), a complex system characterized by normal stress regime, monitored by the dense, multi-component, high dynamic range seismic network ISNet (Irpinia Seismic Network). On a specific single sequence, the May 2008 Laviano swarm, we performed accurate absolute and relative locations and estimated source parameters and scaling laws that were compared with standard stress-drops computed for the area. Additionally, from EGF deconvolution, we computed a slip model for the mainshock and investigated the space-time evolution of the events in the sequence to reveal possible interactions among earthquakes. Through the massive analysis of cross-correlation based on the master event scanning of the continuous recording, we also reconstructed the catalog of repeated earthquakes and recognized several co-located sequences. For these events, we analyzed the statistical properties, location and source parameters and their space-time evolution with the aim of inferring the processes that control the occurrence and the size of microearthquakes in a swarm.
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
2000-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
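The linear planar retrieval idea can be illustrated with an arrival-time-only least-squares solution: subtracting the range equation of a reference station from that of each other station yields equations linear in the source position and emission time. The paper's full method also folds in the magnetic bearings; the geometry and numbers below are synthetic:

```python
import numpy as np

C = 3.0e8  # propagation speed of the radio emission [m/s]

def locate_planar(stations, t_arr):
    """Planar source retrieval from arrival times by linearization.
    Unknowns: source (x, y) and emission time tau; station 0 is reference."""
    s = np.asarray(stations, float)
    t = np.asarray(t_arr, float)
    (x0, y0), t0 = s[0], t[0]
    A, b = [], []
    for (xi, yi), ti in zip(s[1:], t[1:]):
        # r_i^2 - r_0^2 expanded and equated to c^2[(t_i-tau)^2 - (t_0-tau)^2]
        A.append([2 * (x0 - xi), 2 * (y0 - yi), 2 * C**2 * (ti - t0)])
        b.append(C**2 * (ti**2 - t0**2) - (xi**2 + yi**2) + (x0**2 + y0**2))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (x, y, tau)

# Synthetic check: 4 stations on a 100 km square, source at (30, 40) km
st = [(0.0, 0.0), (1e5, 0.0), (0.0, 1e5), (1e5, 1e5)]
src = np.array([3e4, 4e4])
times = [np.hypot(*(src - np.array(p))) / C for p in st]
x, y, tau = locate_planar(st, times)
```

With noisy data and more stations the same system is solved in the least-squares sense, which is the practical appeal of the linear formulation.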
Accurate Simulation of Acoustic Emission Sources in Composite Plates
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Gorman, M. R.
1994-01-01
Acoustic emission (AE) signals propagate as the extensional and flexural plate modes in thin composite plates and plate-like geometries such as shells, pipes, and tubes. The relative amplitude of the two modes depends on the directionality of the source motion. For source motions with large out-of-plane components, such as delaminations or particle impact, the flexural or bending plate mode dominates the AE signal, with only a small extensional mode detected. A signal from such a source is well simulated with the standard pencil lead break (Hsu-Nielsen source) on the surface of the plate. For other sources, such as matrix cracking or fiber breakage, in which the source motion is primarily in-plane, the resulting AE signal has a large extensional mode component with little or no flexural mode observed. Signals from these types of sources can also be simulated with pencil lead breaks. However, the lead must be fractured on the edge of the plate to generate an in-plane source motion rather than on the surface of the plate. In many applications, such as testing of pressure vessels and piping or aircraft structures, a free edge is either not available or not in a desired location for simulation of in-plane type sources. In this research, a method was developed which allows the simulation of AE signals with a predominant extensional mode component in composite plates requiring access to only the surface of the plate.
Tremor Hypocenters Form a Narrow Zone at the Plate Interface in Two Areas of SW Japan
NASA Astrophysics Data System (ADS)
Armbruster, J. G.
2015-12-01
The tremor detectors developed for accurately locating tectonic tremor in Cascadia [Armbruster et al., JGR 2014] have been applied to data from the HINET seismic network in Japan. In the overview by Obara [Science 2002] there are three strong sources of tectonic tremor in southwest Japan: Shikoku, Kii Pen. and Tokai. The daily epicentral distributions of tremor on the HINET web site allow the identification of days when tremor in each source is active. The worst results were obtained in Shikoku, in spite of the high level of tremor activity observed there by others. This method requires a clear direct arrival of the S and P waves at the stations for coherence to be seen, so scattering and shear wave splitting are possible reasons for poor results there. Relatively wide station spacing, 19-30 km, is another possible reason. The best results were obtained in Tokai with stations STR, HRY and TYE spacing 18-19 km, and Kii Pen. with stations KRT, HYS and KAW spacing 15-22 km. In both of those areas the three station detectors see strong episodes of tremor. If detections with three stations are located by constraining them to the plate interface, a pattern of persistent sources is seen, with some intense sources. This is similar to what was seen in Cascadia. Detections with four stations give S and P arrival times of high accuracy. In Tokai the hypocenters form a narrow, 2-3 km thick, zone dipping to the north, consistent with the plate interface there. In Kii Pen. the hypocenters dip to the northwest in a thin, 2-3 km thick, zone but approximately 5 km shallower than a plate interface model for this area [Yoshioka and Murakami, GJI 2007]. The overlap of tremor sources in the 12 years analyzed here suggests relative hypocentral location errors as small as 2-3 km. We conclude that the methods developed in Cascadia will work in Japan but the typical spacing of HINET stations, ~20 km, is greater than the optimum distance found in analysis of data from Cascadia, 8 to 15 km.
Edge detection of magnetic anomalies using analytic signal of tilt angle (ASTA)
NASA Astrophysics Data System (ADS)
Alamdar, K.; Ansari, A. H.; Ghorbani, A.
2009-04-01
Magnetics is a commonly used geophysical technique to identify and image potential subsurface targets. Interpretation of magnetic anomalies is a complex process due to the superposition of multiple magnetic sources, the presence of geologic and cultural noise, and acquisition and positioning errors. Both the vertical and horizontal derivatives of potential field data are useful: the horizontal derivative enhances edges, whereas the vertical derivative narrows the width of an anomaly and so locates source bodies more accurately. The vertical and horizontal derivatives of the magnetic field can be combined into the analytic signal, which is independent of the body magnetization direction and whose maximum value lies directly over the edges of the body. The tilt angle filter is a phase-based filter defined as the angle between the vertical derivative and the total horizontal derivative. The tilt angle ranges from +90 degrees to -90 degrees, and its zero value lies over the body edge. One disadvantage of this filter is that, for deep sources, the detected edge is blurred. To overcome this problem, many authors have introduced new filters, such as the total horizontal derivative of the tilt angle or the vertical derivative of the tilt angle; because these filters use high-order derivatives, their results may be too noisy. If we combine the analytic signal and the tilt angle, a new filter termed ASTA is produced, whose maximum value lies directly over the body edge and which delineates the body edge more easily than the tilt angle, without its complexity. In this work, the new filter has been demonstrated on magnetic data from an area in the Sar-Cheshmeh region of Iran. This area, located at 55 degrees longitude and 32 degrees latitude, is a copper potential region. The main formations in this area are andesite and trachyandesite. Magnetic surveying was employed to separate the boundaries of the andesite and trachyandesite from the adjacent area.
In this regard, a variety of filters such as the analytic signal, tilt angle, and ASTA have been applied; the new ASTA filter determined the andesite boundaries from the surroundings more accurately than the other filters. Keywords: horizontal derivative, vertical derivative, tilt angle, analytic signal, ASTA, Sar-Cheshmeh.
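The filter chain described in this abstract can be sketched with NumPy gradients. For simplicity, the vertical derivative is passed in as a given array (in practice it is computed in the wavenumber domain), and the analytic-signal amplitude of the tilt angle is approximated from horizontal derivatives only, a deliberate simplification of the full 3D analytic signal:

```python
import numpy as np

def tilt_angle(field, dz_field, dx=1.0):
    """Tilt angle: arctan(vertical derivative / total horizontal derivative).
    `dz_field` is the vertical derivative of the field, supplied directly here."""
    gy, gx = np.gradient(field, dx)
    total_horiz = np.hypot(gx, gy)
    return np.arctan2(dz_field, total_horiz)   # bounded in [-pi/2, pi/2]

def asta(field, dz_field, dx=1.0):
    """Approximate analytic-signal amplitude of the tilt angle, using the
    tilt angle's horizontal derivatives only (its vertical derivative is
    omitted in this sketch)."""
    t = tilt_angle(field, dz_field, dx)
    gy, gx = np.gradient(t, dx)
    return np.hypot(gx, gy)

# Demo on a smooth synthetic anomaly; the vertical derivative is crudely
# stood in by the field itself, purely for illustration.
X, Y = np.meshgrid(np.linspace(-5.0, 5.0, 64), np.linspace(-5.0, 5.0, 64))
field = np.exp(-(X ** 2 + Y ** 2))
edge_map = asta(field, field)
```

Because the tilt angle is bounded, taking its gradient suppresses the depth-dependent blurring that motivates ASTA in the first place.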
PRECISION INTEGRATOR FOR MINUTE ELECTRIC CURRENTS
Hemmendinger, A.; Helmer, R.J.
1961-10-24
An integrator is described for measuring the value of integrated minute electrical currents. The device consists of a source capacitor connected in series with the source of such electrical currents, a second capacitor of accurately known capacitance and a source of accurately known and constant potential, means responsive to the potentials developed across the source capacitor for reversibly connecting the second capacitor in series with the source of known potential and with the source capacitor and at a rate proportional to the potential across the source capacitor to maintain the magnitude of the potential across the source capacitor at approximately zero. (AEC)
NASA Astrophysics Data System (ADS)
Vanorio, T.; Virieux, J.; Capuano, P.; Russo, G.
2005-03-01
The Campi Flegrei (CF) Caldera experiences dramatic ground deformations unsurpassed anywhere in the world. The source responsible for this phenomenon is still debated. With the aim of exploring the structure of the caldera as well as the role of hydrothermal fluids on velocity changes, a multidisciplinary approach dealing with three-dimensional delay time tomography and rock physics characterization has been followed. Selected seismic data were modeled by using a tomographic method based on an accurate finite difference travel time computation which simultaneously inverts P wave and S wave first-arrival times for both velocity model parameters and hypocenter locations. The retrieved P wave and S wave velocity images as well as the deduced Vp/Vs images were interpreted by using experimental measurements of rock physical properties on CF samples to take into account steam/water phase transition mechanisms affecting P wave and S wave velocities. Also, modeling of petrophysical properties for site-relevant rocks constrains the role of overpressured fluids on velocity. A flat and low Vp/Vs anomaly lies at 4 km depth under the city of Pozzuoli. Earthquakes are located at the top of this anomaly. This anomaly implies the presence of fractured overpressured gas-bearing formations and excludes the presence of melted rocks. At shallow depth, a high Vp/Vs anomaly located at 1 km suggests the presence of rocks containing fluids in the liquid phase. Finally, maps of the Vp*Vs product show a high Vp*Vs horseshoe-shaped anomaly located at 2 km depth. It is consistent with gravity data and well data and might constitute the on-land remainder of the caldera rim, detected below sea level by tomography using active source seismic data.
Hiding the Source Based on Limited Flooding for Sensor Networks.
Chen, Juan; Lin, Zhengkui; Hu, Ying; Wang, Bailing
2015-11-17
Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of SLP and SLP-E is validated by both theoretical analysis and simulation results.
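The core idea behind phantom-location generation — a directed walk in which every hop moves farther from the true source, so phantoms end up far away — can be sketched on a toy grid network. This is an illustrative model only, not the actual SLP protocol; the grid topology, seed, and hop count are assumptions:

```python
import random

random.seed(7)

def phantom_location(source, h):
    """Directed random walk of h hops on a grid; every hop must increase
    the Manhattan distance from the source, so the phantom ends exactly
    h grid-hops away (sketch of the phantom-spreading idea, not SLP)."""
    x, y = source
    for _ in range(h):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # re-sample until the hop strictly increases distance from the source
        while (abs(x + dx - source[0]) + abs(y + dy - source[1])
               <= abs(x - source[0]) + abs(y - source[1])):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return (x, y)

p = phantom_location((0, 0), 10)
```

Because every hop is constrained to move outward, the phantom is guaranteed to lie a full h hops from the source while its direction remains random — the two properties (distance and wide distribution) the abstract attributes to SLP.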
Light fluence dosimetry in lung-simulating cavities
NASA Astrophysics Data System (ADS)
Zhu, Timothy C.; Kim, Michele M.; Padawer, Jonah; Dimofte, Andreea; Potasek, Mary; Beeson, Karl; Parilov, Evgueni
2018-02-01
Accurate light dosimetry is critical to ensure consistent outcomes for pleural photodynamic therapy (pPDT). Ellipsoid-shaped cavities of different sizes surrounded by turbid medium are used to simulate the intracavity lung geometry. An isotropic light source is introduced and surrounded by turbid media. Direct measurements of light fluence rate were compared to Monte Carlo simulated values on the surface of the cavities for various optical properties. The primary component of the light was determined by measurements performed in air in the same geometry. The scattered component was found by submerging the air-filled cavity in scattering media (Intralipid) and absorbing media (ink). The light source was centered azimuthally but placed at two vertical positions (vertically centered and 2 cm below center) for measurements. Light fluence rate was measured using isotropic detectors placed at various angles on the ellipsoid surface. The measurements and simulations show that the scattered dose is uniform along the surface of the intracavity ellipsoid geometries in turbid media. One can express the light fluence rate empirically as φ = (4S/As)·Rd/(1 − Rd), where Rd is the diffuse reflectance, As is the surface area, and S is the source power. The measurements agree with this empirical formula to within an uncertainty of 10% for the range of optical properties studied. A GPU voxel-based Monte Carlo simulation was performed to compare with measured results. This empirical formula can be applied to arbitrary geometries, such as the pleural or intraperitoneal cavity.
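The empirical relation for the scattered fluence rate can be evaluated directly. The sketch below assumes consistent units (source power in mW, surface area in cm², fluence rate in mW/cm²); the numerical values are illustrative, not taken from the study:

```python
def scattered_fluence_rate(S, As, Rd):
    """Empirical scattered light fluence rate on the cavity wall:
    phi = (4*S/As) * Rd / (1 - Rd), with S the source power,
    As the cavity surface area, and Rd the diffuse reflectance."""
    return (4.0 * S / As) * Rd / (1.0 - Rd)

# hypothetical case: 1000 mW isotropic source, 300 cm^2 ellipsoid, Rd = 0.5
phi = scattered_fluence_rate(1000.0, 300.0, 0.5)
print(round(phi, 2))  # 13.33  (mW/cm^2)
```

Note how the Rd/(1 − Rd) factor makes the scattered dose grow rapidly as the medium becomes more diffusely reflective.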
NASA Astrophysics Data System (ADS)
Pezzo, Giuseppe; Merryman Boncori, John Peter; Atzori, Simone; Antonioli, Andrea; Salvi, Stefano
2014-07-01
In this study, we use Differential Synthetic Aperture Radar Interferometry (DInSAR) and multi-aperture interferometry (MAI) to constrain the sources of the three largest events of the 2008 Baluchistan (western Pakistan) seismic sequence, namely two Mw 6.4 events only 12 hr apart and an Mw 5.7 event that occurred 40 d later. The sequence took place in the Quetta Syntaxis, the most seismically active region of Baluchistan, tectonically located between the colliding Indian Plate and the Afghan Block of the Eurasian Plate. Surface displacements estimated from ascending and descending ENVISAT ASAR acquisitions were used to derive elastic dislocation models for the sources of the two main events. The estimated slip distributions have peak values of 120 and 130 cm on a pair of almost parallel and near-vertical faults striking NW-SE, and of 50 cm and 60 cm on two high-angle faults striking NE-SW. Values up to 50 cm were found for the largest aftershock on an NE-SW fault located between the sources of the main shocks. The MAI measurements, with their high sensitivity to the north-south motion component, are crucial in this area to accurately describe the coseismic displacement field. Our results provide insight into the deformation style of the Quetta Syntaxis, suggesting that right-lateral slip released at shallow depths on large NW fault planes is compatible with left-lateral activation on smaller NE-SW faults.
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL; Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation, as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
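Weighting arrivals by the inverse of the station-correction variance — so that stations with consistent corrections get more influence — amounts to standard inverse-variance weighting. A minimal sketch with made-up standard deviations (the values are illustrative, not from the study):

```python
def inv_variance_weights(sigmas):
    """Normalized inverse-variance weights: w_i proportional to 1/sigma_i^2."""
    raw = [1.0 / s ** 2 for s in sigmas]
    total = sum(raw)
    return [w / total for w in raw]

# hypothetical standard deviations of three stations' corrections (seconds)
sigmas = [0.05, 0.20, 0.10]
weights = inv_variance_weights(sigmas)
# the station with the most consistent corrections (0.05 s) dominates
```

A station whose correction scatters four times more than another receives sixteen times less weight, which is exactly the down-weighting of unreliable stations the abstract describes.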
Recording and quantification of ultrasonic echolocation clicks from free-ranging toothed whales
NASA Astrophysics Data System (ADS)
Madsen, P. T.; Wahlberg, M.
2007-08-01
Toothed whales produce short, ultrasonic clicks of high directionality and source level to probe their environment acoustically. This process, termed echolocation, is to a large part governed by the properties of the emitted clicks. Therefore derivation of click source parameters from free-ranging animals is of increasing importance to understand both how toothed whales use echolocation in the wild and how they may be monitored acoustically. This paper addresses how source parameters can be derived from free-ranging toothed whales in the wild using calibrated multi-hydrophone arrays and digital recorders. We outline the properties required of hydrophones, amplifiers and analog to digital converters, and discuss the problems of recording echolocation clicks on the axis of a directional sound beam. For accurate localization the hydrophone array apertures must be adapted and scaled to the behavior of, and the range to, the clicking animal, and precise information on hydrophone locations is critical. We provide examples of localization routines and outline sources of error that lead to uncertainties in localizing clicking animals in time and space. Furthermore we explore approaches to time series analysis of discrete versions of toothed whale clicks that are meaningful in a biosonar context.
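The arrival-time differences between hydrophones that feed such localization routines are commonly estimated by cross-correlating channel pairs and picking the lag of the correlation peak. A minimal synthetic sketch (a common technique, not necessarily the authors' exact processing chain; the click waveform and delay are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
click = rng.standard_normal(50)   # broadband click-like transient
true_delay = 37                   # delay in samples between two hydrophones

x1 = np.zeros(n)
x2 = np.zeros(n)
x1[1000:1050] = click
x2[1000 + true_delay:1050 + true_delay] = click

# the lag of the cross-correlation peak estimates the time-difference-of-arrival
xc = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(xc)) - (n - 1)
print(lag)  # 37
```

Dividing the lag by the sampling rate gives the time-difference-of-arrival in seconds; combining such differences across an array of known geometry yields the source position, with the array aperture limiting the achievable range accuracy as the abstract notes.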
Fast and accurate detection of spread source in large complex networks.
Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A
2018-02-06
Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic, or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3, 4) depends on the network topology and number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests performed on synthetic networks and the real Gnutella network, under the constraint that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks GMLA yields higher quality localization results than PTVA does.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
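For readers unfamiliar with the MUSIC principle that W2D-MUSIC builds on, here is a minimal narrowband 1-D MUSIC sketch for a uniform linear array with synthetic data. This illustrates only the subspace idea — projecting steering vectors onto the noise subspace — not the paper's algorithm, which additionally estimates gain, phase, and position errors:

```python
import numpy as np

rng = np.random.default_rng(0)
M, spacing = 8, 0.5                 # sensors; element spacing in wavelengths
true_angles = [-20.0, 30.0]         # synthetic source directions (degrees)
n_snap = 200

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * spacing * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_angles])
S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap)))
X = A @ S + noise

R = X @ X.conj().T / n_snap         # sample covariance matrix
_, vecs = np.linalg.eigh(R)         # eigenvalues in ascending order
En = vecs[:, : M - 2]               # noise subspace: M minus number of sources

grid = np.arange(-90.0, 90.5, 0.5)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])

# the two strongest local maxima of the pseudo-spectrum estimate the DOAs
idx = [i for i in range(1, len(grid) - 1)
       if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
idx.sort(key=lambda i: -pseudo[i])
doa = sorted(grid[i] for i in idx[:2])
```

When the assumed steering vectors are wrong (gain/phase/position errors), these pseudo-spectrum peaks degrade — which is precisely the failure mode the paper's model-error estimation step is designed to repair.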
NASA Astrophysics Data System (ADS)
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, which is especially important when a seizure occurs and must be localized. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to infer patterns of brain activity. The inverse problem — determining the underlying sources given the field sampled at the electrodes — is more difficult because it may not have a unique solution, or because low spatial resolution may not allow distinguishing between activities involving sources close to each other. Thus, sources of interest may be obscured or go undetected, and well-known source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power as a function of location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, a priori information about the sparsity of the signal must be imposed. The problem is formulated and solved using a Tikhonov regularization method, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fit of the data, and another enforcing the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem.
For the head model and brain sources considered, the result obtained shows a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the spatial sparsity constraints on the signal field concentrate power in the directions of the active sources, and consequently it is possible to calculate the position of the sources within the considered volume conductor. The method was then also tested on real EEG data. The result is in accordance with the clinical report, although improvements are needed to obtain more accurate estimates of the source positions.
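The regularized inversion step described above — trading off data fit against a penalty on the solution — has a closed form in the classical Tikhonov (quadratic-penalty) case. A minimal sketch with a random matrix standing in for the EEG lead field; the dimensions and λ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elec, n_src = 32, 100                   # electrodes, candidate source locations
G = rng.standard_normal((n_elec, n_src))  # stand-in lead-field matrix
x_true = np.zeros(n_src)
x_true[[10, 60]] = [2.0, -1.5]            # sparse source activity
b = G @ x_true + 0.01 * rng.standard_normal(n_elec)

lam = 0.1
# minimize ||G x - b||^2 + lam * ||x||^2  ->  (G^T G + lam I) x = G^T b
x_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_src), G.T @ b)

# algebraically equivalent dual form, cheaper when n_elec << n_src
x_dual = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_elec), b)
```

The two forms are identical by the push-through identity. The sparsity-promoting variant discussed in the abstract replaces the quadratic penalty ||x||² with one that favors few active sources, which is what concentrates power at the true source locations.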
On the ability of human listeners to distinguish between front and back.
Zhang, Peter Xinya; Hartmann, William M
2010-02-01
In order to determine whether a sound source is in front or in back, listeners can use location-dependent spectral cues caused by diffraction from their anatomy. This capability was studied using a precise virtual reality technique (VRX) based on a transaural technology. Presented with a virtual baseline simulation accurate up to 16 kHz, listeners could not distinguish between the simulation and a real source. Experiments requiring listeners to discriminate between front and back locations were performed using controlled modifications of the baseline simulation to test hypotheses about the important spectral cues. The experiments concluded: (1) Front/back cues were not confined to any particular 1/3rd or 2/3rd octave frequency region. Often adequate cues were available in any of several disjoint frequency regions. (2) Spectral dips were more important than spectral peaks. (3) Neither monaural cues nor interaural spectral level difference cues were adequate. (4) Replacing baseline spectra by sharpened spectra had minimal effect on discrimination performance. (5) When presented with an interaural time difference less than 200 μs, which pulled the image to the side, listeners still successfully discriminated between front and back, suggesting that front/back discrimination is independent of azimuthal localization within certain limits. Copyright 2009 Elsevier B.V. All rights reserved.
Martiniano, Robert; Mcginnis, Sandra; Moore, Jean
2010-01-01
Health workforce researchers routinely conduct studies to determine whether a profession is currently in short supply and whether future shortages are likely. This is particularly important for registered nursing since the profession has experienced periodic shortages over the past three decades. Registered nurse (RN) forecast studies can be valuable in quantifying supply and demand gaps and identifying the most appropriate strategies to avert future shortages. In order to quantify RN supply/demand gaps, it is important to have accurate data on RNs, including the number of active RNs as well as their demographic, education, and practice characteristics, and work location(s). A lack of relevant and timely data on the nursing workforce is a significant barrier to identifying where nursing shortages exist, where they are most severe, and determining the factors that contribute to them. This lack of understanding impedes the development of effective health workforce programs and policies to mitigate shortages and the ability to evaluate these programs and policies for effectiveness. This study describes the national data sources available to nursing researchers to study the supply and distribution of the RN workforce and assesses the sources' strengths and limitations. This study also explores the potential for using state-level data for nursing workforce research.
A Modeling Approach to Enhance Animal-Obtained Oceanographic Data Geo- Position
NASA Astrophysics Data System (ADS)
Tremblay, Y.; Robinson, P.; Weise, M. J.; Costa, D. P.
2006-12-01
Diving animals are increasingly being used as platforms to collect oceanographic data such as CTD profiles. Animal-borne sensors provide a large amount of data that have to be spatially referenced. Because of technical limitations, the geo-position of these data mostly comes from the interpolation of locations obtained through the ARGOS positioning system. This system lacks spatio-temporal resolution compared to the Global Positioning System (GPS), and therefore the positions of these oceanographic data are not well defined. A consequence of this is that many data collected in coastal regions are discarded, because the estimated locations of many casts fall on land. Using modeling techniques, we propose a method to deal with this problem. The method is rather intuitive: instead of deleting unreasonable or low-quality locations, it uses them by taking their lack of precision into account as a source of information. In a similar way, coastlines are used as sources of information, because marine animals do not travel over land. The method was evaluated using tracks obtained simultaneously with the Argos and GPS systems. The tracks obtained from this method are considerably enhanced and allow more accurate geo-referencing of the oceanographic data. In addition, the method provides a way to evaluate spatial errors for each cast that is not otherwise possible with classical filtering methods.
NASA Astrophysics Data System (ADS)
Yu, Kuangyou; Xing, Zhenyu; Huang, Xiaofeng; Deng, Junjun; Andersson, August; Fang, Wenzheng; Gustafsson, Örjan; Zhou, Jiabin; Du, Ke
2018-03-01
Regional haze over China has severe implications for air quality and regional climate. To effectively combat these effects, the high uncertainties regarding the emissions from different sources need to be reduced. In this paper, which is the third in a series on the sources of PM2.5 in pollution hotspot regions of China, we focus on the sources of black carbon aerosols (BC), using carbon isotope signatures. Four-season samples were collected at two key locations: Beijing-Tianjin-Hebei (BTH, part of the Northern China Plain), and the Pearl River Delta (PRD). We find that fossil fuel combustion was the predominant source of BC in both the BTH and PRD regions, accounting for 75 ± 5%. However, which fossil fuel components dominated differed significantly between BTH and PRD, and varied dramatically with season. Coal combustion is overall the dominant BC source in BTH, accounting for 46 ± 12%, with the maximum value (62%) found in winter. In contrast, for the PRD region, liquid fossil fuel combustion (e.g., oil, diesel, and gasoline) is the dominant source of BC, with an annual mean value of 41 ± 15% and a maximum value of 55% found in winter. Region- and season-specific source apportionments are recommended both to accurately assess the climate impact of carbonaceous aerosol emissions and to effectively mitigate the deteriorating air quality caused by carbonaceous aerosols.
NASA Astrophysics Data System (ADS)
Prabhat, Prashant; Peet, Michael; Erdogan, Turan
2016-03-01
In order to design a fluorescence experiment, typically the spectra of a fluorophore and of a filter set are overlaid on a single graph and the spectral overlap is evaluated intuitively. However, in a typical fluorescence imaging system the fluorophores and optical filters are not the only wavelength dependent variables - even the excitation light sources have been changing. For example, LED Light Engines may have a significantly different spectral response compared to the traditional metal-halide lamps. Therefore, for a more accurate assessment of fluorophore-to-filter-set compatibility, all sources of spectral variation should be taken into account simultaneously. Additionally, intuitive or qualitative evaluation of many spectra does not necessarily provide a realistic assessment of the system performance. "SearchLight" is a freely available web-based spectral plotting and analysis tool that can be used to address the need for accurate, quantitative spectral evaluation of fluorescence measurement systems. This tool is available at: http://searchlight.semrock.com/. Based on a detailed mathematical framework [1], SearchLight calculates signal, noise, and signal-to-noise ratio for multiple combinations of fluorophores, filter sets, light sources and detectors. SearchLight allows for qualitative and quantitative evaluation of the compatibility of filter sets with fluorophores, analysis of bleed-through, identification of optimized spectral edge locations for a set of filters under specific experimental conditions, and guidance regarding labeling protocols in multiplexing imaging assays. Entire SearchLight sessions can be shared with colleagues and collaborators and saved for future reference. [1] Anderson, N., Prabhat, P. and Erdogan, T., Spectral Modeling in Fluorescence Microscopy, http://www.semrock.com (2010).
NASA Astrophysics Data System (ADS)
Prasad, K.; Thorpe, A. K.; Duren, R. M.; Thompson, D. R.; Whetstone, J. R.
2016-12-01
The National Institute of Standards and Technology (NIST) has supported the development and demonstration of a measurement capability to accurately locate greenhouse gas sources and measure their flux to the atmosphere over urban domains. However, uncertainties in transport models, which form the basis of all top-down approaches, can significantly affect our capability to attribute sources and predict their flux to the atmosphere. Reducing discrepancies between bottom-up and top-down models will require high resolution transport models as well as validation and verification of dispersion models over an urban domain. Tracer experiments involving the release of perfluorocarbon tracers (PFTs) at known flow rates offer the best approach for validating dispersion/transport models. However, tracer experiments are limited by cost, the ability to make continuous measurements, and environmental concerns. Natural tracer experiments, such as the leak from the Aliso Canyon underground storage facility, offer a unique opportunity to improve and validate high resolution transport models, test leak hypotheses, and estimate the amount of methane released. High spatial resolution (10 m) Large Eddy Simulations (LES) coupled with WRF atmospheric transport models were performed to simulate the dynamics of the Aliso Canyon methane plume and to quantify the source. High resolution forward simulation results were combined with aircraft- and tower-based in-situ measurements as well as data from NASA airborne imaging spectrometers. Comparison of simulation results with measurement data demonstrates the capability of the LES models to accurately model the transport and dispersion of methane plumes over urban domains.
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inverse algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of a direct solver (MUMPS), we can efficiently compute the electromagnetic fields for multiple sources along with the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
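The Gauss-Newton update used in such inversions — linearize the forward response, solve the normal equations, step — can be illustrated on a toy nonlinear fit. The exponential model below is a stand-in for the CSEM forward problem, chosen only so the block is self-contained and runnable:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-0.7 * t)                # synthetic noise-free "data"

m = np.array([1.0, 1.0])                  # initial model [a, b] for a*exp(-b*t)
for _ in range(20):
    f = m[0] * np.exp(-m[1] * t)          # forward response at current model
    r = y - f                             # data residual
    J = np.column_stack([np.exp(-m[1] * t),                 # df/da
                         -m[0] * t * np.exp(-m[1] * t)])    # df/db
    # Gauss-Newton normal equations (tiny damping for numerical safety)
    dm = np.linalg.solve(J.T @ J + 1e-12 * np.eye(2), J.T @ r)
    m = m + dm
# m converges to the true parameters [2.0, 0.7]
```

A real CSEM code differs mainly in scale: the Jacobian comes from finite-element sensitivities, a regularization term is added to the normal equations, and the linear solve is done iteratively (e.g., inexact conjugate gradients) rather than directly.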
NASA Astrophysics Data System (ADS)
Krings, Thomas; Neininger, Bruno; Gerilowski, Konstantin; Krautwurst, Sven; Buchwitz, Michael; Burrows, John P.; Lindemann, Carsten; Ruhtz, Thomas; Schüttemeyer, Dirk; Bovensmann, Heinrich
2018-02-01
Reliable techniques to infer greenhouse gas emission rates from localised sources require accurate measurement and inversion approaches. In this study airborne remote sensing observations of CO2 by the MAMAP instrument and airborne in situ measurements are used to infer emission estimates of carbon dioxide released from a cluster of coal-fired power plants. The study area is complex due to sources being located in close proximity and overlapping associated carbon dioxide plumes. For the analysis of in situ data, a mass balance approach is described and applied, whereas for the remote sensing observations an inverse Gaussian plume model is used in addition to a mass balance technique. A comparison between methods shows that results for all methods agree within 10 % or better with uncertainties of 10 to 30 % for cases in which in situ measurements were made for the complete vertical plume extent. The computed emissions for individual power plants are in agreement with results derived from emission factors and energy production data for the time of the overflight.
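The in situ mass-balance approach integrates the concentration enhancement above background, multiplied by the wind speed, over a downwind cross-section of the plume. A minimal sketch with invented numbers; units must be kept consistent (kg m⁻³, m s⁻¹, m²):

```python
def mass_balance_flux(enhancements, wind_speed, cell_area):
    """Emission rate Q = sum_i (C_i - C_bg) * u * dA over the plume
    cross-section: `enhancements` are concentration enhancements above
    background (kg/m^3), `wind_speed` the component perpendicular to the
    transect (m/s), `cell_area` the area of each cross-section cell (m^2)."""
    return sum(c * wind_speed * cell_area for c in enhancements)

# hypothetical transect: five cells with CO2 enhancements around 1-3 mg/m^3,
# 5 m/s perpendicular wind, 100 m x 100 m cells
q = mass_balance_flux([1e-6, 2e-6, 3e-6, 2e-6, 1e-6], 5.0, 1e4)
print(q)  # 0.45 kg/s
```

The main practical caveat, noted in the abstract, is that the transect must capture the complete vertical extent of the plume; otherwise the sum systematically underestimates the emission rate.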
Use of Tritium Accelerator Mass Spectrometry for Tree Ring Analysis
LOVE, ADAM H.; HUNT, JAMES R.; ROBERTS, MARK L.; SOUTHON, JOHN R.; CHIARAPPA - ZUCCA, MARINA L.; DINGLEY, KAREN H.
2010-01-01
Public concerns over the health effects associated with low-level and long-term exposure to tritium released from industrial point sources have generated the demand for better methods to evaluate historical tritium exposure levels for these communities. The cellulose of trees accurately reflects the tritium concentration in the source water and may contain the only historical record of tritium exposure. The tritium activity in the annual rings of a tree was measured using accelerator mass spectrometry to reconstruct historical annual averages of tritium exposure. Milligram-sized samples of the annual tree rings from a Tamarix located at the Nevada Test Site were used for validation of this methodology. The salt cedar was chosen since it had a single source of tritiated water that was well characterized as it varied over time. The decay-corrected tritium activity of the water in which the salt cedar grew closely agrees with the organically bound tritium activity in its annual rings. This demonstrates that the milligram-sized samples used in tritium accelerator mass spectrometry are suited for reconstructing anthropogenic tritium levels in the environment. PMID:12144257
Characteristic Analysis of Air-gun Source Wavelet based on the Vertical Cable Data
NASA Astrophysics Data System (ADS)
Xing, L.
2016-12-01
Air guns are important sources for marine seismic exploration. Far-field wavelets of air gun arrays, as a necessary input for pre-stack processing and source models, play an important role in marine seismic data processing and interpretation. When an air gun fires, it generates a series of air bubbles. As in onshore seismic exploration, the water forms a plastic fluid near the bubble; the farther the measurement point is from the air gun, the more stable and representative the recorded wavelet becomes. In practice, hydrophones should be placed more than 100 m from the air gun; however, traditional seismic cables cannot meet this requirement. Vertical cables, on the other hand, provide a viable solution to this problem. This study uses a vertical cable to record wavelets from 38 air guns, with data collected offshore Southeast Qiong, where the water depth is over 1000 m. The wavelets measured using this technique coincide very well with the simulated wavelets and can therefore be taken to represent the true shape of the wavelets. This experiment fills a technology gap in China.
MR Imaging-Guided Attenuation Correction of PET Data in PET/MR Imaging.
Izquierdo-Garcia, David; Catana, Ciprian
2016-04-01
Attenuation correction (AC) is one of the most important challenges in the recently introduced combined PET/magnetic resonance (MR) scanners. PET/MR AC (MR-AC) approaches aim to develop methods that allow accurate estimation of the linear attenuation coefficients of the tissues and other components located in the PET field of view. MR-AC methods can be divided into 3 categories: segmentation, atlas, and PET based. This review provides a comprehensive list of the state-of-the-art MR-AC approaches and their pros and cons. The main sources of artifacts are presented. Finally, this review discusses the current status of MR-AC approaches for clinical applications. Copyright © 2016 Elsevier Inc. All rights reserved.
Main magnetic field of Jupiter and its implications for future orbiter missions
NASA Technical Reports Server (NTRS)
Acuna, M. H.; Ness, N. F.
1975-01-01
A very strong planetary magnetic field and an enormous magnetosphere with extremely intense radiation belts exist at Jupiter. The Pioneer 10 and 11 fly-bys confirmed and extended the earlier ground-based estimates of many of these characteristics but left several important features unanswered or poorly understood: the source mechanism and location of the decametric emissions, and the absorption effects of the natural satellites Amalthea, Io, Europa and Ganymede. High-inclination orbits (exceeding 60 deg) with low periapses (less than 2 Jupiter radii) are required to map the radiation belts and main magnetic field of Jupiter accurately enough to permit full investigation of these and associated phenomena.
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
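The core linearized step of such an arrival-time localization can be sketched as a Gauss-Newton least-squares inversion for source position and emission time. The receiver geometry, sound speed, and noise-free synthetic arrivals below are illustrative assumptions, not the paper's Chukchi Sea configuration; the full Bayesian method additionally estimates receiver and environmental parameters under priors and scales uncertainties via ABIC.

```python
import numpy as np

def localize(receivers, t_obs, c=1500.0, n_iter=20):
    """Gauss-Newton estimate of source position x and emission time t0
    from acoustic arrival times at known receiver positions (direct paths
    only; sound speed c assumed constant)."""
    x = receivers.mean(axis=0) + 1.0   # initial guess near array centroid
    t0 = 0.0
    for _ in range(n_iter):
        d = np.linalg.norm(receivers - x, axis=1)        # source-receiver ranges
        r = t_obs - (t0 + d / c)                         # travel-time residuals
        # Jacobian of predicted arrival times w.r.t. (x, y, z, t0)
        J = np.hstack([-(receivers - x) / (c * d[:, None]),
                       np.ones((len(d), 1))])
        dm, *_ = np.linalg.lstsq(J, r, rcond=None)
        x, t0 = x + dm[:3], t0 + dm[3]
    return x, t0

# five spatially separated hydrophones (x, y, z in metres, z down negative)
rec = np.array([[0, 0, -10], [500, 0, -12], [0, 500, -9],
                [500, 500, -11], [250, 250, -100.0]])
src = np.array([200.0, 150.0, -30.0])                    # "vocalizing walrus"
t = 2.0 + np.linalg.norm(rec - src, axis=1) / 1500.0     # synthetic arrivals
x_hat, t0_hat = localize(rec, t)
```

With noise-free data the inversion recovers the synthetic source; the paper's contribution is the rigorous treatment of the uncertainties this sketch ignores.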
Lonini, Luca; Reissman, Timothy; Ochoa, Jose M; Mummidisetty, Chaithanya K; Kording, Konrad; Jayaraman, Arun
2017-10-01
The objective of rehabilitation after spinal cord injury is to enable successful function in everyday life and independence at home. Clinical tests can assess whether patients are able to execute functional movements but are limited in assessing such information at home. A prototype system is developed that detects stand-to-reach activities, a movement with important functional implications, at multiple locations within a mock kitchen. Ten individuals with incomplete spinal cord injuries performed a sequence of standing and reaching tasks. The system monitored their movements by combining two sources of information: a triaxial accelerometer, placed on the subject's thigh, detected sitting or standing, and a network of radio frequency tags, wirelessly connected to a wrist-worn device, detected reaching at three locations. A threshold-based algorithm detected execution of the combined tasks and accuracy was measured by the number of correctly identified events. The system was shown to have an average accuracy of 98% for inferring when individuals performed stand-to-reach activities at each tag location within the same room. The combination of accelerometry and tags yielded accurate assessments of functional stand-to-reach activities within a home environment. Optimization of this technology could simplify patient compliance and allow clinicians to assess functional home activities.
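The fusion of the two sensor streams can be sketched as a simple event rule: report a reach when the accelerometer state is "standing" and the wrist device detects a tag. The state encoding, sampling, and de-duplication rule below are hypothetical simplifications of the prototype described above.

```python
def detect_stand_to_reach(standing, tag_hits):
    """Emit (sample_index, tag_id) events when the thigh accelerometer
    indicates standing and the wrist-worn device reports an RF tag in
    range. `standing` is a per-sample boolean stream; `tag_hits` maps
    sample index -> tag id (absent when no tag is detected). Consecutive
    hits on the same tag count as one event."""
    events, prev_tag = [], None
    for i, is_standing in enumerate(standing):
        tag = tag_hits.get(i)
        if is_standing and tag is not None and tag != prev_tag:
            events.append((i, tag))          # new reach while standing
        prev_tag = tag if is_standing else None
    return events

# toy sequence: sit, stand, reach at "sink" (twice), then at "cupboard", sit
stream = [False, True, True, True, True, False]
hits = {2: "sink", 3: "sink", 4: "cupboard"}
events = detect_stand_to_reach(stream, hits)
```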
Comparing Paper and Tablet Modes of Retrospective Activity Space Data Collection.
Yabiku, Scott T; Glick, Jennifer E; Wentz, Elizabeth A; Ghimire, Dirgha; Zhao, Qunshan
2017-01-01
Individual actions are both constrained and facilitated by the social context in which individuals are embedded. But research to test specific hypotheses about the role of space on human behaviors and well-being is limited by the difficulty of collecting accurate and personally relevant social context data. We report on a project in Chitwan, Nepal, that directly addresses challenges to collect accurate activity space data. We test if a computer assisted interviewing (CAI) tablet-based approach to collecting activity space data was more accurate than a paper map-based approach; we also examine which subgroups of respondents provided more accurate data with the tablet mode compared to paper. Results show that the tablet approach yielded more accurate data when comparing respondent-indicated locations to the known locations as verified by on-the-ground staff. In addition, the accuracy of the data provided by older and less healthy respondents benefited more from the tablet mode.
Enhancements to the Bayesian Infrasound Source Location Method
2012-09-01
Marcillo, Omar E.; Arrowsmith, Stephen J.; Whitaker, Rod W.; Anderson, Dale N.
We report on R&D that is enabling enhancements to the Bayesian Infrasound Source Location (BISL) method for infrasound event location.
Zhou, Ruojing; Mou, Weimin
2016-08-01
Cognitive mapping is assumed to be through hippocampus-dependent place learning rather than striatum-dependent response learning. However, we proposed that either type of spatial learning, as long as it involves encoding metric relations between locations and reference points, could lead to a cognitive map. Furthermore, the fewer reference points to specify individual locations, the more accurate a cognitive map of these locations will be. We demonstrated that participants have more accurate representations of vectors between 2 locations and of configurations among 3 locations when locations are individually encoded in terms of a single landmark than when locations are encoded in terms of a boundary. Previous findings have shown that learning locations relative to a boundary involve stronger place learning and higher hippocampal activation whereas learning relative to a single landmark involves stronger response learning and higher striatal activation. Recognizing this, we have provided evidence challenging the cognitive map theory but favoring our proposal. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
DeGrandpre, K.; Pesicek, J. D.; Lu, Z.
2016-12-01
During the summer of 2014 and the early spring of 2015, two notable increases in seismic activity at Semisopochnoi volcano in the western Aleutian Islands were recorded on AVO seismometers on Semisopochnoi and neighboring islands. These seismic swarms did not lead to an eruption. This study employs differential SAR techniques using TerraSAR-X images in conjunction with more accurate relocation of the recorded seismic events, obtained through simultaneous inversion of event travel times and a three-dimensional velocity model using tomoDD. The interferograms created from the SAR images exhibit surprising coherence and an island-wide spatial distribution of inflation, which is then used in a Mogi model to define the three-dimensional location and volume change required for a source at Semisopochnoi to produce the observed surface deformation. The tomoDD relocations provide a more accurate and realistic three-dimensional velocity model as well as tighter clustering of events for both swarms, which clearly outlines a linear seismic void within the larger group of shallow (<10 km) seismicity. While no direct conclusions about the relationship between these seismic events and the observed surface deformation can be made at this time, these techniques are complementary and efficient forms of remotely monitoring volcanic activity that provide much deeper insight into the processes involved without the risk of hazardous or costly field work.
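For reference, the Mogi point-source model used in such deformation studies relates a volume change at depth to surface displacement in an elastic half-space. This is a generic sketch with illustrative parameters (Poisson's ratio 0.25, an arbitrary volume change and depth), not the study's fitted Semisopochnoi source.

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement (m) at radial distance r (m) from the
    axis of a Mogi point source: a volume change dV (m^3) at the given
    depth (m) in a homogeneous elastic half-space, Poisson's ratio nu."""
    R = np.sqrt(r**2 + depth**2)              # distance from source to surface point
    return (1.0 - nu) * dV * depth / (np.pi * R**3)

r = np.linspace(0.0, 10e3, 101)               # profile 0-10 km from the axis
uz = mogi_uz(r, depth=4e3, dV=5e6)            # e.g. 5x10^6 m^3 at 4 km depth
```

Inverting this forward model against the InSAR-derived displacement field yields the source location and volume change the abstract refers to.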
Applying geologic sensitivity analysis to environmental risk management: The financial implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, D.T.
The financial risks associated with environmental contamination can be staggering and are often difficult to identify and accurately assess. Geologic sensitivity analysis is gaining recognition as a significant and useful tool that can empower the user with crucial information concerning environmental risk management and brownfield redevelopment. It is particularly useful when (1) evaluating the potential risks associated with redevelopment of historical industrial facilities (brownfields) and (2) planning for future development, especially in areas of rapid development, because the number of potential contaminating sources often increases with economic development. An examination of the financial implications of geologic sensitivity analysis in southeastern Michigan, drawn from numerous case studies, indicates that the environmental cost of contamination may be 100 to 1,000 times greater at a geologically sensitive location than at the least sensitive location. Geologic sensitivity analysis has demonstrated that near-surface geology may influence the environmental impact of a contaminated site to a greater extent than the amount and type of industrial development.
Calculated organ doses for Mayak production association central hall using ICRP and MCNP.
Choe, Dong-Ok; Shelkey, Brenda N; Wilde, Justin L; Walk, Heidi A; Slaughter, David M
2003-03-01
As part of an ongoing dose reconstruction project, equivalent organ dose rates from photons and neutrons were estimated using the energy spectra measured in the central hall above the graphite reactor core located in the Russian Mayak Production Association facility. Reconstruction of the work environment was necessary due to the lack of personal dosimeter data for neutrons in the time period prior to 1987. A typical worker scenario for the central hall was developed for the Monte Carlo Neutron Photon-4B (MCNP) code. The resultant equivalent dose rates for neutrons and photons were compared with the equivalent dose rates derived from calculations using the conversion coefficients in the International Commission on Radiological Protection Publications 51 and 74 in order to validate the model scenario for this Russian facility. The MCNP results were in good agreement with the results of the ICRP publications indicating the modeling scenario was consistent with actual work conditions given the spectra provided. The MCNP code will allow for additional orientations to accurately reflect source locations.
An Autonomous Distributed Fault-Tolerant Local Positioning System
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
We describe a fault-tolerant, GPS-independent (Global Positioning System) distributed autonomous positioning system for static/mobile objects and present solutions for providing highly accurate geo-location data for those objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depend on two factors: its timeliness in broadcasting signals and the knowledge of its geometry, i.e., the locations of and distances between the beacons. Existing distributed positioning systems either synchronize to a common external source like GPS or establish their own time synchrony using a master-slave scheme, designating a particular beacon as the master to which the other beacons synchronize, resulting in a single point of failure. Another drawback of existing positioning systems is that they do not address various fault manifestations, in particular communication link failures, which, as in wireless networks, increasingly dominate process failures and are typically transient and mobile in the sense that they affect different messages to/from different processes over time.
NASA Astrophysics Data System (ADS)
Kulisek, J. A.; Schweppe, J. E.; Stave, S. C.; Bernacki, B. E.; Jordan, D. V.; Stewart, T. N.; Seifert, C. E.; Kernan, W. J.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this challenge, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements without the need for human analyst intervention. The method can be calibrated using radiation transport simulations along with data from previous flights over areas for which the isotopic composition need not be known. Over the examined measured and simulated data sets, the method generated accurate background estimates even in the presence of a strong, 60Co source. The potential to track large and abrupt changes in background spectral shape and magnitude was demonstrated. The method can be implemented fairly easily in most modern computing languages and environments.
Senko, Jesse; Nichols, Wallace J; Ross, James Perran; Willcox, Adam S
2009-12-01
Sea turtles have historically been an important food resource for many coastal inhabitants of Mexico. Today, the consumption of sea turtle meat and eggs continues in northwestern Mexico despite well-documented legal protection and market conditions providing easier access to other more reliable protein sources. Although there is growing evidence that consuming sea turtles may be harmful to human health due to biotoxins, environmental contaminants, viruses, parasites, and bacteria, many at-risk individuals, trusted information sources, and risk communicators may be unaware of this information. Therefore, we interviewed 134 residents and 37 physicians in a region with high rates of sea turtle consumption to: (1) examine their knowledge and perceptions concerning these risks, as a function of sex, age, occupation, education and location; (2) document the occurrence of illness resulting from consumption; and (3) identify information needs for effective risk communication. We found that 32% of physicians reported having treated patients who were sickened from sea turtle consumption. Although physicians believed sea turtles were an unhealthy food source, they were largely unaware of specific health hazards found in regional sea turtles, regardless of location. By contrast, residents believed that sea turtles were a healthy food source, regardless of sex, age, occupation, and education, and they were largely unaware of specific health hazards found in regional sea turtles, regardless of age, occupation, and education. Although most residents indicated that they would cease consumption if their physician told them it was unhealthy, women were significantly more likely to do so than men. These results suggest that residents may lack the necessary knowledge to make informed dietary decisions and physicians do not have enough accurate information to effectively communicate risks with their patients.
NASA Astrophysics Data System (ADS)
Wang, Lina; Jayaratne, Rohan; Heuff, Darlene; Morawska, Lidia
A composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to the various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the emission distribution of real vehicle flow accurately. Hence, this model was able to quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments based on the vehicle fleet information, which not only helped to quantify the enhanced emissions at critical locations but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed-natural-gas-fuelled buses. The acceleration distance was found to be of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. Although the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure, as well as the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used for the input of initial source definitions in future dispersion models.
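The segment bookkeeping at the heart of such a model can be sketched as rate-times-residence-time sums over driving modes. The per-mode emission rates and segment times below are hypothetical placeholders, not the study's measured bus values.

```python
# particle number emitted per second for each driving mode (hypothetical)
RATES = {"cruise": 1.0e12, "decelerate": 0.8e12,
         "idle": 0.5e12, "accelerate": 3.0e12}

def segment_emissions(segments):
    """Total particle number emitted in each segment of the queue, given
    the driving mode and the time (s) a vehicle spends in that segment."""
    return [(mode, RATES[mode] * seconds) for mode, seconds in segments]

# one bus traversing a platform: brake, dwell at the stop, accelerate away
platform = [("decelerate", 4.0), ("idle", 20.0), ("accelerate", 6.0)]
emitted = segment_emissions(platform)
```

Even with a long idle dwell, the acceleration segment dominates in this toy example, mirroring the study's finding that the highest emissions occur where buses accelerate.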
Wald, D.J.; Graves, R.W.
2001-01-01
Using numerical tests for a prescribed heterogeneous earthquake slip distribution, we examine the importance of accurate Green's functions (GF) for finite fault source inversions which rely on coseismic GPS displacements and leveling line uplift alone and in combination with near-source strong ground motions. The static displacements, while sensitive to the three-dimensional (3-D) structure, are less so than seismic waveforms and thus are an important contribution, particularly when used in conjunction with waveform inversions. For numerical tests of an earthquake source and data distribution modeled after the 1994 Northridge earthquake, a joint geodetic and seismic inversion allows for reasonable recovery of the heterogeneous slip distribution on the fault. In contrast, inaccurate 3-D GFs or multiple 1-D GFs allow only partial recovery of the slip distribution given strong motion data alone. Likewise, using just the GPS and leveling line data requires significant smoothing for inversion stability, and hence, only a blurred vision of the prescribed slip is recovered. Although the half-space approximation for computing the surface static deformation field is no longer justifiable based on the high level of accuracy for current GPS data acquisition and the computed differences between 3-D and half-space surface displacements, a layered 1-D approximation to 3-D Earth structure provides adequate representation of the surface displacement field. However, even with the half-space approximation, geodetic data can provide additional slip resolution in the joint seismic and geodetic inversion provided a priori fault location and geometry are correct. Nevertheless, the sensitivity of the static displacements to the Earth structure begs caution for interpretation of surface displacements, particularly those recorded at monuments located in or near basin environments. Copyright 2001 by the American Geophysical Union.
Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk
2015-01-01
Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Estimating the location of a capsule endoscope in the gastrointestinal (GI) tract to within several millimeters is a challenging task, mainly because the radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. An accurate path-loss model is therefore required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the simulated loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modelling in-body RF propagation, since real measurements are quite infeasible for capsule endoscopy subjects.
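A common functional form for such in-body path-loss models is the log-distance law; the coefficients below are illustrative placeholders, not the values fitted from the paper's FDTD simulations.

```python
import math

def path_loss_db(d_mm, pl0_db=35.0, n=6.0, d0_mm=10.0):
    """Log-distance path loss PL(d) = PL(d0) + 10 n log10(d / d0).
    Large exponents (n >> 2, the free-space value) reflect the strong
    attenuation of 2.4 GHz signals in tissue; all coefficients here are
    hypothetical."""
    return pl0_db + 10.0 * n * math.log10(d_mm / d0_mm)

# loss grows steeply with capsule-to-receiver distance
losses = [path_loss_db(d) for d in (10, 50, 100, 200)]
```

A localization algorithm would invert this expression, mapping a measured loss back to a distance estimate from each on-body receiver.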
Subsurface solute transport with one-, two-, and three-dimensional arbitrary shape sources
NASA Astrophysics Data System (ADS)
Chen, Kewei; Zhan, Hongbin; Zhou, Renjie
2016-07-01
Solutions with one-, two-, and three-dimensional arbitrary shape source geometries will be very helpful tools for investigating a variety of contaminant transport problems in the geological media. This study proposed a general method to develop new solutions for solute transport in a saturated, homogeneous aquifer (confined or unconfined) with a constant, unilateral groundwater flow velocity. Several typical source geometries, such as arbitrary line sources, vertical and horizontal patch sources, circular and volumetric sources, were considered. The sources can sit on the upper or lower aquifer boundary to simulate light non-aqueous-phase-liquids (LNAPLs) or dense non-aqueous-phase-liquids (DNAPLs), respectively, or can be located anywhere inside the aquifer. The developed new solutions were tested against previous benchmark solutions under special circumstances and were shown to be robust and accurate. Such solutions can also be used as a starting point for the inverse problem of source zone and source geometry identification in the future. The following findings can be obtained from analyzing the solutions. The source geometry, including shape and orientation, generally played an important role for the concentration profile through the entire transport process. When comparing the inclined line sources with the horizontal line sources, the concentration contours expanded considerably along the vertical direction, and shrank considerably along the groundwater flow direction. A planar source sitting on the upper aquifer boundary (such as a LNAPL pool) would lead to significantly different concentration profiles compared to a planar source positioned in a vertical plane perpendicular to the flow direction. For a volumetric source, its dimension along the groundwater flow direction became less important compared to its other two dimensions.
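The building block behind such arbitrary-shape source solutions is the point-source kernel of the advection-dispersion equation, which is then superposed (integrated) over the source geometry. A minimal sketch with illustrative parameters and no aquifer boundaries follows; the paper's solutions additionally handle boundary conditions and continuous releases.

```python
import numpy as np

def point_kernel(x, y, z, t, M=1.0, v=1.0, Dx=0.1, Dy=0.01, Dz=0.01):
    """Concentration from an instantaneous point source of mass M at the
    origin in an infinite aquifer with uniform flow v along x and
    dispersion coefficients Dx, Dy, Dz (all units consistent)."""
    coef = M / (8.0 * (np.pi * t) ** 1.5 * np.sqrt(Dx * Dy * Dz))
    return coef * np.exp(-((x - v * t) ** 2 / (4 * Dx * t)
                           + y ** 2 / (4 * Dy * t)
                           + z ** 2 / (4 * Dz * t)))

# a horizontal line source: superpose point kernels along 0 <= x0 <= 2
x0s = np.linspace(0.0, 2.0, 41)
c_line = np.mean([point_kernel(6.0 - x0, 0.0, 0.0, t=4.0) for x0 in x0s])
```

Replacing the line of kernels with integrals over patches or volumes gives the planar and volumetric source geometries the abstract discusses.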
NASA Astrophysics Data System (ADS)
Kim, K. T.; Kim, J. H.; Han, M. J.; Heo, Y. J.; Park, S. K.
2018-02-01
Imaging technology based on gamma-ray sources has been extensively used in non-destructive testing (NDT) to detect any possible internal defects in products without changing their shapes or functions. However, such technology has been subject to increasingly stricter regulations, and an international radiation-safety management system has been recently established. Consequently, radiation source location in NDT systems has become an essential process, given that it can prevent radiation accidents. In this study, we focused on developing a monitoring system that can detect, in real time, the position of a radioactive source in the source guide tube of a projector. We fabricated a lead iodide (PbI2) dosimeter based on the particle-in-binder method, which has a high production yield and facilitates thickness and shape adjustment. Using a gamma-ray source, we then tested the reproducibility, linearity of the dosimeter response, and the dosimeter's percentage interval distance (PID). It was found that the fabricated PbI2 dosimeter yields highly accurate, reproducible, and linear dose measurements. The PID analysis—conducted to investigate the possibility of developing a monitoring system based on the proposed dosimeter—indicated that the valid detection distance was approximately 11.3 cm. The results of this study are expected to contribute to the development of an easily usable radiation monitoring system capable of significantly reducing the risk of radiation accidents.
NASA Astrophysics Data System (ADS)
Alghoul, M. A.; Ali, Amer; Kannanaikal, F. V.; Amin, N.; Aljaafar, A. A.; Kadhim, Mohammed; Sopian, K.
2017-11-01
The aim of this study is to evaluate the variation in the techno-economic feasibility of a PV power system under different data sources of solar radiation. The HOMER simulation tool is used to predict the techno-economic feasibility parameters of a PV power system in Baghdad, Iraq, located at (33.3128° N, 44.3615° E), as a case study. Four data sources of solar radiation, different annual capacity shortage percentages (0, 2.5, 5, and 7.5%), and a wide range of daily load profiles (10-100 kWh/day) are implemented. The analyzed techno-economic feasibility parameters are the COE ($/kWh), PV array power capacity (kW), PV electrical production (kWh/year), number of batteries and battery lifetime (years). The main results of the study are as follows: (1) solar radiation from different data sources caused observable to significant variation in the values of the techno-economic feasibility parameters; careful attention must therefore be paid to ensure the use of accurate solar input data; (2) the average solar radiation across the different data sources can be recommended as a reasonable input; (3) as the size of the PV power system increases, the effect of the different solar radiation data sources increases and causes significant variation in the values of the techno-economic feasibility parameters.
NASA Astrophysics Data System (ADS)
Bonini, Lorenzo; Toscani, Giovanni; Seno, Silvio
2016-10-01
Carannante et al. (2015) proposed an original seismotectonic interpretation of the Ferrara arc in the Po Plain (Italy) based on an accurate hypocenter relocation of the 2012 Emilia earthquake sequence and on structural analyses of sub-surface data. They contend that the causative faults of the 2012 sequence do not belong to the fold-and-thrust system comprising the Ferrara Arc but are in fact located in the underlying basement. In our view this interpretation does not agree with the observations, including: 1) the structural interpretation of the seismic reflection lines, which contrasts with some of the available data, e.g. the stratigraphy inferred from deep wells; 2) the seismotectonic setting, which is based exclusively on the correlation between inferred structural features and the locations of late aftershocks; and 3) the inconsistency of the proposed seismogenic sources with the elevation changes caused by the sequence. All these points undermine Carannante et al.'s interpretation; consequently, previously proposed seismotectonic models remain valid.
The influence of visual motion on interceptive actions and perception.
Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H
2012-05-01
Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Day, Kevin; Oliva, Isabel; Krupinski, Elizabeth; Marcus, Frank
2015-01-01
Precordial ECG lead placement is difficult in obese patients with increased chest wall soft tissues due to inaccurate palpation of the intercostal spaces. We investigated whether the length of the sternum (distance between the sternal notch and xiphoid process) can accurately predict the location of the 4th intercostal space, which is the traditional location for V1 lead position. Fifty-five consecutive adult chest computed tomography examinations were reviewed for measurements. The sternal notch to right 4th intercostal space distance was 67% of the sternal notch to xiphoid process length with an overall correlation of r=0.600 (p<0.001). The above measurement may be utilized to locate the 4th intercostal space for accurate placement of the precordial electrodes in adults in whom the 4th intercostal space cannot be found by physical exam. Copyright © 2015 Elsevier Inc. All rights reserved.
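The study's ratio turns into a one-line bedside estimate; the example sternal length below is arbitrary.

```python
def v1_distance_mm(sternal_length_mm, ratio=0.67):
    """Predicted sternal-notch-to-right-4th-intercostal-space distance,
    using the study's mean ratio of 67% of the sternal length (sternal
    notch to xiphoid process)."""
    return ratio * sternal_length_mm

d = v1_distance_mm(180.0)   # an 18 cm sternum
```

Given the moderate correlation reported (r = 0.600), this estimate is a fallback for when the 4th intercostal space cannot be palpated, not a replacement for physical examination.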
MO-D-213-07: RadShield: Semi-Automated Calculation of Air Kerma Rate and Barrier Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Wu, D; Rutel, I
2015-06-15
Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing NCRP Report 147 formalism into a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow the specification, for regions and equipment, of occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances and workload distributions. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals.
Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopic rooms.
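The NCRP Report 147 workflow the abstract describes, computing the required barrier transmission at a point and then inverting a transmission model for thickness, can be sketched as follows. This is a simplified illustration, not RadShield's actual code; the Archer-model coefficients in a real design must come from NCRP 147 tables:

```python
import math

def required_transmission(P, d, K1, N, T=1.0):
    """Required barrier transmission B in the NCRP Report 147 formalism:
    P: shielding design goal (mGy/wk), d: source-to-point distance (m),
    K1: unshielded air kerma per patient at 1 m (mGy), N: patients/wk,
    T: occupancy factor of the protected area."""
    return P * d ** 2 / (K1 * N * T)

def archer_thickness(B, alpha, beta, gamma):
    """Barrier thickness x obtained by inverting the Archer model
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma);
    alpha, beta, gamma are material- and spectrum-specific fit coefficients
    (taken from NCRP 147 tables in a real design)."""
    return (1.0 / (alpha * gamma)) * math.log(
        (B ** (-gamma) + beta / alpha) / (1.0 + beta / alpha))
```

RadShield's distinguishing step is then to evaluate this at many sampled points per barrier and keep the largest required thickness.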
The North Alabama Lightning Mapping Array (LMA): A Network Overview
NASA Technical Reports Server (NTRS)
Blakeslee, R. J.; Bailey, J.; Buechler, D.; Goodman, S. J.; McCaul, E. W., Jr.; Hall, J.
2005-01-01
The North Alabama Lightning Mapping Array (LMA) is a 3-D VHF regional lightning detection system that provides on-orbit algorithm validation and instrument performance assessments for the NASA Lightning Imaging Sensor, as well as information on storm kinematics and updraft evolution that offers the potential to improve severe storm warning lead time by up to 50% and decrease the false alarm rate for non-tornado-producing storms. In support of this latter function, the LMA serves as a principal component of a severe weather test bed to infuse new science and technology into the short-term forecasting of severe and hazardous weather, principally within nearby National Weather Service forecast offices. The LMA, which became operational in November 2001, consists of VHF receivers deployed across northern Alabama and a base station located at the National Space Science and Technology Center (NSSTC), on the campus of the University of Alabama in Huntsville. The LMA system locates the sources of impulsive VHF radio signals from lightning by accurately measuring the time that the signals arrive at the different receiving stations. Each station records the magnitude and time of the peak lightning radiation signal in successive 80 ms intervals within a local unused television channel (channel 5, 76-82 MHz in our case). Typically hundreds of sources per flash can be reconstructed, which in turn produces accurate 3-dimensional lightning image maps (nominally <50 m error within 150 km range). The data are transmitted back to a base station using 2.4 GHz wireless Ethernet data links and directional parabolic grid antennas. There are four repeaters in the network topology and the links have an effective data throughput rate ranging from 600 kbit/s to 1.5 Mbit/s. This presentation provides an overview of the North Alabama network, the data processing (both real-time and post-processing) and network statistics.
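The time-of-arrival location principle described above can be illustrated with a simple grid search over candidate source positions. This is only a 2-D sketch under idealized assumptions (the operational LMA solver does a full 3-D least-squares inversion); all names are ours:

```python
import numpy as np

C = 3.0e8  # VHF propagation speed (m/s)

def tdoa_locate(stations, t_arrival, grid):
    """Grid-search source location from arrival times: choose the grid point
    whose predicted arrival-time differences (relative to station 0) best
    match the observed ones, so the unknown emission time cancels out."""
    dt_obs = t_arrival - t_arrival[0]
    best, best_err = None, np.inf
    for p in grid:
        d = np.linalg.norm(stations - p, axis=1)
        dt_pred = (d - d[0]) / C
        err = float(np.sum((dt_pred - dt_obs) ** 2))
        if err < best_err:
            best, best_err = p, err
    return best
```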
Crowd Sourcing Approach for UAS Communication Resource Demand Forecasting
NASA Technical Reports Server (NTRS)
Wargo, Chris A.; Difelici, John; Roy, Aloke; Glaneuski, Jason; Kerczewski, Robert J.
2016-01-01
Congressional attention to Unmanned Aircraft Systems (UAS) has caused the Federal Aviation Administration (FAA) to move the National Airspace System (NAS) Integration project forward, but using guidelines, practices and procedures that are yet to be fully integrated with the FAA Aviation Management System. The real drive for change in the NAS will come from both UAS operators and the government jointly seeing an accurate forecast of UAS usage demand data. This solid forecast information would truly get the attention of planners. This requires not an aggregate demand, but rather a picture of how the demand is spread across small to large UAS, how it is spread across a wide range of missions, how it is expected to evolve over time, and where, in terms of geospatial locations, the demand will appear. In 2012 the Volpe Center performed a study of the overall future demand for UAS. This was done by aggregate classes of aircraft types. However, the realistic expected demand will appear in clusters of aircraft activities grouped by similar missions on a smaller geographical footprint and then grow from those small cells. In general, there is no demand forecast that is tightly coupled to the real purpose of the mission requirements (e.g. in terms of real locations and physical structures such as wind mills to inspect, farms to survey, pipelines to patrol, etc.). Being able to present a solid basis for the demand is crucial to getting the attention of investment, government and other fiscal planners. To this end, Mosaic ATM under NASA guidance is developing a crowd-sourced demand forecast engine that can draw forecast details from commercial and government users and vendors. These forecasts will be vetted by a governance panel and then provide a sharable, accurate set of projection data. Our paper describes the project and the technical approach we are using to design and create access for users to the forecast system.
Implosion Source Development and Diego Garcia Reflections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harben, P E; Boro, C
2001-06-01
Calibration of hydroacoustic stations for nuclear explosion monitoring is important for increasing monitoring capability and confidence from newly installed stations and from existing stations. Past work at Ascension Island has shown that ship-towed airguns can be effectively used for local calibrations such as sensor location, amplitude and phase response, and T-phase coupling in the case of T-phase stations. At regional and ocean-basin distances from a station, the calibration focus is on acoustic travel time, transmission loss, bathymetric shadowing, diffraction, and reflection as recorded at a particular station. Such station calibrations will lead to an overall network calibration that seeks to maximize detection, location, and discrimination capability of events with acoustic signatures. Active-source calibration of hydroacoustic stations at regional and ocean-basin scales has not been attempted to date, but we have made significant headway addressing how such calibrations could be accomplished. We have developed an imploding sphere source that can be used instead of explosives on research and commercial vessels without restriction. The imploding sphere has been modeled using the Lawrence Livermore National Laboratory hydrodynamic code CALE and shown to agree with field data. The need for boosted energy in the monitoring band (2-100 Hz) has led us to develop a 5-sphere implosion device that was tested in the Pacific Ocean earlier this year. Boosting the energy in the monitoring band can be accomplished by a combination of increasing the implosion volume (i.e. the 5-sphere device) and imploding at shallower depths. Although active source calibrations will be necessary at particular locations and for particular objectives, the newly installed Diego Garcia station in the Indian Ocean has shown that earthquakes can be used to help understand regional blockages and the locations responsible for observed hydroacoustic reflections.
We have analyzed several events with a back-azimuth from Diego Garcia between 100 and 140 degrees. The Diego Garcia records show a pronounced reflection that correlates in travel time and back-azimuth (calculated using the waveform cross-correlation of the tri-partite array elements to determine lag time across the array) with a reflector at the Saya de Malha Bank, on the Seychelles-Mauritius Plateau. We also show that to accurately predict blockage and reflection regions, it is essential to have detailed bathymetry in relatively small but critical areas.
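The lag estimation across array elements mentioned above reduces, per station pair, to finding the peak of the waveform cross-correlation function; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def xcorr_lag(a, b):
    """Lag (in samples) of trace b relative to trace a, taken at the peak of
    their cross-correlation; positive means b is a delayed copy of a."""
    c = np.correlate(b, a, mode="full")
    return int(np.argmax(c)) - (len(a) - 1)
```

Lags measured across the array elements, divided by the sampling rate and combined with the element geometry, give the apparent back-azimuth of the arrival.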
Accuracy of telephone reference service in health sciences libraries.
Paskoff, B M
1991-01-01
Six factual queries were unobtrusively telephoned to fifty-one U.S. academic health sciences and hospital libraries. The majority of the queries (63.4%) were answered accurately. Referrals to another library or information source were made for 25.2% of the queries. Eleven answers (3.6%) were inaccurate, and no answer was provided for 7.8% of the queries. There was a correlation between the number of accurate answers provided and the presence of at least one staff member with a master's degree in library and information science. The correlation between employing a librarian certified by the Medical Library Association (MLA) and providing accurate answers was significant. The majority of referrals were to specific sources. If these "helpful referrals" are counted with accurate answers as correct responses, they total 76.8% of the answers. In a follow-up survey, five libraries stated that they did not provide accurate answers because they did not own an appropriate source. Staff-related problems were given as reasons for other than accurate answers by two of the libraries, while eight indicated that library policy prevented them from providing answers to the public. PMID:2039904
The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images
NASA Astrophysics Data System (ADS)
Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.
2001-06-01
We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.
Direction-Sensitive Hand-Held Gamma-Ray Spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, S.
2012-10-04
A novel, light-weight, hand-held gamma-ray detector with directional sensitivity is being designed. The detector uses a set of multiple rings around two cylindrical surfaces, which provides precise location of two interaction points on two concentric cylindrical planes, wherefrom the source location can be traced back by back projection and/or Compton imaging technique. The detectors are 2.0 × 2.0 mm europium-doped strontium iodide (SrI2:Eu2+) crystals, whose light output has been measured to exceed 120,000 photons/MeV, making it one of the brightest scintillators in existence. The crystal’s energy resolution, less than 3% at 662 keV, is also excellent, and the response is highly linear over a wide range of gamma-ray energies. The emission of SrI2:Eu2+ is well matched to both photo-multiplier tubes and blue-enhanced silicon photodiodes. The solid-state photomultipliers used in this design (each 2.0 × 2.0 mm) are arrays of active pixel sensors (avalanche photodiodes driven beyond their breakdown voltage in reverse bias); each pixel acts as a binary photon detector, and their summed output is an analog representation of the total photon energy, while the individual pixel accurately defines the point of interaction. A simple back-projection algorithm involving cone-surface mapping is being modeled. The back projection for an event cone is a conical surface defining the possible location of the source. The cone axis is the straight line passing through the first and second interaction points.
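The cone's half-opening angle follows from Compton kinematics applied to the two measured energy deposits; a small sketch under the usual two-site assumption that the first interaction scatters (depositing e1) and the second fully absorbs the scattered photon (e2):

```python
import math

ME_C2 = 511.0  # electron rest energy (keV)

def compton_cone_angle(e1_keV, e2_keV):
    """Half-opening angle (rad) of the back-projected Compton cone for a
    two-site event: e1 deposited at the first (scatter) site, e2 fully
    absorbed at the second, so E_incident = e1 + e2 and E_scattered = e2."""
    cos_theta = 1.0 - ME_C2 * (1.0 / e2_keV - 1.0 / (e1_keV + e2_keV))
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies are kinematically inconsistent")
    return math.acos(cos_theta)
```

The cone with this half-angle about the axis through the two interaction points is exactly the surface the back-projection algorithm maps.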
Multi-ball and one-ball geolocation and location verification
NASA Astrophysics Data System (ADS)
Nelson, D. J.; Townsend, J. L.
2017-05-01
We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track since the pulse duration is only 25 milliseconds, and the pulses may only be transmitted every six to ten seconds. Several fundamental problems are addressed, including demodulation of AIS/GMSK signals, verification of the emitter location, accurate frequency and delay estimation and identification of pulse trains from the same emitter. In particular, we present several new correlation methods, including cross-cross correlation that greatly improves correlation accuracy over conventional methods and cross-TDOA and cross-FDOA functions that make it possible to estimate time and frequency delay without the need of computing a two-dimensional cross-ambiguity surface. By isolating pulses from the same emitter and accurately tracking the received signal frequency, we are able to accurately estimate the emitter location from the received Doppler characteristics.
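The Doppler-based location idea in the last sentence can be sketched as a search minimizing the residual between measured and predicted received frequencies. This is illustrative only: the carrier value and names are our assumptions, and the paper's cross-TDOA/FDOA machinery is considerably more involved:

```python
import numpy as np

C = 3.0e8      # propagation speed (m/s)
F0 = 162.0e6   # nominal AIS-band carrier frequency (Hz) -- illustrative value

def doppler_locate(rx_pos, rx_vel, f_meas, candidates):
    """Locate a stationary emitter from the Doppler history seen by a moving
    receiver: pick the candidate position minimizing the residual between
    measured and predicted received frequencies, f = F0 * (1 + v_radial/C)."""
    best, best_err = None, np.inf
    for p in candidates:
        u = p - rx_pos                            # receiver-to-emitter vectors
        rng = np.linalg.norm(u, axis=1)
        v_rad = np.sum(rx_vel * u, axis=1) / rng  # closing speed per sample
        err = float(np.sum((F0 * (1.0 + v_rad / C) - f_meas) ** 2))
        if err < best_err:
            best, best_err = p, err
    return best
```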
Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode
2010-05-01
The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully-automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band. However, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P-arrivals from two sites, far closer than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed.
We demonstrate an example where fully-automatic locations based on a source-region specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects a frequency band and analysis window which appears optimal for each event. In this case, the frequency band which produces the most consistent direction estimates has neither the best SNR nor the greatest beam-gain, and is therefore unlikely to be chosen by an analyst without calibration data.
Simulation of the spatial frequency-dependent sensitivities of Acoustic Emission sensors
NASA Astrophysics Data System (ADS)
Boulay, N.; Lhémery, A.; Zhang, F.
2018-05-01
Typical configurations of nondestructive testing by Acoustic Emission (NDT/AE) make use of multiple sensors positioned on the tested structure for detecting evolving flaws and possibly locating them by triangulation. Sensor positions must be optimized to ensure global coverage sensitivity to AE events while minimizing their number. A simulator of NDT/AE is under development to provide help with designing testing configurations and with interpreting measurements. A global model chains sub-models that simulate the various phenomena taking place at different spatial and temporal scales (crack growth, AE source and radiation, wave propagation in the structure, reception by sensors). In this context, accurate modelling of sensor behaviour must be developed. These sensors generally consist of a cylindrical piezoelectric element of radius approximately equal to its thickness, without damping and bonded to its case. Sensors themselves are bonded to the structure being tested. Here, a multiphysics finite element simulation tool is used to study the complex behaviour of the AE sensor. The simulated behaviour is shown to accurately reproduce the high-amplitude measured contributions used in AE practice.
Least squares deconvolution for leak detection with a pseudo random binary sequence excitation
NASA Astrophysics Data System (ADS)
Nguyen, Si Tran Nguyen; Gong, Jinzhe; Lambert, Martin F.; Zecchin, Aaron C.; Simpson, Angus R.
2018-01-01
Leak detection and localisation is critical for water distribution system pipelines. This paper examines the use of the time-domain impulse response function (IRF) for leak detection and localisation in a pressurised water pipeline with a pseudo random binary sequence (PRBS) signal excitation. Compared to the conventional step wave generated using a single fast operation of a valve closure, a PRBS signal offers advantageous correlation properties, in that the signal has very low autocorrelation for lags different from zero and low cross correlation with other signals including noise and other interference. These properties result in a significant improvement in the IRF signal to noise ratio (SNR), leading to more accurate leak localisation. In this paper, the estimation of the system IRF is formulated as an optimisation problem in which the l2 norm of the IRF is minimised to suppress the impact of noise and interference sources. Both numerical and experimental data are used to verify the proposed technique. The resultant estimated IRF provides not only accurate leak location estimation, but also good sensitivity to small leak sizes due to the improved SNR.
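The l2-norm minimisation the authors describe amounts, in its simplest form, to a regularised least-squares deconvolution of the measured pressure response by the known PRBS input; a minimal sketch (not the authors' implementation, and names are ours):

```python
import numpy as np

def ls_deconvolve(x, y, n_h, lam=1e-6):
    """Least-squares estimate of the first n_h samples of an impulse
    response h from a known input x and full-convolution output
    y = x * h (+ noise), via min ||X h - y||^2 + lam * ||h||^2."""
    N = len(y)
    assert N == len(x) + n_h - 1, "y must be the full convolution length"
    X = np.zeros((N, n_h))
    for j in range(n_h):            # Toeplitz convolution matrix: X[i, j] = x[i-j]
        X[j : j + len(x), j] = x
    A = X.T @ X + lam * np.eye(n_h)
    return np.linalg.solve(A, X.T @ y)
```

A PRBS input makes X well conditioned (near-ideal autocorrelation), which is what yields the SNR advantage over a single step excitation.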
Geolocation and Pointing Accuracy Analysis for the WindSat Sensor
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.
2006-01-01
Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time-tagging and pointing-knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.
A review of second law techniques applicable to basic thermal science research
NASA Astrophysics Data System (ADS)
Drost, M. Kevin; Zamorski, Joseph R.
1988-11-01
This paper reports the results of a review of second law analysis techniques which can contribute to basic research in the thermal sciences. The review demonstrated that second law analysis has a role in basic thermal science research. Unlike traditional techniques, second law analysis accurately identifies the sources and location of thermodynamic losses. This allows the development of innovative solutions to thermal science problems by directing research to the key technical issues. Two classes of second law techniques were identified as being particularly useful. First, system and component investigations can provide information on the source and nature of irreversibilities on a macroscopic scale. This information will help to identify new research topics and will support the evaluation of current research efforts. Second, the differential approach can provide information on the causes and spatial and temporal distribution of local irreversibilities. This information enhances the understanding of fluid mechanics, thermodynamics, and heat and mass transfer, and may suggest innovative methods for reducing irreversibilities.
All-sky brightness monitoring of light pollution with astronomical methods.
Rabaza, O; Galadí-Enríquez, D; Estrella, A Espín; Dols, F Aznar
2010-06-01
This paper describes a mobile prototype and a protocol to measure light pollution based on astronomical methods. The prototype takes three all-sky images using BVR filters of the Johnson-Cousins astronomical photometric system. Stars in the images are then identified using the Hipparcos and General Catalogue of Photometric Data II astronomical catalogues and are used as calibration sources. This method permits the measurement of night-sky brightness and facilitates an estimate of which fraction is due to the light up-scattered in the atmosphere by a wide variety of man-made sources. This is achieved by our software, which compares the sky background flux to that of many stars of known brightness. The reduced weight and dimensions of the prototype allow the user to make measurements from virtually any location. This prototype is capable of measuring the sky distribution of light pollution, and also provides an accurate estimate of the background flux at each photometric band. (c) 2010 Elsevier Ltd. All rights reserved.
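The calibration step, deriving a photometric zero point from catalogue stars and applying it to the sky background, can be sketched in a few lines (function names are ours; per-band extinction and colour terms are omitted for simplicity):

```python
import math

def zero_point(star_counts, catalog_mags):
    """Photometric zero point from calibration stars identified in the image:
    ZP = m_catalog + 2.5*log10(counts), averaged over all matched stars."""
    zps = [m + 2.5 * math.log10(c) for c, m in zip(star_counts, catalog_mags)]
    return sum(zps) / len(zps)

def sky_brightness(sky_counts_per_arcsec2, zp):
    """Night-sky surface brightness (mag/arcsec^2) from the background flux."""
    return zp - 2.5 * math.log10(sky_counts_per_arcsec2)
```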
Distinct regions of the hippocampus are associated with memory for different spatial locations.
Jeye, Brittany M; MacEvoy, Sean P; Karanian, Jessica M; Slotnick, Scott D
2018-05-15
In the present functional magnetic resonance imaging (fMRI) study, we aimed to evaluate whether distinct regions of the hippocampus were associated with spatial memory for items presented in different locations of the visual field. In Experiment 1, during the study phase, participants viewed abstract shapes in the left or right visual field while maintaining central fixation. At test, old shapes were presented at fixation and participants classified each shape as previously in the "left" or "right" visual field followed by an "unsure"-"sure"-"very sure" confidence rating. Accurate spatial memory for shapes in the left visual field was isolated by contrasting accurate versus inaccurate spatial location responses. This contrast produced one hippocampal activation in which the interaction between item type and accuracy was significant. The analogous contrast for right visual field shapes did not produce activity in the hippocampus; however, the contrast of high confidence versus low confidence right-hits produced one hippocampal activation in which the interaction between item type and confidence was significant. In Experiment 2, the same paradigm was used but shapes were presented in each quadrant of the visual field during the study phase. Accurate memory for shapes in each quadrant, exclusively masked by accurate memory for shapes in the other quadrants, produced a distinct activation in the hippocampus. A multi-voxel pattern analysis (MVPA) of hippocampal activity revealed a significant correlation between behavioral spatial location accuracy and hippocampal MVPA accuracy across participants. The findings of both experiments indicate that distinct hippocampal regions are associated with memory for different visual field locations. Copyright © 2018 Elsevier B.V. All rights reserved.
Predictions of Experimentally Observed Stochastic Ground Vibrations Induced by Blasting
Kostić, Srđan; Perc, Matjaž; Vasović, Nebojša; Trajković, Slobodan
2013-01-01
In the present paper, we investigate the blast induced ground motion recorded at the limestone quarry “Suva Vrela” near Kosjerić, which is located in the western part of Serbia. We examine the recorded signals by means of surrogate data methods and a determinism test, in order to determine whether the recorded ground velocity is stochastic or deterministic in nature. Longitudinal, transversal and the vertical ground motion component are analyzed at three monitoring points that are located at different distances from the blasting source. The analysis reveals that the recordings belong to a class of stationary linear stochastic processes with Gaussian inputs, which could be distorted by a monotonic, instantaneous, time-independent nonlinear function. Low determinism factors obtained with the determinism test further confirm the stochastic nature of the recordings. Guided by the outcome of time series analysis, we propose an improved prediction model for the peak particle velocity based on a neural network. We show that, while conventional predictors fail to provide acceptable prediction accuracy, the neural network model with four main blast parameters as input, namely total charge, maximum charge per delay, distance from the blasting source to the measuring point, and hole depth, delivers significantly more accurate predictions that may be applicable on site. We also perform a sensitivity analysis, which reveals that the distance from the blasting source has the strongest influence on the final value of the peak particle velocity. This is in full agreement with previous observations and theory, thus additionally validating our methodology and main conclusions. PMID:24358140
NASA Astrophysics Data System (ADS)
Hejrani, Babak; Tkalčić, Hrvoje; Fichtner, Andreas
2017-07-01
Although both earthquake mechanism and 3-D Earth structure contribute to the seismic wavefield, the latter is usually assumed to be layered in source studies, which may limit the quality of the source estimate. To overcome this limitation, we implement a method that takes advantage of a 3-D heterogeneous Earth model, recently developed for the Australasian region. We calculate centroid moment tensors (CMTs) for earthquakes in Papua New Guinea (PNG) and the Solomon Islands. Our method is based on a library of Green's functions for each source-station pair for selected Geoscience Australia and Global Seismic Network stations in the region, and distributed on a 3-D grid covering the seismicity down to 50 km depth. For the calculation of Green's functions, we utilize a spectral-element method for the solution of the seismic wave equation. Seismic moment tensors were calculated using least squares inversion, and the 3-D location of the centroid is found by grid search. Through several synthetic tests, we confirm a trade-off between the location and the correct input moment tensor components when using a 1-D Earth model to invert synthetics produced in a 3-D heterogeneous Earth. Our CMT catalogue for PNG in comparison to the global CMT shows a meaningful increase in the double-couple percentage (up to 70%). Another significant difference that we observe is in the mechanism of events with depth shallower than 15 km and Mw < 6, which contributes to accurate tectonic interpretation of the region.
Earthquake Monitoring with the MyShake Global Smartphone Seismic Network
NASA Astrophysics Data System (ADS)
Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.
2017-12-01
Smartphone arrays have the potential for significantly improving seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off with factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone located <10 km from the epicenter exceeds 70%. Due to the sensor's self-noise, smaller magnitude events at short epicentral distances are very difficult to detect. To increase the signal-to-noise ratio, we employ array back-projection techniques on continuous data recorded by thousands of phones. In this class of methods, the array is used as a spatial filter that suppresses signals emitted from shallow noise sources. Filtered traces are stacked to further enhance seismic signals from deep sources. We benchmark our technique against traditional location algorithms using recordings from California, a region with a large MyShake user database. We find that locations derived from back-projection images of M 3 events recorded by >20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels.
To this end, we have developed an empirical noise model for the metropolitan Los-Angeles (LA) area. We find that densities larger than 100 stationary phones/km2 are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
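The shift-and-stack idea behind this kind of back-projection can be sketched in a few lines. The wave speed, station geometry, sampling, and noise level below are illustrative assumptions for a synthetic test, not the MyShake network's actual parameters:

```python
import numpy as np

def backproject(traces, dt, stations, candidates, v):
    """Shift-and-stack back-projection: for each candidate epicenter, undo the
    predicted travel time at every station and stack the traces. A true source
    produces a coherent stack; incoherent surface noise does not."""
    power = np.zeros(len(candidates))
    for i, src in enumerate(candidates):
        stack = np.zeros_like(traces[0])
        for trace, sta in zip(traces, stations):
            shift = int(round(np.linalg.norm(sta - src) / v / dt))
            stack[:len(trace) - shift] += trace[shift:]
        power[i] = np.max(stack ** 2)
    return power

# Synthetic test: nine "phones" on a grid, an impulsive source at a known point.
rng = np.random.default_rng(0)
v, dt, n = 3000.0, 0.01, 500                      # wave speed (m/s), sample interval (s)
stations = np.array([[x, y] for x in (0, 5000, 10000)
                            for y in (0, 5000, 10000)], float)
true_src = np.array([4000.0, 6000.0])
traces = []
for sta in stations:
    tr = 0.1 * rng.standard_normal(n)             # sensor self-noise
    tr[int(round(np.linalg.norm(sta - true_src) / v / dt))] += 1.0   # arrival pulse
    traces.append(tr)

candidates = np.array([[x, y] for x in range(0, 10001, 1000)
                              for y in range(0, 10001, 1000)], float)
best = candidates[np.argmax(backproject(traces, dt, stations, candidates, v))]
print(best)                                       # → [4000. 6000.]
```

Real implementations add bandpass filtering and envelope or semblance measures before stacking, but the spatial-filtering principle is the same.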
NASA Astrophysics Data System (ADS)
Walsh, Braden; Jolly, Arthur; Procter, Jonathan
2017-04-01
Using active seismic sources on Tongariro Volcano, New Zealand, the amplitude source location (ASL) method is calibrated and optimized through a series of sensitivity tests. By applying a geologic medium velocity of 1500 m/s and an attenuation value of Q=60 for surface waves along with amplification factors computed from regional earthquakes, the ASL produced location discrepancies larger than 1.0 km horizontally and up to 0.5 km in depth. Through the use of sensitivity tests on input parameters, we show that velocity and attenuation models have moderate to strong influences on the location results, but can be easily constrained. Changes in locations are accommodated through either lateral or depth movements. Station corrections (amplification factors) and station geometry strongly affect the ASL locations both laterally and in depth. Calibrating the amplification factors through the exploitation of the active seismic source events reduced location errors for the sources by up to 50%.
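As a rough illustration of how an ASL grid search works, the sketch below assumes a common surface-wave decay model, A_i = A0·exp(-πf·r_i/(Qv))/r_i^0.5, with the v = 1500 m/s and Q = 60 values quoted above; the station layout, frequency, and source are synthetic, not the Tongariro configuration:

```python
import numpy as np

def asl_locate(amps, stations, grid, f=5.0, Q=60.0, v=1500.0, n=0.5):
    """Amplitude source location by grid search.
    Assumed decay model: A_i = A0 * exp(-B*r_i) / r_i**n with B = pi*f/(Q*v).
    The unknown source amplitude A0 drops out as a per-candidate offset
    (the mean residual) in the log domain."""
    B = np.pi * f / (Q * v)
    log_a = np.log(amps)
    best, best_err = None, np.inf
    for src in grid:
        r = np.maximum(np.linalg.norm(stations - src, axis=1), 1.0)
        resid = log_a - (-n * np.log(r) - B * r)
        err = np.var(resid)            # variance is invariant to the ln(A0) offset
        if err < best_err:
            best, best_err = src, err
    return best

# Synthetic check using the v = 1500 m/s and Q = 60 values quoted above.
stations = np.array([[0, 0], [2000, 0], [0, 2000], [2000, 2000], [1000, -500]], float)
true_src = np.array([800.0, 1200.0])
r = np.linalg.norm(stations - true_src, axis=1)
B = np.pi * 5.0 / (60.0 * 1500.0)
amps = 50.0 * np.exp(-B * r) / r**0.5
grid = np.array([[x, y] for x in range(0, 2001, 100)
                        for y in range(0, 2001, 100)], float)
print(asl_locate(amps, stations, grid))   # recovers [800., 1200.]
```

The sensitivity tests in the abstract correspond to perturbing f, Q, v, and the station corrections in this model and observing how the best-fit location shifts.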
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet, rarely is the laterally heterogeneous nature of the velocity field captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection related seismicity.
Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette
Huang, Wenzhu; Zhang, Wentao; Li, Fang
2013-01-01
This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from the periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take two different types of AE source into account for location. PMID:24141266
Remote listening and passive acoustic detection in a 3-D environment
NASA Astrophysics Data System (ADS)
Barnhill, Colin
Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to not only look at the time and frequency characteristics of an audio signal but also the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations.
Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
NASA Astrophysics Data System (ADS)
Lachance, R. L.; Gordley, L. L.; Marshall, B. T.; Fisher, J.; Paxton, G.; Gubeli, J. F.
2015-12-01
Currently there is no efficient and affordable way to monitor gas releases over small to large areas. We have demonstrated the ability to accurately measure key greenhouse and pollutant gases with low-cost solar observations using the breakthrough sensor technology called the "Pupil Imaging Gas Correlation", PIGC™, which provides size and complexity reduction while providing exceptional resolution and coverage for various gas sensing applications. It is a practical implementation of the well-known Gas Filter Correlation Radiometry (GFCR) technique used for the HALOE and MOPITT satellite instruments that were flown on successful NASA missions in the early 2000s. This strong space heritage brings performance and reliability to the ground instrument design. A methane (CH4) abundance sensitivity of 0.5% or better of ambient column with uncooled microbolometers has been demonstrated with 1 second direct solar observations. These under-$10k sensors can be deployed in precisely balanced autonomous grids to monitor the flow of chosen gases, and infer their source locations. Measurable gases include CH4, 13CO2, N2O, NO, NH3, CO, H2S, HCN, HCl, HF, HDO and others. A single instrument operates in a dual operation mode, at no additional cost, for continuous (real-time 24/7) local area perimeter monitoring for the detection of leaks for safety & security needs, looking at an artificial light source (for example a simple 60 W light bulb placed 100 m away), while simultaneously allowing solar observation for quasi-continuous wide area total atmospheric column scanning (3-D) for environmental monitoring (fixed and mobile configurations). The second mode of operation continuously quantifies the concentration and flux of specific gases over different ground locations, determining the amount of targeted gas being released from the area or entering the area from outside locations, allowing better tracking of plumes and identification of sources.
This paper reviews the measurement technique, performance demonstration and grid deployment strategy.
Building pit dewatering: application of transient analytic elements.
Zaadnoordijk, Willem J
2006-01-01
Analytic elements are well suited for the design of building pit dewatering. Wells and drains can be modeled accurately by analytic elements, both nearby to determine the pumping level and at some distance to verify the targeted drawdown at the building site and to estimate the consequences in the vicinity. The ability to shift locations of wells or drains easily makes the design process very flexible. The temporary pumping has transient effects, for which transient analytic elements may be used. This is illustrated using the free, open-source, object-oriented analytic element simulator Tim(SL) for the design of a building pit dewatering near a canal. Steady calculations are complemented with transient calculations. Finally, the bandwidths of the results are estimated using linear variance analysis.
An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air
NASA Astrophysics Data System (ADS)
Papacosta, Pangratios; Linscheid, Nathan
2016-01-01
Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The location of the displacement antinodes enables the measurement of the wavelength of the sound that is being used. This paper describes a design that uses a speaker instead of the traditional aluminum rod as the sound source. This allows the use of multiple sound frequencies that yield a much more accurate speed of sound in air.
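The underlying computation is just v = fλ, with adjacent dust piles (displacement antinodes) separated by half a wavelength. A minimal sketch with made-up readings:

```python
# Speed of sound from Kundt's tube measurements: adjacent dust piles sit at
# displacement antinodes, half a wavelength apart, so lambda = 2*d and v = f*lambda.
def speed_of_sound(frequency_hz, antinode_spacing_m):
    return frequency_hz * 2.0 * antinode_spacing_m

# Illustrative (made-up) readings near room temperature:
print(speed_of_sound(1000.0, 0.1715))   # → 343.0 m/s
```

With a speaker as the source, repeating this at several frequencies and averaging the results is what yields the improved accuracy described above.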
Strain-Based Damage Determination Using Finite Element Analysis for Structural Health Management
NASA Technical Reports Server (NTRS)
Hochhalter, Jacob D.; Krishnamurthy, Thiagaraja; Aguilo, Miguel A.
2016-01-01
A damage determination method is presented that relies on in-service strain sensor measurements. The method employs a gradient-based optimization procedure combined with the finite element method for solution to the forward problem. It is demonstrated that strains, measured at a limited number of sensors, can be used to accurately determine the location, size, and orientation of damage. Numerical examples are presented to demonstrate the general procedure. This work is motivated by the need to provide structural health management systems with a real-time damage characterization. The damage cases investigated herein are characteristic of point-source damage, which can attain critical size during flight. The procedure described can be used to provide prognosis tools with the current damage configuration.
WCSTools 3.0: More Tools for Image Astrometry and Catalog Searching
NASA Astrophysics Data System (ADS)
Mink, Douglas J.
For five years, WCSTools has provided image astrometry for astronomers who need accurate positions for objects they wish to observe. Other functions have been added and improved since the package was first released. Support has been added for new catalogs, such as the GSC-ACT, 2MASS Point Source Catalog, and GSC II, as they have been published. A simple command line interface can search any supported catalog, returning information in several standard formats, whether the catalog is on a local disk or searchable over the World Wide Web. The catalog searching routine can be located on either end (or both ends!) of such a web connection, and the output from one catalog search can be used as the input to another search.
NASA Astrophysics Data System (ADS)
Myers, S. C.; Pitarka, A.; Mellors, R. J.
2016-12-01
The Source Physics Experiment (SPE) is producing new data to study the generation of seismic waves from explosive sources. Preliminary results show that far-field S-waves are generated both within the non-elastic volume surrounding explosive sources and by P- to S-wave scattering. The relative contribution of non-elastic phenomenology and elastic-wave scattering to far-field S-waves has been debated for decades, and numerical simulations based on the SPE experiments are addressing this question. The match between observed and simulated data degrades with event-station distance and with increasing time in each seismogram. This suggests that a more accurate model of subsurface elastic properties could result in better agreement between observed and simulated seismograms. A detailed model of subsurface structure has been developed using geologic maps and the extensive database of borehole logs, but uncertainty in structural details remains high. The large N instrument deployment during the SPE-5 experiment offers an opportunity to use time-reversal techniques to back project the wave field into the subsurface to locate significant sources of scattered energy. The large N deployment was nominally 1000 5-Hz sensors (500 Z and 500 3C geophones) deployed in a roughly rectangular array to the south and east of the SPE-5 shot. Sensor spacing was nominally 50 meters in the interior portion of the array and 100 meters in the outer region, with two dense lines at 25 m spacing. The array covers the major geologic boundary between the Yucca Flat basin and the granitic Climax Stock in which the SPE experiments have been conducted. Improved mapping of subsurface scatterers is expected to result in better agreement between simulated and observed seismograms and aid in our understanding of S-wave generation from explosions. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo
2018-05-01
An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces in a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors connected to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor generates a seismic wave that gets distorted as it travels away from the source. This distortion is noticeable even over relatively short traveled distances and is mainly caused by the dispersion phenomenon, among other reasons; therefore, using conventional localization/multilateration methods will produce localization error values that are highly variable and occasionally large. The proposed localization approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), these methods have only been considered for wave propagation in non-dispersive media, in addition to the limiting assumption required by these methods that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method is different from the other methods, and that it overcomes the source-sensor location coincidence limitation. Theoretical analysis and experimental data will be used to motivate and justify the pursuit of the proposed approach for localization in a dispersive medium.
Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of the algorithm. It is shown that the algorithm produces promising results providing a foundation for further future development and optimization.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Ignition Stationary RICE Located at a Major Source of HAP Emissions and Existing Spark Ignition Stationary RICE ≤ 500 HP Located at a Major Source of HAP Emissions 2c Table 2c to Subpart ZZZZ of Part 63... Stationary RICE Located at a Major Source of HAP Emissions and Existing Spark Ignition Stationary RICE ≤ 500...
NASA Astrophysics Data System (ADS)
Beaulieu, J.-P.; Batista, V.; Bennett, D. P.; Marquette, J.-B.; Blackman, J. W.; Cole, A. A.; Coutures, C.; Danielski, C.; Dominis Prester, D.; Donatowicz, J.; Fukui, A.; Koshimoto, N.; Lončarić, K.; Morales, J. C.; Sumi, T.; Suzuki, D.; Henderson, C.; Shvartzvald, Y.; Beichman, C.
2018-02-01
To obtain accurate mass measurements for cold planets discovered by microlensing, it is usually necessary to combine light curve modeling with at least two lens mass–distance relations. The physical parameters of the planetary system OGLE-2014-BLG-0124L have been constrained thanks to the accurate parallax effect measured between ground-based and simultaneous space-based Spitzer observations. Here, we resolved the source+lens star from sub-arcsecond blends in the H band using adaptive optics (AO) observations with NIRC2 mounted on the Keck II telescope. We identify additional flux, coincident with the source to within 160 mas. We estimate the potential contributions to this blended light (a chance-aligned star, an additional companion to the lens or to the source) and find that 85% of the NIR flux is due to the lens star at H L = 16.63 ± 0.06 and K L = 16.44 ± 0.06. We combined the parallax constraint and the AO constraint to derive the physical parameters of the system. The lensing system is composed of a mid-to-late G-type main sequence star of M L = 0.9 ± 0.05 M ⊙ located at D L = 3.5 ± 0.2 kpc in the Galactic disk. Taking the mass ratio and projected separation from the original study leads to a planet of M p = 0.65 ± 0.044 M Jupiter at 3.48 ± 0.22 au. Excellent parallax measurements from simultaneous ground-space observations have been obtained for the microlensing event OGLE-2014-BLG-0124, but it is only when they are combined with ∼30 minutes of Keck II AO observations that the physical parameters of the host star are well measured.
NASA Astrophysics Data System (ADS)
Zhang, C.; Li, X.; Huawu, W.; Wang, P.; Wang, Y.; WU, X.; Li, W.; Huang, Y.
2017-12-01
Understanding the responses of different plant species to changes in available water sources is critical for accurately modeling and predicting species dynamics and the effect of expected climate change on plant distribution. Our study aimed to explore whether water-use strategies differed between two coexisting shrubs (Reaumuria soongorica Maxim. and Nitraria sphaerocarpa Maxim.) in response to different amounts of summer precipitation. We conducted 3 years of field observations at three sites along a precipitation gradient from the middle to the lower reaches of the Heihe River basin (HRB), northwestern China. The stable oxygen isotope composition (δ18O) of plant xylem water, soil water, and groundwater was analyzed concurrently with ecophysiological measurements at monthly intervals during the growing seasons. The results showed that both R. soongorica and N. sphaerocarpa growing in regions with precipitation-dominated water supply exhibited distinct seasonal patterns in water source utilization. In contrast, R. soongorica at the most arid site had a consistent water-use strategy, relying primarily on groundwater regardless of the seasonality of precipitation. Water sources for coexisting R. soongorica and N. sphaerocarpa did not differ at sites where precipitation was high, but differed significantly at more arid locations. N. sphaerocarpa is more sensitive to summer precipitation than R. soongorica in terms of predawn water potential (Ψpd), stomatal conductance, and foliage δ13C. Our findings reveal that plants relying on groundwater can maintain a consistent water-use strategy, whereas plants taking up precipitation-derived water cannot. Our results demonstrated that N. sphaerocarpa, with its shallower rooting system, was more responsive to summer rainfall than R. soongorica. We also found that the difference in water source uptake between the coexisting species was more apparent at more arid locations.
Results of this work will improve our understanding of the complex interactions between species and water conditions in such dry environments. Keywords: Hydrological niche; Reaumuria soongorica; Nitraria sphaerocarpa; Water use pattern; δ18O; δ13C
A virtual photon energy fluence model for Monte Carlo dose calculation.
Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan
2003-03-01
The presented virtual energy fluence (VEF) model of the patient-independent part of the medical linear accelerator heads consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations or widths and the relative weights of each source are free parameters. Five other parameters correct for fluence variations, i.e., the horn or central depression effect. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using the measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take the off-axis softening into account. The VEF model is implemented together with geometry modules for the patient specific part of the treatment head (jaws, multileaf collimator) into the XVMC dose calculation engine. The implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments are performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water.
It is also useful for IMRT applications because a full Monte Carlo simulation of the treatment head would be too time-consuming for many small fields.
Modeling of Turbulence Generated Noise in Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2004-01-01
A numerically calculated Green's function is used to predict jet noise spectrum and its far-field directivity. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right hand side identified as the source. In this paper, contributions from the so-called self- and shear-noise source terms will be discussed. A Reynolds-averaged Navier-Stokes solution yields the required mean flow as well as time- and length scales of a noise-generating turbulent eddy. A non-compact source, with exponential temporal and spatial functions, is used to describe the turbulence velocity correlation tensors. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle as well as the angularity of sound at moderate Mach numbers, at high subsonic and supersonic acoustic Mach numbers the polar directivity of radiated sound is not entirely captured by this Green's function. Results presented for Mach 0.5 and 0.9 isothermal jets, as well as a Mach 0.8 hot jet, conclude that near the peak radiation angle a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise.
Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.
Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian
2015-09-01
Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to be extracted from single-axis accelerometer data.
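The core of such an extraction can be sketched as a per-time-step linear solve in which the centripetal term is evaluated from the angular velocity integrated over earlier steps (a finite-difference style linearization of the nonlinear term). This is an illustrative reconstruction of the general approach, not the authors' exact formulation; the sensor layout and motion below are synthetic:

```python
import numpy as np

def extract_accels(readings, pos, axes, dt, omega0=np.zeros(3)):
    """Recover linear acceleration A(t) and angular acceleration alpha(t) from
    six single-axis accelerometers on a rigid body.  Rigid-body model:
        a_i = n_i . (A + alpha x r_i + omega x (omega x r_i))
    The centripetal term uses omega integrated from previous steps, which
    linearizes the otherwise nonlinear system at each step."""
    M = np.array([np.concatenate([n, np.cross(r, n)]) for r, n in zip(pos, axes)])
    omega = np.asarray(omega0, float).copy()
    A_hist, alpha_hist = [], []
    for a in readings:                       # six sensor readings per time step
        cent = np.array([n @ np.cross(omega, np.cross(omega, r))
                         for r, n in zip(pos, axes)])
        sol = np.linalg.solve(M, a - cent)   # 6x6 linear system for [A, alpha]
        A, alpha = sol[:3], sol[3:]
        omega = omega + alpha * dt           # explicit update for the next step
        A_hist.append(A)
        alpha_hist.append(alpha)
    return np.array(A_hist), np.array(alpha_hist)

# Synthetic rigid-body motion: constant linear and angular acceleration.
pos = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0],
                [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)   # sensor positions (m)
axes = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [0, 1, 0], [0, 0, 1], [1, 0, 0]], float)  # sensing directions
dt, steps = 1e-3, 200
A_true = np.array([1.0, 2.0, -0.5])
alpha_true = np.array([2.0, -1.0, 10.0])
readings = []
for k in range(steps):
    omega = alpha_true * (k * dt)
    readings.append([n @ (A_true + np.cross(alpha_true, r)
                          + np.cross(omega, np.cross(omega, r)))
                     for r, n in zip(pos, axes)])
A_hist, alpha_hist = extract_accels(np.array(readings), pos, axes, dt)
print(A_hist[-1], alpha_hist[-1])   # recovers A_true and alpha_true
```

Because the system is solved directly, the sensor positions and orientations only need to make the 6x6 matrix invertible, which mirrors the paper's point about supporting broad accelerometer arrangements.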
IMPRESS: medical location-aware decision making during emergencies
NASA Astrophysics Data System (ADS)
Gkotsis, I.; Eftychidis, G.; Leventakis, G.; Mountzouris, M.; Diagourtas, D.; Kostaridis, A.; Hedel, R.; Olunczek, A.; Hahmann, S.
2017-09-01
Emergency situations and mass casualties involve several agencies and public authorities, which need to gather data from the incident scene and exchange geo-referenced information to provide fast and accurate first aid to the people in need. Tracking patients on their way to the hospitals can prove critical in making lifesaving decisions. An increased and continuous flow of information, combined with vital signs and the geographic location of emergency victims, can greatly reduce the response time of the medical emergency chain and improve the efficiency of disaster medicine activity. Recent advances in mobile positioning systems and telecommunications are providing the technology needed for the development of location-aware medical applications. IMPRESS is an advanced ICT platform based on appropriate technologies for developing location-aware medical response during emergencies. The system incorporates mobile and fixed components that collect field data from diverse sources, support medical location and situation-based services, and share information on the patient's transport from the field to the hospitals. In the IMPRESS platform, tracking of victims, ambulances and emergency services vehicles is integrated with medical, traffic and crisis management information into a common operational picture. The Incident Management component of the system manages operational resources together with patient tracking data that contain vital sign values and the patient's status evolution. Thus, it can prioritize emergency transport decisions based on medical and location-aware information. The solution combines positioning and information gathered and owned by various public services involved in mass casualty incidents (MCIs) or large-scale disasters. The IMPRESS solution was validated in field and tabletop exercises in cooperation with emergency services and hospitals.
NASA Astrophysics Data System (ADS)
Catchings, R.
2017-12-01
P- and S-wave propagation differ in varying materials in the Earth's crust. As a result, combined measurements of P- and S-wave data can be used to infer properties of the shallow crust, including bulk composition, fluid saturation, faulting and fracturing, seismic velocities, reflectivity, and general structures. Ratios of P- to S-wave velocities and Poisson's ratio, which can be derived from the P- and S-wave data, can be particularly diagnostic of subsurface materials and their physical state. In field studies, S-wave data can be obtained directly with S-wave sources or from surface waves associated with P-wave sources. P- and S-wave data can be processed using reflection, refraction, and surface-wave-analysis methods. With the combined data, unconsolidated sediments, consolidated sediments, and rocks can be differentiated on the basis of seismic velocities and their ratios, as can saturated versus unsaturated sediments. We summarize studies where we have used combined P- and S-wave measurements to reliably map the top of ground water, prospect for minerals, locate subsurface faults, locate basement interfaces, determine basin shapes, and measure shear-wave velocities (with calculated Vs30), and other features of the crust that are important for hazards, engineering, and exploration purposes. When compared directly, we find that body waves provide more accurate measures than surface waves.
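The Poisson's-ratio diagnostic mentioned above follows from the standard relation ν = (Vp² − 2Vs²) / (2(Vp² − Vs²)); the velocities below are illustrative values, not measurements from this study:

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities (standard relation)."""
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

# Illustrative values: saturated, unconsolidated sediments commonly show a
# high Vp/Vs ratio, pushing Poisson's ratio toward its upper limit of 0.5.
print(round(poissons_ratio(1800.0, 400.0), 3))   # → 0.474
```

Dry or consolidated materials with lower Vp/Vs give values nearer 0.25, which is why the ratio discriminates saturation state and lithology.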
Application of universal kriging for prediction pollutant using GStat R
NASA Astrophysics Data System (ADS)
Nur Falah, Annisa; Subartini, Betty; Nurani Ruchjana, Budi
2017-10-01
In the universe, air and water are natural resources of enormous importance to living beings. Air is a gas mixture contained in a layer surrounding the earth, and the components of this mixture are not always constant. Rivers, likewise, often carry chemical pollutants at concentrations exceeding acceptable limits. Much air and water pollution is caused by industrial waste, coal ash, or chemical discharges, which pollute the environment and damage human health. To address this problem, we need a method able to predict pollutant content at locations that are not observed. In geostatistics, universal kriging can be used for prediction at unobserved locations. Universal kriging is an interpolation method that incorporates a trend (drift) and is used to handle non-stationary sample data. GStat R is a program based on the open-source R software that can be used to predict pollutants at unobserved locations by universal kriging. In this research, we predicted river pollutant content using a first-order trend (drift) equation. The GStat R application in the prediction of river pollutants provides fast, accurate, and convenient computation and can serve as a recommendation tool for policy makers in the field of environment.
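The study itself uses the gstat package in R; as a language-neutral illustration of what universal kriging with a first-order drift computes, here is a small NumPy sketch that assembles and solves the kriging system directly. The covariance model and the pollutant readings are made up for the example:

```python
import numpy as np

def universal_kriging(xy, z, xy0, cov, drift):
    """Universal kriging prediction at point xy0 with drift basis functions.
    Solves the augmented system [C F; F' 0][w; mu] = [c0; f0] and returns w.z,
    where C is the data covariance matrix and F the drift design matrix."""
    n = len(xy)
    C = cov(np.linalg.norm(xy[:, None] - xy[None, :], axis=2))
    F = drift(xy)                                 # n x p drift basis
    c0 = cov(np.linalg.norm(xy - xy0, axis=1))
    f0 = drift(xy0[None, :])[0]
    p = F.shape[1]
    A = np.block([[C, F], [F.T, np.zeros((p, p))]])
    b = np.concatenate([c0, f0])
    w = np.linalg.solve(A, b)[:n]                 # kriging weights
    return w @ z

cov = lambda h: np.exp(-h / 200.0)                # assumed covariance model
drift = lambda xy: np.column_stack([np.ones(len(xy)), xy])   # 1, x, y: first order

# Hypothetical pollutant readings along a river reach (coordinates in m):
xy = np.array([[0, 0], [100, 50], [200, 0], [300, 80], [150, 120]], float)
z = np.array([1.0, 1.4, 1.9, 2.6, 1.8])           # concentration, mg/L
print(universal_kriging(xy, z, np.array([180.0, 60.0]), cov, drift))
```

Kriging is an exact interpolator: predicting at any sampled point returns the observed value there, and the drift terms let the mean vary linearly across the domain rather than assuming stationarity.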
Deep-level stereoscopic multiple traps of acoustic vortices
NASA Astrophysics Data System (ADS)
Li, Yuzhi; Guo, Gepu; Ma, Qingyu; Tu, Juan; Zhang, Dong
2017-04-01
Based on the radiation pattern of a planar piston transducer, mechanisms for generating axially controllable deep-level stereoscopic multiple traps of acoustic vortices (AVs) using sparse directional sources were proposed with explicit formulae. Numerical simulations of the axial and cross-sectional distributions of acoustic pressure and phase were conducted for various ka values (the product of the wave number and the transducer radius) at a frequency of 1 MHz. It was demonstrated that, for larger ka, besides the main AV (M-AV) generated by the main lobes of the sources, cone-shaped side AVs (S-AVs) produced by the side lobes appeared closer to the source plane at a relatively lower pressure. Corresponding to the radiation angles of the pressure nulls between the main lobe and the side lobes of the sources, vortex valleys with nearly zero pressure could be generated on the central axis to form multiple traps, based on Gor'kov potential theory. The number and locations of the vortex valleys could be controlled accurately by adjusting ka. With the established eight-source AV generation system, the existence of the axially controllable multiple traps was verified by the measured M-AV and S-AVs as well as the corresponding vortex valleys. These favorable results demonstrate the feasibility of deep-level stereoscopic control of AVs and suggest potential applications of multiple traps for particle manipulation in biomedical engineering.
Reflection and refraction seismic on the great Ancona landslide
NASA Astrophysics Data System (ADS)
Stucchi, E.; Mazzotti, A.
2003-04-01
The Adriatic coast of Italy is characterised by the occurrence of several landslide bodies, some of huge extent. Here we present the results of seismic refraction and reflection studies recently carried out on the Ancona Landslide, which is located immediately westward of the harbour city of Ancona and affects an area of about 3.5 km^2 with a landslide front of 2 km. The acquired seismic profile crosses the entire landslide body and was performed employing land and marine sources and receivers, thus allowing the simultaneous acquisition of marine-marine, marine-land, land-marine and land-land data. The most significant acquisition parameters are: nominal maximum source-receiver offset 600 m, receiver group interval 5 m, single airgun and small explosive charges as energy sources, profile length 1.5 km, average reflection coverage 4000% on land and 20000% at sea. Notwithstanding the significant noise contamination due to intense human activity (road, naval and railway traffic) in the area, the data show good first breaks and reflections, which we used for refraction and reflection processing. The refraction study makes use of GRM and other techniques (Lawton); it leads to a good definition of the shallower landslide bodies but is not able to depict the deeper decollement surface. It is also very useful in providing a detailed near-surface velocity model that is crucial for the determination of accurate static corrections for the reflection data. High-quality subsurface images are achieved by applying different processing sequences to the different sets (marine, land or land-marine) of reflection seismic data. The processing steps that proved most effective were noise removal by means of FX and SVD filtering, attenuation of the bubble effect in the marine-source data, ground-roll attenuation and the computation of accurate statics.
The outcomes of the refraction and reflection investigations are very useful in delineating the geometry of the huge landslide body, its maximum depth and the location, close to the sea shore, of the landslide foot. Moreover, together with other kinds of data (a grid of high-resolution marine seismic lines acquired 200 m offshore, and several marine and land lines acquired by ENI-AGIP for hydrocarbon exploration), these results clearly reveal the general structural setting of the area, which likely plays a role in the landslide dynamics. Ongoing work includes the estimation of an optimal velocity model by means of refraction/reflection tomography and pre- and post-stack depth migration.
NASA Astrophysics Data System (ADS)
Han, Young-Ji; Holsen, Thomas M.; Hopke, Philip K.
Ambient total gaseous mercury (TGM) concentrations were measured at three locations in New York State (Potsdam, Stockton, and Sterling) from May 2000 to March 2005. Using these data, three hybrid receptor models incorporating backward trajectories were used to identify source areas for TGM. The models used were potential source contribution function (PSCF), residence time weighted concentration (RTWC), and simplified quantitative transport bias analysis (SQTBA). Each model was applied using multi-site measurements to resolve the locations of important mercury sources for New York State. PSCF results showed that southeastern New York, Ohio, Indiana, Tennessee, Louisiana, and Virginia were important TGM source areas for these sites. RTWC identified Canadian sources, including the metal production facilities in Ontario and Quebec, but US regional sources including the Ohio River Valley were also resolved. Sources in southeastern New York, Massachusetts, western Pennsylvania, Indiana, and northern Illinois were identified as significant by SQTBA. The three modeling results were combined to locate the most important probable source areas, which lie in Ohio, Indiana, Illinois, and Wisconsin. The Atlantic Ocean was suggested as a possible source as well.
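Of the three hybrid receptor models above, PSCF is the simplest to sketch: each grid cell's value is the fraction of back-trajectory endpoints falling in that cell that belong to high-concentration samples. A minimal sketch (the grid layout, data structures and criterion choice are illustrative; a 75th-percentile criterion is a common choice):

```python
import numpy as np

def pscf(endpoints, conc, lat_edges, lon_edges, criterion):
    """Potential source contribution function PSCF_ij = m_ij / n_ij.
    endpoints: one (k, 2) array of back-trajectory endpoints (lat, lon)
    per sample; conc: the per-sample measured concentrations."""
    shape = (len(lat_edges) - 1, len(lon_edges) - 1)
    n_all = np.zeros(shape)   # n_ij: all endpoints in cell (i, j)
    m_high = np.zeros(shape)  # m_ij: endpoints from high-concentration samples
    for pts, c in zip(endpoints, conc):
        h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                                 bins=[lat_edges, lon_edges])
        n_all += h
        if c > criterion:
            m_high += h
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n_all > 0, m_high / n_all, np.nan)
```

Cells never crossed by a trajectory are left undefined (NaN); in practice a weighting function is also applied to down-weight cells with few endpoints.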
NASA Astrophysics Data System (ADS)
Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano; Loreto, Maria Filomena; Facchin, Lorenzo
2014-05-01
The earthquake that hit Calabria, in southern Italy, on 8 September 1905 was the second-largest Italian earthquake by magnitude in the last century. It destroyed many villages along the coast of the Gulf of Sant'Eufemia, caused more than 500 fatalities, and also generated a tsunami with non-destructive effects. Historical reports tell us that the tsunami caused major damage in the villages of Briatico, Bivona, Pizzo and Vibo Marina, located in the southern part of the Gulf of Sant'Eufemia, and minor damage in Tropea and in Scalea, the latter a village located about 100 km from the epicenter. Other reports include accounts of fishermen at sea during the tsunami. Further, the tsunami is visible on tide-gauge records in Messina, Sicily, in Naples, and in Civitavecchia, a harbour located to the north of Rome (Platania, 1907). In spite of the attention devoted by researchers to this case, the structure that generated the earthquake, as for other tsunamigenic Italian earthquakes, has still not been identified, and the debate remains open. In this context, tsunami simulations can provide contributions useful for finding the source model most consistent with the observational data. This approach was already followed by Piatanesi and Tinti (2002), who carried out numerical simulations of tsunamis from a number of local sources. In the last decade, studies of this seismogenic area were intensified, resulting in new estimates of the 1905 earthquake magnitude (7.1 according to the CPTI11 catalogue) and in the suggestion of new source models. Using an improved tsunami simulation model and more accurate bathymetry data, this work tests the source models investigated by Piatanesi and Tinti (2002), together with the new fault models proposed by Cucci and Tertulliani (2010) and by Loreto et al. (2013).
The tsunami simulations are calculated by means of the code UBO-TSUFD, which solves the Navier-Stokes equations in the shallow-water approximation with a finite-difference technique, while the initial conditions are calculated via Okada's formula. The key result used to test the models against the data is the maximum tsunami height calculated close to the shore at a minimum depth of 5 m, corrected using the values of the initial coseismic deformation field.
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance that the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method depends critically upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy, and thus the benefit, of the method. This paper introduces an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating these data into a fast reverse ray tracing integration method yields fast, accurate results for a wide variety of illumination problems.
Microseismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-07-01
At the heart of microseismic event measurement is the task of estimating the locations of microseismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source location methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
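The convolution idea behind this source-independence can be shown in a few lines: convolving each observed trace with a synthetic reference trace, and each synthetic trace with the observed reference trace, cancels the unknown source wavelet whenever the modelled Green's functions are correct. A toy sketch (all traces are random stand-ins, not the paper's data or its actual objective function):

```python
import numpy as np

def conv_objective(d, p, ref_obs, ref_syn):
    """Source-independent misfit: compare the observed trace convolved with
    a synthetic reference trace against the synthetic trace convolved with
    the observed reference trace."""
    return 0.5 * np.sum((np.convolve(d, ref_syn) - np.convolve(p, ref_obs)) ** 2)

# Toy demo: Green's functions g, g_ref; true source s_obs; guessed source
# s_syn. Because convolution commutes, (g*s_obs)*(g_ref*s_syn) equals
# (g*s_syn)*(g_ref*s_obs), so the misfit vanishes for correct Green's
# functions regardless of either source wavelet.
rng = np.random.default_rng(0)
g, g_ref = rng.standard_normal(32), rng.standard_normal(32)
s_obs, s_syn = rng.standard_normal(8), rng.standard_normal(8)
d, d_ref = np.convolve(g, s_obs), np.convolve(g_ref, s_obs)
p, p_ref = np.convolve(g, s_syn), np.convolve(g_ref, s_syn)
misfit = conv_objective(d, p, d_ref, p_ref)  # ~0: the source cancels out
```

With a wrong velocity model the Green's functions differ and the misfit is nonzero, which is what drives the velocity and source-image updates.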
Improved Overpressure Recording and Modeling for Near-Surface Explosion Forensics
NASA Astrophysics Data System (ADS)
Kim, K.; Schnurr, J.; Garces, M. A.; Rodgers, A. J.
2017-12-01
The accurate recording and analysis of air-blast acoustic waveforms is a key component of the forensic analysis of explosive events. Smartphone apps can complement traditional technologies by providing scalable, cost-effective, ubiquitous sensor solutions for monitoring blasts, undeclared activities, and inaccessible facilities. During a series of near-surface chemical high-explosive tests, iPhone 6 devices running the RedVox infrasound recorder app were co-located with high-fidelity Hyperion overpressure sensors, allowing for direct comparison of the resolution and frequency content of the devices. Data from the traditional sensors are used to characterize blast signatures and to determine relative iPhone microphone amplitude and phase responses. A Wiener-filter-based source deconvolution method is applied, using a parameterized source function estimated from traditional overpressure sensor data, to estimate system responses. In addition, progress on a new parameterized air-blast model is presented. The model is based on the analysis of a large set of overpressure waveforms from several surface-explosion test series. An appropriate functional form, with parameters determined empirically from modern air-blast and acoustic data, will allow for better parameterization of signals and improved characterization of explosive sources.
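A Wiener-filter deconvolution of the kind mentioned above can be sketched in the frequency domain. The signal shapes and the regularization constant below are illustrative assumptions, not the study's parameterized source function:

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Frequency-domain Wiener deconvolution: given a recording y and an
    assumed source/input h, estimate the response x with
    X = Y conj(H) / (|H|^2 + 1/snr). The snr regularizer is illustrative."""
    n = len(y)
    Y, H = np.fft.rfft(y, n), np.fft.rfft(h, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(X, n)

# Toy demo: a spiky response convolved with a short assumed source pulse,
# then recovered by the Wiener filter (noise-free, so snr can be large).
r = np.zeros(50)
r[5], r[20] = 1.0, -0.5            # hypothetical system response
s = np.array([1.0, 0.5, 0.25])     # hypothetical source pulse
y = np.convolve(r, s)              # the "recorded" waveform
r_est = wiener_deconvolve(y, s, snr=1e6)[:50]
```

In the noisy real-data case the 1/snr term (or a frequency-dependent noise spectrum) keeps the filter from amplifying bands where the source has little energy.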
The Rupture Characteristic of 1999 Izmit Sequence Using IRIS Data
NASA Astrophysics Data System (ADS)
Konca, A. O.; Helmberger, D. V.; Ji, C.; Tan, Y.
2003-12-01
Standard source studies use teleseismic data (30° to 90°) to analyze earthquakes, so only a limited portion of the focal sphere is involved in source determinations. Furthermore, the locations and origin times of events remain incompatible with local determinations. Here, we attempt to resolve such issues by using IRIS data at all distances, leading to more accurate and detailed rupture properties and accurate relative locations. The 1999 Izmit earthquake sequence is chosen to test our method. The challenge of using data outside the conventional teleseismic distance range is that the arrival times and waveforms are affected more strongly by Earth structure. We overcome this difficulty by calibrating the path effects for the mainshock using the simpler aftershocks, so it is crucial to determine the aftershock source parameters accurately. We constructed a Green's function library from a regionalized 1-D model and performed a grid search to establish the depth and fault parameters based on waveform matching of the Pnl waves between the synthetics and data, allowing the synthetics at each station to shift separately to account for the path effect. Our results show that the earthquake depth was around 7 km, rather than the 19 km reported by the local observatory (Kandilli) or the 15 km of Harvard's CMT solution. The best focal mechanism has a strike of 263°, a dip of 65°, and a rake of 180°, which is very close to Harvard's CMT solution. The waveform fits of this aftershock are then used as a criterion to select useful source-station paths. A path with a cross-correlation value above 90% between data and synthetics is defined as a "good path" and can be used for studying the Izmit and Duzce earthquakes. We find that the stations in Central Europe and some of the Greek Islands are "good paths", while the stations in Northeast Africa and Italy cannot be used.
The time shifts that give the best cross-correlation values are used to calibrate the picks for the Izmit and Duzce events. We regard this as a highly objective way to pick arrival times; indeed, our preliminary inversions using teleseismic data for the Duzce and Izmit events show that hand-picked P and S arrival times at the same station from two very close events are not always well correlated. How the arrival times are picked governs the inferred rupture pattern and rupture velocity, so our methodology brings a more objective approach to picking travel times. Finally, we will invert for the source histories of the Duzce and Izmit earthquakes using the regional data and compare them with the inversion results obtained from teleseismic data. Moreover, predictions of the teleseismic data based on the regional-phase inversion solution will be presented.
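The "good path" criterion (normalized cross-correlation above 90%) and the calibrating time shift can be computed together; a minimal sketch, with a synthetic sinusoidal trace standing in for real waveforms:

```python
import numpy as np

def max_norm_xcorr(data, synth):
    """Peak normalized cross-correlation over all lags and the lag (in
    samples) at which it occurs; positive lag means `data` is delayed
    relative to `synth`."""
    d = data - data.mean()
    s = synth - synth.mean()
    c = np.correlate(d, s, mode="full") / (np.std(data) * np.std(synth) * len(data))
    k = int(np.argmax(c))
    return c[k], k - (len(synth) - 1)

# Toy check: a trace delayed by 5 samples still qualifies as a "good path"
# (correlation above 0.9), and the lag recovers the path-calibration shift.
t = np.arange(200)
synth = np.sin(2 * np.pi * 3 * t / 200)
value, shift = max_norm_xcorr(np.roll(synth, 5), synth)  # value > 0.9, shift = 5
```

The recovered shift is exactly the per-station static that the study applies before inverting for the rupture history.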
Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.
2010-01-01
Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor, whose locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF. © 2010 The Authors, Geophysical Journal International © 2010 RAS.
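The envelope transform underlying this style of tremor location can be sketched with an FFT-based analytic signal; cross-correlating envelopes rather than raw waveforms makes the lag estimate robust to the incoherent phase of tremor. The burst parameters in the test are illustrative:

```python
import numpy as np

def envelope(x):
    """Signal envelope via the analytic signal (an FFT implementation of
    the Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def envelope_delay(x1, x2):
    """Lag (in samples) between two records, estimated by cross-correlating
    their demeaned envelopes; positive lag means x1 arrives later."""
    e1 = envelope(x1) - envelope(x1).mean()
    e2 = envelope(x2) - envelope(x2).mean()
    c = np.correlate(e1, e2, mode="full")
    return int(np.argmax(c)) - (len(x2) - 1)
```

Even when the two records carry carriers of different phase, the envelope peaks align, which is why envelope correlation works for emergent, non-impulsive tremor signals.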
Localization of optic disc and fovea in retinal images using intensity based line scanning analysis.
Kamble, Ravi; Kokare, Manesh; Deshmukh, Girish; Hussin, Fawnizu Azmadi; Mériaudeau, Fabrice
2017-08-01
Accurate detection of diabetic retinopathy (DR) depends mainly on the identification of retinal landmarks such as the optic disc and fovea. Existing methods suffer from limited accuracy and high computational complexity. To address this issue, this paper presents a novel approach for fast and accurate localization of the optic disc (OD) and fovea using one-dimensional scanned intensity profile analysis. The proposed method effectively utilizes both time- and frequency-domain information for OD localization. The final OD center is located using signal peak-valley detection in the time domain and discontinuity detection in the frequency-domain analysis. Then, with the help of the detected OD location, the fovea center is located using signal valley analysis. Experiments were conducted on the MESSIDOR dataset, where the OD was successfully located in 1197 out of 1200 images (99.75%) and the fovea in 1196 out of 1200 images (99.66%), with an average computation time of 0.52 s. A large-scale evaluation was also carried out on nine publicly available databases. The proposed method is highly efficient in quickly and accurately localizing the OD and fovea together, compared with other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
France, Logan K; Vermillion, Meghan S; Garrett, Caroline M
2018-01-01
Blood pressure is a critical parameter for evaluating cardiovascular health, assessing effects of drugs and procedures, monitoring physiologic status during anesthesia, and making clinical decisions. The placement of an arterial catheter is the most direct and accurate method for measuring blood pressure; however, this approach is invasive and of limited use during brief sedated examinations. The objective of this study was to determine which method of indirect blood pressure monitoring was most accurate compared with measurement by direct arterial catheterization. In addition, we sought to determine the relative accuracy of each indirect method (compared with direct arterial measurement) at a given body location and to assess whether the accuracy of each indirect method was dependent on body location. We compared direct blood pressure measurements by means of catheterization of the saphenous artery with oscillometric and ultrasonic Doppler flow detection measurements at 3 body locations (forearm, distal leg, and tail base) in 16 anesthetized, male rhesus macaques. The results indicate that oscillometry at the forearm is the best indirect method and location for accurately and consistently measuring blood pressure in healthy male rhesus macaques.
Poynting-vector based method for determining the bearing and location of electromagnetic sources
Simons, David J.; Carrigan, Charles R.; Harben, Philip E.; Kirkendall, Barry A.; Schultz, Craig A.
2008-10-21
A method and apparatus is utilized to determine the bearing and/or location of sources, such as, alternating current (A.C.) generators and loads, power lines, transformers and/or radio-frequency (RF) transmitters, emitting electromagnetic-wave energy for which a Poynting-Vector can be defined. When both a source and field sensors (electric and magnetic) are static, a bearing to the electromagnetic source can be obtained. If a single set of electric (E) and magnetic (B) sensors are in motion, multiple measurements permit location of the source. The method can be extended to networks of sensors allowing determination of the location of both stationary and moving sources.
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments, listeners were either rotated or stationary, while sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with the general hypothesis that the world-centric location of sound sources requires the auditory system to have information about the auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone; it is a multisystem process.
Invariance of wearing location of Omron-BI pedometers: a validation study.
Zhu, Weimo; Lee, Miyoung
2010-11-01
The purpose of this study was to investigate validity and reliability evidence for the Omron BI pedometer, which is designed to count steps accurately even when worn at different locations on the body. Forty adults (20 male, 20 female) were recruited to walk wearing 5 sets of 10 BI pedometers, 1 set at a time, with 1 pedometer at each of 10 different body locations. For comparison, they also wore 2 Yamax Digi-Walker SW-200 pedometers and a Dynastream AMP 331 activity monitor. The subjects walked in 3 free-living conditions: a flat sidewalk, stairs, and mixed conditions. Except for a slight decrease in accuracy at the pant-pocket locations, the Omron BI pedometers counted steps accurately across the other locations when subjects walked on the flat sidewalk, and the performance was consistent across devices and trials. When the subjects climbed stairs, however, the absolute error percentage at the pant-pocket locations increased significantly (P < .05), and similar or higher error rates were found for the AMP 331 and SW-200s. The Omron BI pedometer can accurately count steps when worn at various locations on the body in free-living conditions, except for the front pant-pocket locations, especially when climbing stairs.
Are pain location and physical examinations useful in locating a tear site of the rotator cuff?
Itoi, Eiji; Minagawa, Hiroshi; Yamamoto, Nobuyuki; Seki, Nobutoshi; Abe, Hidekazu
2006-02-01
Pain is the most common symptom of patients with rotator cuff tendinopathy, but little is known about the relationship between the site of pain and the site of cuff pathologic lesions. Also, accuracies of physical examinations used to locate a tear by assessing the muscle strength seem to be affected by the threshold for muscle weakness, but no studies have been reported regarding the efficacies of physical examinations in reference to their threshold. Pain location is useful in locating a tear site. Efficacies of physical examinations to evaluate the function of the cuff muscles depend on the threshold for muscle weakness. Case series; Level of evidence, 4. The authors retrospectively reviewed the clinical charts of 160 shoulders of 149 patients (mean age, 53 years) with either rotator cuff tears (140 shoulders) or cuff tendinitis (20 shoulders). The location of pain was recorded on a standardized form with 6 different areas. The diagnostic accuracies of the following tests were assessed with various thresholds for muscle weakness: supraspinatus test, the external rotation strength test, and the lift-off test. Lateral and anterior portions of the shoulder were the most common sites of pain regardless of existence of tear or tear location. The supraspinatus test was most accurate when it was assessed to have positive results with the muscle strength less than manual muscle testing grade 5, whereas the lift-off test was most accurate with a threshold less than grade 3. The external rotation strength test was most accurate with a threshold of less than grade 4+. The authors conclude that pain location is not useful in locating the site of a tear, whereas the physical examinations aiming to locate the tear site are clinically useful when assessed to have positive results with appropriate threshold for muscle weakness.
Micro-seismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurement is the task of estimating the locations of micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source location methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
TH-CD-207B-03: How to Quantify Temporal Resolution in X-Ray MDCT Imaging?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budde, A; GE Healthcare Technologies, Madison, WI; Li, Y
Purpose: In modern CT scanners, a quantitative metric to assess temporal response, namely, to quantify the temporal resolution (TR), remains elusive. Rough surrogate metrics, such as half of the gantry rotation time for single source CT, a quarter of the gantry rotation time for dual source CT, or measurements of motion artifact's size, shape, or intensity have previously been used. In this work, a rigorous framework which quantifies TR and a practical measurement method are developed. Methods: A motion phantom was simulated which consisted of a single rod that is in motion except during a static period at the temporal center of the scan, termed the TR window. If the image of the motion scan has negligible motion artifacts compared to an image from a totally static scan, then the system has a TR no worse than the TR window used. By repeating this comparison with varying TR windows, the TR of the system can be accurately determined. Motion artifacts were also visually assessed and the TR was measured across varying rod motion speeds, directions, and locations. Noiseless fan beam acquisitions were simulated and images were reconstructed with a short-scan image reconstruction algorithm. Results: The size, shape, and intensity of motion artifacts varied when the rod speed, direction, or location changed. TR measured using the proposed method, however, was consistent across rod speeds, directions, and locations. Conclusion: Since motion artifacts vary depending upon the motion speed, direction, and location, they are not suitable for measuring TR. In this work, a CT system with a specified TR is defined as having the ability to produce a static image with negligible motion artifacts, no matter what motion occurs outside of a static window of width TR. This framework allows for practical measurement of temporal resolution in clinical CT imaging systems. Funding support: GE Healthcare; Conflict of Interest: Employee, GE Healthcare.
Controlled-source seismic interferometry with one-way wave fields
NASA Astrophysics Data System (ADS)
van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.
2008-12-01
In Seismic Interferometry we generally cross-correlate registrations at two receiver locations and sum over an array of sources to retrieve a Green's function as if one of the receiver locations hosts a (virtual) source and the other receiver location hosts an actual receiver. One application of this concept is to redatum an area of surface sources to a downhole receiver location, without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden, which is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source Seismic Interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves, with receivers sensing only down- or upgoing P- or S-waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. Required is the deployment of multi-component sources at the surface and multi-component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both source and receiver side.
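The basic operation described above, cross-correlating the recordings at two receivers and summing over sources, can be illustrated with a 1-D acoustic toy model. The geometry, velocity and sampling are invented for the sketch, and the paper's elastic wavefield decomposition is not modelled:

```python
import numpy as np

def virtual_source_lag(src_x, xA, xB, c=2000.0, dt=0.001, nt=600):
    """1-D toy of controlled-source interferometry: each source emits a
    spike whose direct arrival is recorded at receivers xA and xB; the
    cross-correlations, stacked over all sources, peak at the traveltime
    between the receivers, i.e. the virtual-source response at B for a
    source at A. All parameters are illustrative."""
    stack = np.zeros(2 * nt - 1)
    for xs in src_x:
        uA = np.zeros(nt)
        uB = np.zeros(nt)
        uA[int(round(abs(xA - xs) / c / dt))] = 1.0  # direct arrival at A
        uB[int(round(abs(xB - xs) / c / dt))] = 1.0  # direct arrival at B
        stack += np.correlate(uB, uA, mode="full")   # correlate B with A
    return (int(np.argmax(stack)) - (nt - 1)) * dt   # lag of the stacked peak

# Sources spread along the surface on one side of both receivers:
lag = virtual_source_lag(src_x=np.arange(0.0, 200.0, 10.0), xA=400.0, xB=500.0)
# lag = (500 - 400) / 2000 = 0.05 s
```

No knowledge of the medium between sources and receivers enters the computation, which is the essence of the Virtual Source method mentioned in the abstract.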
Tracking Honey Bees Using LIDAR (Light Detection and Ranging) Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
BENDER, SUSAN FAE ANN; RODACY, PHILIP J.; SCHMITT, RANDAL L.
The Defense Advanced Research Projects Agency (DARPA) has recognized that biological and chemical toxins are a real and growing threat to troops, civilians, and the ecosystem. The Explosives Components Facility at Sandia National Laboratories (SNL) has been working with the University of Montana, the Southwest Research Institute, and other agencies to evaluate the feasibility of directing honeybees to specific targets, and for environmental sampling of biological and chemical "agents of harm". Recent work has focused on finding and locating buried landmines and unexploded ordnance (UXO). Tests have demonstrated that honeybees can be trained to efficiently and accurately locate explosive signatures in the environment. However, it is difficult to visually track the bees and determine precisely where the targets are located. Video equipment is not practical due to its limited resolution and range. In addition, it is often unsafe to install such equipment in a field. A technology is needed to provide investigators with the standoff capability to track bees and accurately map the location of the suspected targets. This report documents Light Detection and Ranging (LIDAR) tests that were performed by SNL. These tests have shown that a LIDAR system can be used to track honeybees. The LIDAR system can provide both the range and coordinates of the target so that the location of buried munitions can be accurately mapped for subsequent removal.
A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications
Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser
2017-01-01
In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications, based on a cascaded Particle/Kalman filter framework, is proposed. Two-dimensional indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values, is integrated to estimate positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are used to correct the IMU-derived navigation solution through an extended Kalman filter (EKF). Moreover, the map-aiding and map-matching methods are combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). The different sources of information are incorporated through a cascaded EKF/AVPF filter structure. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS), and provide a stable, continuous, and reliable indoor location service. PMID:28574471
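The drift-correction idea behind the EKF stage can be illustrated with a minimal one-dimensional Kalman filter, in which an absolute (Wi-Fi-style) position fix repeatedly pulls a drifting dead-reckoned estimate back toward the truth. All dynamics and noise values below are invented for illustration and have nothing to do with the actual system:

```python
import numpy as np

rng = np.random.default_rng(1)
x_est, p_est = 0.0, 1.0          # position estimate and its variance
q, r = 0.05, 4.0                 # process and measurement noise variances

truth = 0.0
errs_corr = []
for _ in range(200):
    truth += 1.0                              # true motion: 1 m per step
    x_est += 1.0 + rng.normal(0.0, 0.3)       # noisy dead-reckoned step (IMU-like)
    p_est += q                                # predicted variance grows
    z = truth + rng.normal(0.0, np.sqrt(r))   # absolute position fix (Wi-Fi-like)
    k = p_est / (p_est + r)                   # Kalman gain
    x_est += k * (z - x_est)                  # measurement update
    p_est *= (1.0 - k)
    errs_corr.append(abs(x_est - truth))

mean_err = float(np.mean(errs_corr))          # stays bounded instead of drifting
```

Without the update step the dead-reckoned error would grow as a random walk; the correction keeps it bounded, which is the point of fusing Wi-Fi with INS.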
Evaluating the Reverse Time Migration Method on the dense Lapnet / Polenet seismic array in Europe
NASA Astrophysics Data System (ADS)
Dupont, Aurélien; Le Pichon, Alexis
2013-04-01
In this study, results obtained with the reverse time migration method are used as a benchmark to evaluate our implementation of the method of Walker et al. (2010, 2011). Explosion signals recorded by the USArray and extracted from the TAIRED catalogue (TA Infrasound Reference Event Database user community / Vernon et al., 2012) are investigated. The first is an explosion at Camp Minden, Louisiana (2012-10-16 04:25:00 UTC) and the second is a natural gas explosion near Price, Utah (2012-11-20 15:20:00 UTC). We compare our results to automatic solutions (www.iris.edu/spud/infrasoundevent); the good agreement between both solutions validates our detection method. We then analyse data from the Lapnet / Polenet dense seismic network (Kozlovskaya et al., 2008), presenting the detection and location, in two-dimensional space and time, of infrasound events presumably due to acoustic-to-seismic coupling over the 2007-2009 period in Europe. The aim of this work is to integrate near-real-time network performance predictions at regional scales to improve automatic detection of infrasonic sources. The use of dense seismic networks provides a valuable tool for monitoring infrasonic phenomena, since seismic locations have recently proved to be more accurate than infrasound locations due to the large number of seismic sensors.
Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J
2004-03-01
Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m), deflections that were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and the ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can thus be used to verify parametric source modelling results.
Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
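The Tikhonov-regularized L2-MNE has a compact closed form, j = Lᵀ(LLᵀ + λI)⁻¹b for lead field L and measurement vector b. A toy numerical sketch follows, using a random lead field and invented dimensions rather than the study's spherical head model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 30, 200
L = rng.standard_normal((n_sensors, n_sources))   # toy lead field matrix
j_true = np.zeros(n_sources)
j_true[50] = 1.0                                  # a single focal source
b = L @ j_true                                    # noiseless sensor data

lam = 1e-3                                        # Tikhonov regularization
gram = L @ L.T + lam * np.eye(n_sensors)
j_mne = L.T @ np.linalg.solve(gram, b)            # minimum-norm estimate

peak = int(np.argmax(np.abs(j_mne)))              # location of max current density
```

Even in this underdetermined setting (200 unknowns, 30 sensors) the maximum of the estimated current density falls on the true source index, which is the localization criterion the study uses.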
NASA Astrophysics Data System (ADS)
Kim, G.; Che, I. Y.
2017-12-01
We evaluated relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the locations are tiny on a regional scale. These small location differences validate a linear-model assumption. We estimated source spectral ratios by excluding path effects, based on spectral ratios of the observed seismograms. We then estimated empirical relationships among depths of burial and yields based on theoretical source models.
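The path-cancellation step behind such source spectral ratios can be sketched generically: for two co-located events recorded at the same station, the shared path/site term divides out of the spectral ratio, leaving only the source ratio. Everything below is synthetic (an invented attenuation curve and omega-square-like source shapes), not the study's data:

```python
import numpy as np

f = np.fft.rfftfreq(1024, 0.01)              # frequency axis for 100 Hz sampling
path = np.exp(-0.5 * f)                      # shared path/site attenuation (invented)

def source_spectrum(corner):
    # Omega-square-like source shape with a given corner frequency
    return 1.0 / (1.0 + (f / corner) ** 2)

spec1 = source_spectrum(2.0) * path          # larger event, lower corner frequency
spec2 = source_spectrum(8.0) * path          # smaller event, higher corner frequency

ratio = spec1 / spec2                        # the path term cancels exactly here
```

Because both spectra share the identical path factor, `ratio` equals the pure source spectral ratio, from which relative source parameters can then be read off.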
TRIPPy: Python-based Trailed Source Photometry
NASA Astrophysics Data System (ADS)
Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey
2016-05-01
TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle described by three parameters (trail length, angle, and radius) to improve photometry of moving sources over that done with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.
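The pill aperture itself is simple to sketch: the set of pixels within radius r of a trail segment of length L at angle theta. The code below is a from-scratch illustration with invented parameters, not TRIPPy's implementation:

```python
import numpy as np

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
cx, cy = 32.0, 32.0                              # trail centre (invented)
length, theta, r = 10.0, np.deg2rad(30.0), 4.0   # the three pill parameters

# Endpoints of the trail segment
dx, dy = 0.5 * length * np.cos(theta), 0.5 * length * np.sin(theta)
p0 = np.array([cx - dx, cy - dy])
p1 = np.array([cx + dx, cy + dy])

# Distance from every pixel to the segment (project, clamp, measure)
seg = p1 - p0
t = ((xx - p0[0]) * seg[0] + (yy - p0[1]) * seg[1]) / (seg @ seg)
t = np.clip(t, 0.0, 1.0)
dist = np.hypot(xx - (p0[0] + t * seg[0]), yy - (p0[1] + t * seg[1]))
pill = dist <= r                                 # boolean aperture mask

flux = int(pill.sum())   # aperture sum on a flat unit image = pixel count
```

The pixel count approximates the analytic pill area L·2r + πr² (about 130 here); in real photometry the mask would be applied to the image and an aperture correction added, as the abstract describes.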
The moving minimum audible angle is smaller during self motion than during source motion
Brimijoin, W. Owen; Akeroyd, Michael A.
2014-01-01
We are rarely perfectly still: our heads rotate in three axes and move in three dimensions, constantly varying the spectral and binaural cues at the ear drums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system—in a manner not unlike the vestibulo-ocular reflex—works to compensate for self motion and stabilize our sensory representation of the world. We tested a prediction arising from this postulate: that self motion should be processed more accurately than source motion. We used an infrared motion tracking system to measure head angle, and real-time interpolation of head related impulse responses to create “head-stabilized” signals that appeared to remain fixed in space as the head turned. After being presented with pairs of simultaneous signals consisting of a man and a woman speaking a snippet of speech, normal and hearing impaired listeners were asked to report whether the female voice was to the left or the right of the male voice. In this way we measured the moving minimum audible angle (MMAA). This measurement was made while listeners were asked to turn their heads back and forth between ± 15° and the signals were stabilized in space. After this “self-motion” condition we measured MMAA in a second “source-motion” condition when listeners remained still and the virtual locations of the signals were moved using the trajectories from the first condition. For both normal and hearing impaired listeners, we found that the MMAA for signals moving relative to the head was ~1–2° smaller when the movement was the result of self motion than when it was the result of source motion, even though the motion with respect to the head was identical. These results as well as the results of past experiments suggest that spatial processing involves an ongoing and highly accurate comparison of spatial acoustic cues with self-motion cues. PMID:25228856
Fan, Guoxin; Gu, Xin; Liu, Yifan; Wu, Xinbo; Zhang, Hailong; Gu, Guangfei; Guan, Xiaofei; He, Shisheng
2016-01-01
Transforaminal percutaneous endoscopic lumbar discectomy (tPELD) poses great challenges for junior surgeons. Beginners often require repeated fluoroscopy-guided attempts, causing more punctures, which may significantly undermine their confidence and increase the radiation exposure to medical staff and patients. Moreover, the impact of an accurate location method on the learning curve of tPELD has not been defined. The study aimed to investigate the impact of an accurate preoperative location method on the learning difficulty and fluoroscopy time of tPELD. Retrospective evaluation. Patients receiving tPELD by one surgeon using a novel accurate preoperative location method were regarded as Group A, and those receiving tPELD by another surgeon using a conventional fluoroscopy method were regarded as Group B. From January 2012 to August 2014, we retrospectively reviewed the first 80 tPELD cases conducted by the 2 junior surgeons. The operation time, fluoroscopy times, preoperative location time, and puncture-channel time were thoroughly analyzed. The operation time of the first 20 patients was 99.75 ± 10.38 minutes in Group A and 115.7 ± 16.46 minutes in Group B, while the operation time of all 80 patients was 88.36 ± 11.56 minutes in Group A and 98.26 ± 14.90 minutes in Group B. Significant differences were detected in operation time between the 2 groups, both for the first 20 patients and for the total 80 patients (P < 0.05). The fluoroscopy times were 26.78 ± 4.17 in Group A and 33.98 ± 2.69 in Group B (P < 0.001). The preoperative location time was 3.43 ± 0.61 minutes in Group A and 5.59 ± 1.46 minutes in Group B (P < 0.001). The puncture-channel time was 27.20 ± 4.49 minutes in Group A and 34.64 ± 8.35 minutes in Group B (P < 0.001). There was a moderate correlation between preoperative location time and puncture-channel time (r = 0.408, P < 0.001), and a moderate correlation between preoperative location time and fluoroscopy times (r = 0.441, P < 0.001).
Mild correlations were also observed between preoperative location time and operation time (r = 0.270, P = 0.001). There were no significant differences in preoperative back visual analogue scale (VAS) score, postoperative back VAS, preoperative leg VAS, postoperative leg VAS, preoperative Japanese Orthopaedic Association (JOA) score, postoperative JOA, preoperative Oswestry disability index (ODI), or postoperative ODI (P > 0.05). However, significant differences were detected between all of the abovementioned preoperative scores and the corresponding postoperative scores (P < 0.05). Moreover, there was no significant difference in Macnab satisfaction between the 2 groups (P = 0.179). There were 2 patients with recurrence in Group A and 3 patients in Group B. Twelve patients with postoperative disc remnants were identified in Group A and 9 patients in Group B. No significant difference was identified between the 2 groups (P = 0.718). The preoperative lumbar location method is just a small step in tPELD; junior surgeons still need to focus on their subjective feel during punctures and accumulate experience in endoscopic discectomy. The accurate preoperative location method lowered the learning difficulty and reduced the fluoroscopy time of tPELD, and was also associated with lower preoperative location time and puncture-channel time. Key words: Learning difficulty, fluoroscopy reduction, transforaminal percutaneous endoscopic lumbar discectomy, preoperative location.
Mathematical analysis of the honeybee waggle dance.
Okada, R; Ikeno, H; Kimura, T; Ohashi, Mizue; Aonuma, H; Ito, E
2012-01-01
A honeybee informs her nestmates of the location of a flower by performing a waggle dance, which encodes both the direction of and distance to the flower from the hive. To reveal how the waggle dance benefits the colony, we created a Markov model of bee foraging behavior and performed simulation experiments, incorporating biological parameters that we obtained from our own observations of real bees as well as from the literature. When two feeders were each placed 400 m away from the hive in different directions, a virtual colony in which honeybees danced and correctly transferred information (a normal, real bee colony) made significantly more successful visits to the feeders than a colony with inaccurate information transfer. However, when five feeders were each located 400 m from the hive, the inaccurate-information-transfer colony performed better than the normal colony. These results suggest that the benefit of communicating accurate information through dance depends on the number of feeders. Furthermore, because non-dancing colonies always made significantly fewer visits than the other two colonies, we conclude that dancing behavior is beneficial to the colony's ability to visit food sources.
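The flavor of such a simulation can be sketched with a toy stochastic model in which each foraging trip either follows dance information or searches at random. All probabilities below are invented and the model is far simpler than the study's Markov model; it only illustrates why information transfer raises visit counts:

```python
import numpy as np

rng = np.random.default_rng(0)

def visits(n_trips, p_follow, p_dance=0.9, p_random=0.3):
    """Successful feeder visits when a fraction p_follow of trips use dance info."""
    success = 0
    for _ in range(n_trips):
        if rng.random() < p_follow:
            success += rng.random() < p_dance    # dance led (almost) straight to food
        else:
            success += rng.random() < p_random   # undirected random search
    return success

with_dance = visits(1000, 0.8)      # colony with accurate information transfer
without_dance = visits(1000, 0.0)   # colony with no usable dance information
```

Under these invented rates the dancing colony makes far more successful visits; the study's interesting result is that this advantage can reverse when food sources become numerous.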
A functional model for characterizing long-distance movement behaviour
Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.
2016-01-01
Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.
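Derived quantities such as speed and persistence in direction can be illustrated directly from a plain track of (t, x, y) fixes. This simple sketch ignores the measurement-error and Bayesian machinery of the actual model, and the track itself is invented:

```python
import numpy as np

# Invented track: columns are time, x, y
track = np.array([[0.0, 0.0, 0.0],
                  [1.0, 1.0, 0.1],
                  [2.0, 2.1, 0.2],
                  [3.0, 2.9, 1.1],
                  [4.0, 3.0, 2.3]])
t, x, y = track.T

dt = np.diff(t)
dx, dy = np.diff(x), np.diff(y)
speed = np.hypot(dx, dy) / dt        # speed on each step

heading = np.arctan2(dy, dx)
turn = np.diff(heading)
persistence = np.cos(turn)           # 1 = straight ahead, -1 = full reversal
```

In the paper these quantities are posterior functionals of a smooth latent path rather than raw finite differences, which is what makes them robust to irregular, noisy fixes.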
Swiger, S L; Hogsette, J A; Butler, J F
2014-02-01
Larval interactions of dipteran species, blow flies in particular, were observed and documented daily over time and location on five black bear carcasses in Gainesville, FL, USA, from June 2002 - September 2004. Cochliomyia macellaria (Fabricius) or Chrysomya megacephala (Fabricius) larvae were collected first, after which Chrysomya rufifacies (Macquart) oviposited on the carcasses in multiple locations (i.e., neck, anus, and exposed flesh) not inhabited already by the other blow fly larvae. Within the first week of decomposition, C. rufifacies larvae grew to ≥12 mm, filling the carcasses with thousands of larvae and replacing the other calliphorid larvae either through successful food source competition or by predation. As a result, C. macellaria and C. megacephala were not collected past their third instar feeding stage. The blow fly species, C. megacephala, C. macellaria, Lucilia caeruleiviridis (Macquart), Phormia regina (Meigen), Lucilia sericata (Meigen), and C. rufifacies, completed two developmental cycles in the 88.5-kg carcass. This phenomenon might serve to complicate or prevent the calculation of an accurate postmortem interval.
Strategies for automatic processing of large aftershock sequences
NASA Astrophysics Data System (ADS)
Kvaerna, T.; Gibbons, S. J.
2017-12-01
Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.
Aerodynamics of Stardust Sample Return Capsule
NASA Technical Reports Server (NTRS)
Mitcheltree, R. A.; Wilmoth, R. G.; Cheatwood, F. M.; Brauckmann, G. J.; Greene, F. A.
1997-01-01
Successful return of interstellar dust and cometary material by the Stardust Sample Return Capsule requires an accurate description of the Earth entry vehicle's aerodynamics. This description must span the hypersonic-rarefied, hypersonic-continuum, supersonic, transonic, and subsonic flow regimes. Data from numerous sources are compiled to accomplish this objective. These include Direct Simulation Monte Carlo analyses, thermochemical nonequilibrium computational fluid dynamics, transonic computational fluid dynamics, existing wind tunnel data, and new wind tunnel data. Four observations are highlighted: 1) a static instability is revealed in the free-molecular and early transitional-flow regime due to the aft location of the vehicle's center-of-gravity, 2) the aerodynamics across the hypersonic regime are compared with the Newtonian flow approximation and a correlation between the accuracy of the Newtonian flow assumption and the sonic line position is noted, 3) the primary effect of shape change due to ablation is shown to be a reduction in drag, and 4) a subsonic dynamic instability is revealed which will necessitate either a change in the vehicle's center-of-gravity location or the use of a stabilizing drogue parachute.
Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an Aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
Table 2d to Subpart ZZZZ of Part 63—Requirements for Existing Stationary RICE Located at Area Sources of HAP Emissions. The table specifies the requirements for existing stationary RICE located at area sources of HAP emissions: For each . . . You must . . .
Mobile mapping of methane emissions and isoscapes
NASA Astrophysics Data System (ADS)
Takriti, Mounir; Ward, Sue; Wynn, Peter; Elias, Dafydd; McNamara, Niall
2017-04-01
Methane (CH4) is a potent greenhouse gas emitted from a variety of natural and anthropogenic sources. It is crucial to accurately and efficiently detect CH4 emissions and identify their sources to improve our understanding of changing emission patterns as well as to identify ways to curtail their release into the atmosphere. However, using established methods this can be challenging as well as time and resource intensive due to the temporal and spatial heterogeneity of many sources. To address this problem, we have developed a vehicle-mounted mobile system that combines high-precision CH4 measurements with isotopic mapping and dual-isotope source characterisation. Here we present details of the development and testing of a unique system for the detection and isotopic analysis of CH4 plumes, built around a Picarro isotopic (13C/12C) gas analyser and a high-precision Los Gatos greenhouse gas analyser. Combined with micrometeorological measurements and a mechanism for collecting discrete samples for high-precision dual-isotope (13C/12C, 2H/1H) analysis, the system enables mapping of concentrations as well as directional and isotope-based source verification. We then present findings from our mobile methane surveys around the North West of England. This area includes a variety of natural and anthropogenic methane sources within a relatively small geographical area, including livestock farming, urban and industrial gas infrastructure, landfills and waste water treatment facilities, and wetlands. We show that the system successfully located leaks from natural gas infrastructure and emissions from agricultural activities, and distinguished the isotope signatures of these sources.
Online monitoring of seismic damage in water distribution systems
NASA Astrophysics Data System (ADS)
Liang, Jianwen; Xiao, Di; Zhao, Xinhua; Zhang, Hongwei
2004-07-01
Water distribution systems can be damaged by earthquakes, and such seismic damage cannot easily be located, especially immediately after the event. Experience from past earthquakes shows that accurate and quick location of seismic damage is critical to the emergency response of water distribution systems. This paper develops a methodology to locate seismic damage -- multiple breaks in a water distribution system -- by monitoring water pressure online at a limited number of positions in the system. For the purpose of online monitoring, supervisory control and data acquisition (SCADA) technology can readily be used. A neural network-based inverse analysis method is constructed for locating the seismic damage based on the variation of water pressure. The neural network is trained using analytically simulated data from the water distribution system, and validated using a set of data never used in the training. It is found that the methodology provides an effective and practical way to accurately and quickly locate seismic damage in a water distribution system.
NASA Technical Reports Server (NTRS)
Pickett, G. F.; Wells, R. A.; Love, R. A.
1977-01-01
A computer user's manual describing the operation and essential features of the Microphone Location Program is presented. The Microphone Location Program determines microphone locations that ensure accurate and stable results from the equation system used to calculate modal structures. As part of the computational procedure, a first-order measure of the stability of the equation system is provided by a matrix 'conditioning' number.
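A matrix conditioning number of the kind described can be computed directly. The sketch below uses invented one-dimensional cosine "modes" and compares spread-out versus clustered microphone positions; it is not the program's actual formulation, only an illustration of why the conditioning number flags bad sensor layouts:

```python
import numpy as np

n_mics, n_modes = 8, 5
x_good = np.linspace(0.05, 0.95, n_mics)   # microphones spread along the duct
x_bad = np.linspace(0.05, 0.15, n_mics)    # microphones clustered together

def mode_matrix(x, n_modes):
    # Column m is the m-th cosine duct mode sampled at the microphone positions;
    # modal amplitudes are recovered by (pseudo-)inverting this matrix.
    return np.cos(np.outer(x, np.arange(n_modes)) * np.pi)

cond_good = np.linalg.cond(mode_matrix(x_good, n_modes))
cond_bad = np.linalg.cond(mode_matrix(x_bad, n_modes))
```

Clustered microphones make the mode columns nearly linearly dependent, so the condition number explodes and small measurement errors are hugely amplified in the recovered modal structure; well-spread locations keep it small, which is exactly what the program optimizes for.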
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engdahl, Eric, R.; Bergman, Eric, A.; Myers, Stephen, C.
A new catalog of seismicity at magnitudes above 2.5 for the period 1923-2008 in the Iran region is assembled from arrival times reported by global, regional, and local seismic networks. Using in-country data we have formed new events, mostly at lower magnitudes that were not previously included in standard global earthquake catalogs. The magnitude completeness of the catalog varies strongly through time, complete to about magnitude 4.2 prior to 1998 and reaching a minimum of about 3.6 during the period 1998-2005. Of the 25,722 events in the catalog, most of the larger events have been carefully reviewed for proper phase association, especially for depth phases and to eliminate outlier readings, and relocated. To better understand the quality of the data set of arrival times reported by Iranian networks that are central to this study, many waveforms for events in Iran have been re-picked by an experienced seismic analyst. Waveforms at regional distances in this region are often complex. For many events this makes arrival time picks difficult to make, especially for smaller magnitude events, resulting in reported times that can be substantially improved by an experienced analyst. Even when the signal/noise ratio is large, re-picking can lead to significant differences. Picks made by our analyst are compared with original picks made by the regional networks. In spite of the obvious outliers, the median (-0.06 s) and spread (0.51 s) are small, suggesting that reasonable confidence can be placed in the picks reported by regional networks in Iran. This new catalog has been used to assess focal depth distributions throughout Iran.
A principal result of this study is that the geographic pattern of depth distributions revealed by the relatively small number of earthquakes (~167) with depths constrained by waveform modeling (+/- 4 km) are now in agreement with the much larger number of depths (~1229) determined using reanalysis of ISC arrival-times (+/-10 km), within their respective errors. This is a significant advance, as outliers and future events with apparently anomalous depths can be readily identified and, if necessary, further investigated. The patterns of reliable focal depth distributions have been interpreted in the context of Middle Eastern active tectonics. Most earthquakes in the Iranian continental lithosphere occur in the upper crust, less than about 25-30 km in depth, with the crustal shortening produced by continental collision apparently accommodated entirely by thickening and distributed deformation rather than by subduction of crust into the mantle. However, intermediate-depth earthquakes associated with subducted slab do occur across the central Caspian Sea and beneath the Makran coast. A multiple-event relocation technique, specialized to use different kinds of near-source data, is used to calibrate the locations of 24 clusters containing 901 events drawn from the seismicity catalog. The absolute locations of these clusters are fixed either by comparing the pattern of relocated earthquakes with mapped fault geometry, by using one or more cluster events that have been accurately located independently by a local seismic network or aftershock deployment, by using InSAR data to determine the rupture zone of shallow earthquakes, or by some combination of these near-source data. This technique removes most of the systematic bias in single-event locations done with regional and teleseismic data, resulting in 624 calibrated events with location uncertainties of 5 km or better at the 90% confidence level (GT590). 
For 21 clusters (847 events) that are calibrated in both location and origin time we calculate empirical travel times, relative to a standard 1-D travel time model (ak135), and investigate event-to-station travel-time anomalies as functions of epicentral distance and azimuth. Substantial travel-time anomalies are seen in the Iran region, which make accurate locations impossible unless observing stations are at very short distances (less than about 200 km) or travel-time models are improved to account for lateral heterogeneity in the region. Earthquake locations in the Iran region by international agencies, based on regional and teleseismic arrival time data, are systematically biased to the southwest and have a 90% location accuracy of 18-23 km, with the lower value achievable by applying limits on secondary azimuth gap. The data set of calibrated locations reported here provides an important constraint on travel-time models that would begin to account for the lateral heterogeneity in Earth structure in the Iran region, and would permit seismic networks, especially regional ones, to obtain more accurate locations of earthquakes in the region in the future.
Su, Jason G; Jerrett, Michael; Meng, Ying-Ying; Pickett, Melissa; Ritz, Beate
2015-02-15
Epidemiological studies investigating relationships between environmental exposures from air pollution and health typically use residential addresses as a single point for exposure, while environmental exposures in transit, at work, school or other locations are largely ignored. Personal exposure monitors measure individuals' exposures over time; however, current personal monitors are intrusive and cannot be operated at a large scale over an extended period of time (e.g., for a continuous three months) and can be very costly. In addition, spatial locations typically cannot be identified when only personal monitors are used. In this paper, we piloted a study that applied momentary location tracking services supplied by smart phones to identify an individual's location in space-time for three consecutive months (April 28 to July 28, 2013) using available Wi-Fi networks. Individual exposures in space-time to the traffic-related pollutants Nitrogen Oxides (NOX) were estimated by superimposing an annual mean NOX concentration surface modeled using the Land Use Regression (LUR) modeling technique. Individual's exposures were assigned to stationary (including home, work and other stationary locations) and in-transit (including commute and other travel) locations. For the individual, whose home/work addresses were known and the commute route was fixed, it was found that 95.3% of the time, the individual could be accurately identified in space-time. The ambient concentration estimated at the home location was 21.01 ppb. When indoor/outdoor infiltration, indoor sources of air pollution and time spent outdoors were taken into consideration, the individual's cumulative exposures were 28.59 ppb and 96.49 ppb, assuming a respective indoor/outdoor ratio of 1.33 and 5.00. 
Integrating momentary location tracking services with fixed-site field monitoring, plus indoor-outdoor air exchange calibration, makes exposure assessment of a very large population over an extended time period feasible. Copyright © 2014 Elsevier B.V. All rights reserved.
Accurate Land Company, Inc., Acadia Subdivision, Plat 1 and Plat 2
The EPA is providing notice of an Administrative Penalty Assessment in the form of an Expedited Storm Water Settlement Agreement against Accurate Land Company, Inc., a business located at 12035 University Ave., Suite 100, Clive, IA 50235, for alleged violations.
Ultrasound-guided thermocouple placement for cryosurgery.
Abramovits, W; Pruiksma, R; Bose, S
1996-09-01
Although cryosurgical methods have high cure rates, imprecise estimates of both skin lesion depth and destructive temperature front location result in subjective technique in skin malignancy treatments. We evaluated the ability of newer ultrasound equipment to assist in the precise placement of thermocouples in human skin. DermaScan C ver. 3 ultrasonographic equipment fitted with a sharp focus probe with a frequency of 20 MHz and a scan length of 12.1 mm was used to locate thermocouples with 27- and 30-gauge needles. We successfully and reproducibly located thermocouples and thin needles, and accurately measured their distance from the skin surface. Ultrasound is a useful method for the accurate placement of thermocouples, and of needles as thin as 30 gauge, for monitoring in cryosurgery.
Distributed fiber sensing system with wide frequency response and accurate location
NASA Astrophysics Data System (ADS)
Shi, Yi; Feng, Hao; Zeng, Zhoumo
2016-02-01
A distributed fiber sensing system merging a Mach-Zehnder interferometer and a phase-sensitive optical time domain reflectometer (Φ-OTDR) is demonstrated for vibration measurement, which requires wide frequency response and accurate location. Two narrow line-width lasers with delicately different wavelengths are used to constitute the interferometer and reflectometer respectively. A narrow band Fiber Bragg Grating is responsible for separating the two wavelengths. In addition, heterodyne detection is applied to maintain the signal-to-noise ratio of the locating signal. Experiment results show that the novel system has a wide frequency response from 1 Hz to 50 MHz, limited by the sample frequency of the data acquisition card, and a spatial resolution of 20 m, corresponding to the 200 ns pulse width, along a 2.5 km fiber link.
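The quoted 20 m spatial resolution follows directly from the 200 ns pulse width via the standard two-way OTDR relation Δz = c·τ/(2n). A quick check (the fiber group index is an assumed typical value for single-mode fiber, not given in the abstract):

```python
# Spatial resolution of a pulse-based OTDR system: delta_z = c * tau / (2 * n),
# where tau is the probe pulse width and n is the fiber group index.
C = 299_792_458.0          # speed of light in vacuum, m/s
GROUP_INDEX = 1.468        # assumed group index for standard single-mode fiber

def otdr_spatial_resolution(pulse_width_s: float, n: float = GROUP_INDEX) -> float:
    """Two-way spatial resolution in meters for a given pulse width."""
    return C * pulse_width_s / (2.0 * n)

resolution = otdr_spatial_resolution(200e-9)  # 200 ns pulse, as in the abstract
print(f"{resolution:.1f} m")  # ~20.4 m, consistent with the reported 20 m
```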
Near-Field Magnetic Dipole Moment Analysis
NASA Technical Reports Server (NTRS)
Harris, Patrick K.
2003-01-01
This paper describes the data analysis technique used for magnetic testing at the NASA Goddard Space Flight Center (GSFC). Excellent results have been obtained using this technique to convert a spacecraft's measured magnetic field data into its respective magnetic dipole moment model. The model is most accurate with the Earth's geomagnetic field cancelled in a spherical region bounded by the measurement magnetometers with a minimum radius large enough to enclose the magnetic source. Considerably enhanced spacecraft magnetic testing is offered by using this technique in conjunction with a computer-controlled magnetic field measurement system. Such a system, with real-time magnetic field display capabilities, has been incorporated into other existing magnetic measurement facilities and is also used at remote locations where transport to a magnetics test facility is impractical.
NASA Technical Reports Server (NTRS)
Myers, V. I. (Principal Investigator); Dalsted, K. J.; Best, R. G.; Smith, J. R.; Eidenshink, J. C.; Schmer, F. A.; Andrawis, A. S.; Rahn, P. H.
1977-01-01
The author has identified the following significant results. Digital analysis of LANDSAT CCT's indicated that two discrete spectral background zones occurred among the five soil zones. K-CLASS classification of corn revealed that accuracy increased when two background zones were used, compared to the classification of corn stratified by five soil zones. Selectively varying film type, developer, and development time produces higher contrast in reprocessed imagery. Interpretation of rangeland and cropped land data from 1968 aerial photography and 1976 LANDSAT imagery indicated losses in rangeland habitat. Thermal imagery was useful in locating potential sources of sub-surface water and geothermal energy, estimating evapotranspiration, and inventorying the land.
Focal plane transport assembly for the HEAO-B X-ray telescope
NASA Technical Reports Server (NTRS)
Brissette, R.; Allard, P. D.; Keller, F.; Strizhak, E.; Wester, E.
1979-01-01
The High Energy Astronomy Observatory - Mission B (HEAO-B), an earth orbiting X-ray telescope facility capable of locating and imaging celestial X-ray sources to within one second of arc on the celestial sphere, is considered. The Focal Plane Transport Assembly (FPTA) is one of the basic structural elements of the three thousand pound HEAO-B experiment payload. The FPTA is a multifunctional assembly which supports seven imaging X-ray detectors circumferentially about a central shaft and accurately positions any particular one into the focus of a high resolution mirror assembly. A drive system, position sensor, rotary coupler, and detent alignment system are described; all are integral parts of the rotatable portion, which in turn is supported by main bearings on the stationary focal plane housing.
EPA Facility Registry Service (FRS): CERCLIS
This data provides location and attribute information on Facilities regulated under the Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS) for an intranet web feature service. The data provided in this service are obtained from EPA's Facility Registry Service (FRS). The FRS is an integrated source of comprehensive (air, water, and waste) environmental information about facilities, sites or places. This service connects directly to the FRS database to provide this data as a feature service. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous verification and management procedures that incorporate information from program national systems, state master facility records, data collected from EPA's Central Data Exchange registrations and data management personnel. Additional Information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
A Bayesian framework for infrasound location
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.
2010-04-01
We develop a framework for locating infrasound events using back azimuths and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL), developed here, estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by treating infrasonic group velocity as random. Differences between observed and predicted source-to-array travel times are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on the path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension of methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
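As a rough illustration of the idea behind such a framework, the sketch below runs a grid search over candidate locations, scoring each against back-azimuth and arrival-time observations under Gaussian errors. The array geometry, noise levels, and fixed group velocity are all invented for the example; the actual BISL method additionally treats group velocity as random and returns formal credibility regions.

```python
import numpy as np

# Hypothetical array positions (km) and a known "true" source for generating
# synthetic observations: back azimuth (deg, clockwise from north) and arrival
# time (s) at each array. Origin time t0 is taken as known here for simplicity.
arrays = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
true_src = np.array([40.0, 30.0])
v_group = 0.30   # km/s, nominal infrasonic group velocity (assumed fixed)
t0 = 0.0

def baz_and_time(src, arr):
    d = src - arr                                      # x east, y north
    baz = np.degrees(np.arctan2(d[0], d[1])) % 360.0   # azimuth from north
    return baz, np.linalg.norm(d) / v_group

obs = [baz_and_time(true_src, a) for a in arrays]

# Grid search: Gaussian misfit on both data types; sigma_t would absorb both
# measurement error and group-velocity (model) error in a real application.
sigma_baz, sigma_t = 3.0, 5.0
xs = np.linspace(-20, 120, 141)
ys = np.linspace(-20, 100, 121)
logpost = np.zeros((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        cand = np.array([x, y])
        ll = 0.0
        for a, (baz_o, t_o) in zip(arrays, obs):
            baz_p, t_p = baz_and_time(cand, a)
            dbaz = (baz_p - baz_o + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
            ll += -0.5 * (dbaz / sigma_baz) ** 2
            ll += -0.5 * ((t_p - t_o - t0) / sigma_t) ** 2
        logpost[i, j] = ll

best = np.unravel_index(np.argmax(logpost), logpost.shape)
print(xs[best[0]], ys[best[1]])  # close to the true source (40, 30)
```

Exponentiating and normalizing `logpost` over the grid would yield the posterior surface from which credibility regions are contoured.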
Atmospheric measurement of point source fossil fuel CO2 emissions
NASA Astrophysics Data System (ADS)
Turnbull, J. C.; Keller, E. D.; Baisden, W. T.; Brailsford, G.; Bromley, T.; Norris, M.; Zondervan, A.
2013-11-01
We use the Kapuni Gas Treatment Plant to examine methodologies for atmospheric monitoring of point source fossil fuel CO2 (CO2ff) emissions. The Kapuni plant, located in rural New Zealand, removes CO2 from locally extracted natural gas and vents that CO2 to the atmosphere, at a rate of ~0.1 Tg carbon per year. The plant is located in a rural dairy farming area, with no other significant CO2ff sources nearby, but large, diurnally varying, biospheric CO2 fluxes from the surrounding highly productive agricultural grassland. We made flask measurements of CO2 and 14CO2 (from which we derive the CO2ff component) and in situ measurements of CO2 downwind of the Kapuni plant, using a Helikite to sample transects across the emission plume from the surface up to 100 m a.g.l. We also determined the surface CO2ff content averaged over several weeks from the 14CO2 content of grass samples collected from the surrounding area. We use the WindTrax plume dispersion model to compare the atmospheric observations with the emissions reported by the Kapuni plant, and to determine how well atmospheric measurements can constrain the emissions. The model has difficulty accurately capturing the fluctuations and short-term variability in the Helikite samples, but does quite well in representing the observed CO2ff in 15 min averaged surface flask samples and in ~1 week integrated CO2ff averages from grass samples. In this pilot study, we found that using grass samples, the modeled and observed CO2ff emissions averaged over one week agreed to within 30%. The results imply that greater verification accuracy may be achieved by including more detailed meteorological observations and refining 14CO2 sampling strategies.
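The 14CO2-based separation of the fossil-fuel component exploits the fact that fossil carbon contains no 14C. A minimal sketch of the commonly used mass-balance form (the numbers below are illustrative, not values from this study):

```python
# Sketch of the standard 14C-based fossil-fuel CO2 mass balance. Fossil carbon
# is 14C-free, so its Delta14C signature is -1000 per mil by definition.
DELTA_FF = -1000.0  # per mil

def co2_fossil(co2_obs_ppm, d14c_obs, d14c_bg):
    """Fossil-fuel CO2 (ppm) from an observed mole fraction and Delta14C values.

    co2_obs_ppm : observed total CO2 (ppm)
    d14c_obs    : Delta14C of the observed sample (per mil)
    d14c_bg     : Delta14C of clean-air background (per mil)
    """
    return co2_obs_ppm * (d14c_bg - d14c_obs) / (d14c_bg - DELTA_FF)

# Illustrative values: a 2.8 per-mil depletion relative to background
print(round(co2_fossil(400.0, 42.2, 45.0), 2))  # ~1.07 ppm of fossil CO2
```

This illustrates why the method is sensitive: roughly each ~2.6 per mil of 14C depletion corresponds to only about 1 ppm of added fossil CO2, so high-precision 14C measurement is required.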
Atmospheric measurement of point source fossil CO2 emissions
NASA Astrophysics Data System (ADS)
Turnbull, J. C.; Keller, E. D.; Baisden, T.; Brailsford, G.; Bromley, T.; Norris, M.; Zondervan, A.
2014-05-01
We use the Kapuni Gas Treatment Plant to examine methodologies for atmospheric monitoring of point source fossil fuel CO2 (CO2ff) emissions. The Kapuni plant, located in rural New Zealand, removes CO2 from locally extracted natural gas and vents that CO2 to the atmosphere, at a rate of ~0.1 Tg carbon per year. The plant is located in a rural dairy farming area, with no other significant CO2ff sources nearby, but large, diurnally varying, biospheric CO2 fluxes from the surrounding highly productive agricultural grassland. We made flask measurements of CO2 and 14CO2 (from which we derive the CO2ff component) and in situ measurements of CO2 downwind of the Kapuni plant, using a Helikite to sample transects across the emission plume from the surface up to 100 m above ground level. We also determined the surface CO2ff content averaged over several weeks from the 14C content of grass samples collected from the surrounding area. We use the WindTrax plume dispersion model to compare the atmospheric observations with the emissions reported by the Kapuni plant, and to determine how well atmospheric measurements can constrain the emissions. The model has difficulty accurately capturing the fluctuations and short-term variability in the Helikite samples, but does quite well in representing the observed CO2ff in 15 min averaged surface flask samples and in ~ one week integrated CO2ff averages from grass samples. In this pilot study, we found that using grass samples, the modeled and observed CO2ff emissions averaged over one week agreed to within 30%. The results imply that greater verification accuracy may be achieved by including more detailed meteorological observations and refining 14C sampling strategies.
Imaging Magma Plumbing Beneath Askja Volcano, Iceland
NASA Astrophysics Data System (ADS)
Greenfield, T. S.; White, R. S.
2015-12-01
Using a dense seismic network we have imaged the plumbing system beneath Askja, a large central volcano in the Northern Volcanic Zone, Iceland. Local and regional earthquakes have been used as sources to solve for the velocity structure beneath the volcano. We find a pronounced low-velocity anomaly beneath the caldera at a depth of ~7 km, around the depth of the brittle-ductile transition. The anomaly is ~10% slower than the initial best fitting 1D model and has a Vp/Vs ratio higher than the surrounding crust, suggesting the presence of increased temperature or partial melt. We use relationships between mineralogy and seismic velocities to estimate that this region contains ~10% partial melt, similar to observations made at other volcanoes such as Kilauea. This low-velocity body is deeper than the depth range suggested by geodetic studies of a deflating source beneath Askja. Beneath the large low-velocity zone a region of reduced velocities extends into the lower crust and is coincident with seismicity in the lower crust. This is suggestive of a high-temperature channel into the lower crust which could be the pathway for melt rising from the mantle. This melt either intrudes into the lower crust or stalls at the brittle-ductile boundary in the imaged body. Above this, melt can travel into the fissure swarm through large dikes or erupt within the Askja caldera itself. We generate travel time tables using a finite difference technique, and the residuals are used to simultaneously solve for both the earthquake locations and the velocity structure. The 2014-15 Bárðarbunga dike intrusion has provided a 45 km long, distributed source of large earthquakes which are well located and provide accurate arrival time picks. Together with long-term background seismicity these provide excellent illumination of the Askja volcano from all directions.
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.
2016-04-01
Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses.
Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
Elastic Velocity Updating through Image-Domain Tomographic Inversion of Passive Seismic Data
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2014-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from this data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that is often not available for the small-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits images of the earthquake source using various imaging conditions based upon the P- and S-wavefield data. We generate image volumes by back propagating data through initial models and then applying a correlation-based imaging condition. We use the P-wavefield autocorrelation, S-wavefield autocorrelation, and P-S wavefield cross-correlation images. Inconsistencies in the images form the residuals, which are used to update the P- and S-wave velocity models through adjoint-state tomography. Because the image volumes are constructed from all trace data, the signal-to-noise in this space is increased when compared to the individual traces. Moreover, it eliminates the need for picking and does not require any estimation of the source location and timing. Initial tests show that with reasonable source distribution and acquisition array, velocity anomalies can be recovered. Future tests will apply this methodology to other scales from laboratory to global.
Intercomparison of Open-Path Trace Gas Measurements with Two Dual Frequency Comb Spectrometers
Waxman, Eleanor M.; Cossel, Kevin C.; Truong, Gar-Wing; Giorgetta, Fabrizio R.; Swann, William C.; Coburn, Sean; Wright, Robert J.; Rieker, Gregory B.; Coddington, Ian; Newbury, Nathan R.
2017-01-01
We present the first quantitative intercomparison between two open-path dual comb spectroscopy (DCS) instruments which were operated across adjacent 2-km open-air paths over a two-week period. We used DCS to measure the atmospheric absorption spectrum in the near infrared from 6021 to 6388 cm−1 (1565 to 1661 nm), corresponding to a 367 cm−1 bandwidth, at 0.0067 cm−1 sample spacing. The measured absorption spectra agree with each other to within 5×10−4 without any external calibration of either instrument. The absorption spectra are fit to retrieve concentrations for carbon dioxide (CO2), methane (CH4), water (H2O), and deuterated water (HDO). The retrieved dry mole fractions agree to 0.14% (0.57 ppm) for CO2, 0.35% (7 ppb) for CH4, and 0.40% (36 ppm) for H2O over the two-week measurement campaign, which included 23 °C outdoor temperature variations and periods of strong atmospheric turbulence. This agreement is at least an order of magnitude better than conventional active-source open-path instrument intercomparisons and is particularly relevant to future regional flux measurements as it allows accurate comparisons of open-path DCS data across locations and time. We additionally compare the open-path DCS retrievals to a WMO-calibrated cavity ringdown point sensor located along the path with good agreement. Short-term and long-term differences between the two systems are attributed, respectively, to spatial sampling discrepancies and to inaccuracies in the current spectral database used to fit the DCS data. Finally, the two-week measurement campaign yields diurnal cycles of CO2 and CH4 that are consistent with the presence of local sources of CO2 and absence of local sources of CH4. PMID:29276547
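The retrieval step described in this record (fitting the measured absorption spectrum to obtain dry mole fractions) can be caricatured as a linear least-squares fit of reference cross sections plus a slowly varying baseline. The cross sections below are synthetic Lorentzian stand-ins, not HITRAN line data, and real DCS retrievals fit nonlinear line-shape models with temperature and pressure dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = np.linspace(6021.0, 6388.0, 2000)          # wavenumber grid, cm^-1

def lorentzian(nu, center, width):
    return width ** 2 / ((nu - center) ** 2 + width ** 2)

# Stand-in absorption cross sections, one per gas (illustrative shapes only)
sigma_co2 = lorentzian(nu, 6220.0, 5.0)
sigma_ch4 = lorentzian(nu, 6050.0, 3.0)

true_n = np.array([400e-6, 1.9e-6])              # "true" dry mole fractions

# Design matrix: gas cross sections plus constant and linear baseline terms
A = np.column_stack([sigma_co2, sigma_ch4, np.ones_like(nu), nu - nu.mean()])
absorbance = A[:, :2] @ true_n + 1e-8 * rng.standard_normal(nu.size)

coef, *_ = np.linalg.lstsq(A, absorbance, rcond=None)
print(coef[:2])  # recovers approximately [4.0e-4, 1.9e-6]
```

Fitting the full 367 cm⁻¹ band at once is what gives DCS retrievals their leverage: thousands of spectral samples constrain each mole fraction simultaneously.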
Geist, E.L.; Bilek, S.L.; Arcas, D.; Titov, V.V.
2006-01-01
Source parameters affecting tsunami generation and propagation for the Mw > 9.0 December 26, 2004 and the Mw = 8.6 March 28, 2005 earthquakes are examined to explain the dramatic difference in tsunami observations. We evaluate both scalar measures (seismic moment, maximum slip, potential energy) and finite-source representations (distributed slip and far-field beaming from finite source dimensions) of tsunami generation potential. There exists significant variability in local tsunami runup with respect to the most readily available measure, seismic moment. The local tsunami intensity for the December 2004 earthquake is similar to other tsunamigenic earthquakes of comparable magnitude. In contrast, the March 2005 local tsunami was deficient relative to its earthquake magnitude. Tsunami potential energy calculations more accurately reflect the difference in tsunami severity, although these calculations are dependent on knowledge of the slip distribution and therefore difficult to implement in a real-time system. A significant factor affecting tsunami generation unaccounted for in these scalar measures is the location of regions of seafloor displacement relative to the overlying water depth. The deficiency of the March 2005 tsunami seems to be related to concentration of slip in the down-dip part of the rupture zone and the fact that a substantial portion of the vertical displacement field occurred in shallow water or on land. The comparison of the December 2004 and March 2005 Sumatra earthquakes presented in this study is analogous to previous studies comparing the 1952 and 2003 Tokachi-Oki earthquakes and tsunamis, in terms of the effect slip distribution has on local tsunamis. Results from these studies indicate the difficulty in rapidly assessing local tsunami runup from magnitude and epicentral location information alone.
NASA Technical Reports Server (NTRS)
Chamberlain, James P.; Latorella, Kara A.
2001-01-01
This study compares how well general aviation (GA) pilots detect convective weather in flight with different weather information sources. A flight test was conducted in which GA pilot test subjects were given different in-flight weather information cues and flown toward convective weather of moderate or greater intensity. The test subjects were not actually flying the aircraft, but were given pilot tasks representative of the workload and position awareness requirements of the en route portion of a cross country GA flight. On each flight, one test subject received weather cues typical of a flight in visual meteorological conditions (VMC), another received cues typical of flight in instrument meteorological conditions (IMC), and a third received cues typical of flight in IMC but augmented with a graphical weather information system (GWIS). The GWIS provided the subject with near real time data-linked weather products, including a weather radar mosaic superimposed on a moving map with a symbol depicting the aircraft's present position and direction of track. At several points during each flight, the test subjects completed short questionnaires which included items addressing their weather situation awareness and flight decisions. In particular, test subjects were asked to identify the location of the nearest convective cells. After the point of nearest approach to convective weather, the test subjects were asked to draw the location of convective weather on an aeronautical chart, along with the aircraft's present position. This paper reports preliminary results on how accurately test subjects provided with these different weather sources could identify the nearest cell of moderate or greater intensity along their route of flight. Additional flight tests are currently being conducted to complete the data set.
Retinotopic memory is more precise than spatiotopic memory.
Golomb, Julie D; Kanwisher, Nancy
2012-01-31
Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.
Reconstructing Spatial Distributions from Anonymized Locations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horey, James L; Forrest, Stephanie; Groat, Michael
2012-01-01
Devices such as mobile phones, tablets, and sensors are often equipped with GPS that accurately report a person's location. Combined with wireless communication, these devices enable a wide range of new social tools and applications. These same qualities, however, leave location-aware applications vulnerable to privacy violations. This paper introduces the Negative Quad Tree, a privacy protection method for location aware applications. The method is broadly applicable to applications that use spatial density information, such as social applications that measure the popularity of social venues. The method employs a simple anonymization algorithm running on mobile devices, and a more complex reconstruction algorithm on a central server. This strategy is well suited to low-powered mobile devices. The paper analyzes the accuracy of the reconstruction method in a variety of simulated and real-world settings and demonstrates that the method is accurate enough to be used in many real-world scenarios.
Risk assessment in man and mouse.
Balci, Fuat; Freestone, David; Gallistel, Charles R
2009-02-17
Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment.
Risk assessment in man and mouse
Balci, Fuat; Freestone, David; Gallistel, Charles R.
2009-01-01
Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment. PMID:19188592
Intelligent navigation and accurate positioning of an assist robot in indoor environments
NASA Astrophysics Data System (ADS)
Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke
2017-12-01
Robot navigation and accurate positioning in indoor environments are still challenging tasks, especially in applications assisting disabled and/or elderly people in museum/art gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot to reach the goal location safely, by imitating the supervisor's motions, and to position itself in the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, and uses a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller with the Conjugate Gradient Backpropagation training algorithm gives a robust response and guides the mobile robot accurately to the goal position.
40 CFR Table 3 to Subpart Zzzz of... - Subsequent Performance Tests
Code of Federal Regulations, 2011 CFR
2011-07-01
... reconstructed 2SLB stationary RICE with a brake horsepower > 500 located at major sources; new or reconstructed 4SLB stationary RICE with a brake horsepower ≥ 250 located at major sources; and new or reconstructed CI stationary RICE with a brake horsepower > 500 located at major sources Reduce CO emissions and not...
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the squared source wavelet term from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location and (3) stack all of these images together to get the final estimated image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location, and a more robust estimate with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
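Step (1), the deconvolution that removes the unknown excitation time, can be sketched with a regularized spectral division on toy traces. The wavelet, times, and water-level constant below are illustrative only; the point is that the virtual trace peaks at the differential arrival time regardless of the unknown origin time.

```python
import numpy as np

# Toy traces: identical band-limited arrivals whose absolute times include an
# unknown origin (excitation) time.
def ricker(t, t0, f0=25.0):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

dt = 0.001
t = np.arange(0, 2.0, dt)
origin = 0.437                              # unknown excitation time
trace_master = ricker(t, origin + 0.300)    # arrival 0.300 s after excitation
trace_other = ricker(t, origin + 0.450)     # arrival 0.450 s after excitation

# Water-level (regularized) spectral division: other / master. The conjugate
# numerator and |M|^2 + eps denominator stabilize frequencies where M is weak.
M = np.fft.rfft(trace_master)
O = np.fft.rfft(trace_other)
eps = 1e-3 * np.max(np.abs(M)) ** 2
virtual = np.fft.irfft(O * np.conj(M) / (np.abs(M) ** 2 + eps), n=len(t))

lag = t[np.argmax(virtual)]
print(round(lag, 3))  # 0.15 s: the differential time, origin time removed
```

Because the division cancels the (squared) source spectrum rather than multiplying by it, the virtual trace is sharper than a plain cross-correlation, which is the resolution gain the abstract reports.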
Graf, Eveline S.; Wright, Ian C.; Stefanyshyn, Darren J.
2012-01-01
The two main movements occurring between the forefoot and rearfoot segment of a human foot are flexion at the metatarsophalangeal joints and torsion in the midfoot. The location of the torsion axis within the foot is currently unknown. The purpose of this study was to develop a method based on Cardan angles and the finite helical axis approach to calculate the torsion axis without the effect of flexion. As the finite helical axis method is susceptible to error due to noise with small helical rotations, a minimal amount of rotation was defined in order to accurately determine the torsion axis location. Using simulation, the location of the axis based on data containing noise was compared to the axis location of data without noise with a one-sample t-test and Fisher's combined probability score. When using only data with helical rotation of seven degrees or more, the location of the torsion axis based on the data with noise was within 0.2 mm of the reference location. Therefore, the proposed method allowed an accurate calculation of the foot torsion axis location. PMID:22666303
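A generic finite-helical-axis extraction (not necessarily the exact formulation used in the paper) recovers the axis direction and rotation angle from the rotation matrix between two poses. The sin(phi) denominator below is precisely why small helical rotations amplify measurement noise, which motivates the paper's seven-degree threshold.

```python
import numpy as np

def helical_axis(R):
    """Axis direction n and angle (deg) from a 3x3 rotation matrix R."""
    phi = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # Skew-symmetric part of R encodes the axis, scaled by 2*sin(phi):
    n = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return n / (2.0 * np.sin(phi)), np.degrees(phi)

# Synthetic check: a 10-degree torsion about an oblique axis, built with the
# Rodrigues formula R = I + sin(phi) K + (1 - cos(phi)) K^2.
axis_true = np.array([1.0, 0.0, 0.5]) / np.linalg.norm([1.0, 0.0, 0.5])
phi_true = np.radians(10.0)
K = np.array([[0, -axis_true[2], axis_true[1]],
              [axis_true[2], 0, -axis_true[0]],
              [-axis_true[1], axis_true[0], 0]])
R = np.eye(3) + np.sin(phi_true) * K + (1 - np.cos(phi_true)) * (K @ K)

n, phi_deg = helical_axis(R)
print(np.round(n, 3), round(phi_deg, 1))  # recovers the axis and ~10 deg
```

With noise added to R, the recovered axis degrades as 1/sin(phi), consistent with the study's finding that rotations of at least seven degrees are needed for an accurate axis location.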
Brady's Geothermal Field - March 2016 Vibroseis SEG-Y Files and UTM Locations
Kurt Feigl
2016-03-31
PoroTomo March 2016 (Task 6.4) Updated vibroseis source locations with UTM locations. Supersedes gdr.openei.org/submissions/824. Updated vibroseis source location data for Stages 1-4, PoroTomo March 2016. This revision includes source point locations in UTM format (meters) for all four Stages of active source acquisition. Vibroseis sweep data were collected on a Signature Recorder unit (manufacturer: Seismic Source) mounted in the vibroseis cab during the March 2016 PoroTomo active seismic survey Stages 1 to 4. Each sweep generated a GPS-timed SEG-Y file with 4 input channels and a 20 second record length. Ch1 = pilot sweep, Ch2 = accelerometer output from the vibrator's mass, Ch3 = accelerometer output from the baseplate, and Ch4 = weighted sum of the accelerometer outputs. SEG-Y files are available via the links below.
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost.
The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the wind tunnel (WT) because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction. On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. 
The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) in the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis (i.e., fewer engineers are needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, the costs of these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.).
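A batch Gaussian-process surrogate of the kind compared above can be sketched with a plain RBF kernel. The kernel choice, length scale, and the sinusoidal stand-in for wind-tunnel Cp samples are assumptions for illustration; the dissertation additionally uses the CFD B-spline surrogate as a model basis function, which this sketch omits.

```python
import numpy as np

def rbf(a, b, length=0.2, var=1.0):
    """Squared-exponential kernel matrix between two 1-D input sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Batch GP posterior mean and variance given noisy observations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = rbf(x_test, x_test).diagonal() - np.einsum(
        'ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# hypothetical sparse "wind tunnel" Cp samples along normalized span
x_wt = np.linspace(0.0, 1.0, 8)
cp_wt = -np.sin(np.pi * x_wt)             # stand-in pressure distribution
x_dense = np.linspace(0.0, 1.0, 101)
cp_hat, cp_var = gp_predict(x_wt, cp_wt, x_dense)
```

The posterior variance `cp_var` is what makes expert-belief weighting possible when combining this surrogate with the CFD B-spline prediction.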
Remote Sensing of Cloud Top Heights Using the Research Scanning Polarimeter
NASA Technical Reports Server (NTRS)
Sinclair, Kenneth; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John; Wasilewski, Andrzej
2015-01-01
Clouds cover roughly two thirds of the globe and act as an important regulator of Earth's radiation budget. Of these, multilayered clouds occur about half of the time and are predominantly two-layered. Changes in cloud top height (CTH) have been predicted by models to have a globally averaged positive feedback; however, observed changes in CTH have shown uncertain results. Additional CTH observations are necessary to better quantify the effect. Improved CTH observations will also allow for improved sub-grid parameterizations in large-scale models, and accurate CTH information is important when studying variations in freezing point and cloud microphysics. NASA's airborne Research Scanning Polarimeter (RSP) is able to measure cloud top height using a novel multi-angular contrast approach. RSP scans along the aircraft track and obtains measurements at 152 viewing angles at any aircraft location. The approach presented here aggregates measurements from multiple scans to a single location at cloud altitude using a correlation function designed to identify the location-distinct features in each scan. During NASA's SEAC4RS air campaign, the RSP was mounted on the ER-2 aircraft along with the Cloud Physics Lidar (CPL), which made simultaneous measurements of CTH. The RSP's unique method of determining CTH is presented. The capabilities of using single channels and combinations of channels within the approach are investigated. A detailed comparison of RSP-retrieved CTHs with those of CPL reveals the accuracy of the approach. Results indicate a strong ability for the RSP to accurately identify cloud heights. Interestingly, the analysis reveals an ability for the approach to identify multiple cloud layers in a single scene and estimate the CTH of each layer. Capabilities and limitations of identifying single and multiple cloud layer heights are explored. Special focus is given to sources of error in the method, including optically thin clouds, physically thick clouds, and multi-layered clouds, as well as cloud phase. When determining multi-layered CTHs, limits on the upper cloud's opacity are assessed.
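The multi-angle idea can be illustrated with a toy parallax search: a cloud feature at height h is displaced between views by roughly h times the tangent of the view angle, so scanning trial heights and maximizing correlation against the nadir view recovers h. The geometry, pixel size, and profiles below are invented for illustration and ignore the RSP's actual 152-angle aggregation.

```python
import numpy as np

def retrieve_height(nadir, oblique, view_angle_deg, dx, trial_heights):
    """Pick the trial height whose parallax shift best aligns the oblique
    scan with the nadir scan (maximum correlation)."""
    tan_v = np.tan(np.radians(view_angle_deg))
    best_h, best_c = None, -np.inf
    for h in trial_heights:
        shift = int(round(h * tan_v / dx))          # parallax in pixels
        c = np.corrcoef(nadir, np.roll(oblique, -shift))[0, 1]
        if c > best_c:
            best_h, best_c = h, c
    return best_h

# synthetic scene: one cloud feature seen at nadir and at 45 degrees
dx, h_true = 100.0, 3000.0                          # meters/pixel, meters
x = np.arange(200)
nadir = np.exp(-0.5 * ((x - 90) / 4.0) ** 2)        # feature at pixel 90
oblique = np.roll(nadir, int(round(h_true / dx)))   # 45 deg: shift = h/dx
h_hat = retrieve_height(nadir, oblique, 45.0, dx,
                        np.arange(0.0, 6000.0, 100.0))
```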
Seismic Monitoring of Ice Generated Events at the Bering Glacier
NASA Astrophysics Data System (ADS)
Fitzgerald, K.; Richardson, J.; Pennington, W.
2008-12-01
The Bering Glacier, located in southeast Alaska, is the largest glacier in North America with a surface area of approximately 5,175 square kilometers. It extends from its source in the Bagley Icefield to its terminus in tidal Vitus Lake, which drains into the Gulf of Alaska. It is known that the glacier progresses downhill through the mechanisms of plastic crystal deformation and basal sliding. However, the basal processes which take place tens to hundreds of meters below the surface are not well understood, except through the study of sub-glacial landforms and passive seismology. Additionally, the sub-glacial processes enabling the surges, which occur approximately every two decades, are poorly understood. Two summer field campaigns in 2007 and 2008 were designed to investigate this process near the terminus of the glacier. During the summer of 2007, a field experiment at the Bering Glacier was conducted using a sparse array of L-22 short period sensors to monitor ice-related events. The array was in place for slightly over a week in August and consisted of five stations centered about the final turn of the glacier west of the Grindle Hills. Many events were observed, but due to the large distance between stations and the highly attenuating surface ice, few events were large enough to be recorded on sufficient stations to be accurately located and described. During August 2008, six stations were deployed for a similar length of time, but with a closer spacing. With this improved array, events were located and described more accurately, leading to additional conclusions about the surface, interior, and sub-glacial ice processes producing seismic signals. While the glacier was not surging during the experiment, this study may provide information on the non-surging, sub-glacial base level activity. It is generally expected that another surge will take place within a few years, and baseline studies such as this may assist in understanding the nature of surges.
Boyer, C; Baujard, V; Scherrer, J R
2001-01-01
Any new user of the Internet will think that retrieving a relevant document is an easy task, especially given the wealth of sources available on this medium, but this is not the case. Even experienced users have difficulty formulating the right query to make the most of a search tool and efficiently obtain an accurate result. The goal of this work is to reduce the time and energy necessary for searching and locating medical and health information. To reach this goal we have developed HONselect [1]. The aim of HONselect is not only to improve efficiency in retrieving documents but to respond to an increased need for obtaining a selection of relevant and accurate documents from a breadth of knowledge databases, including scientific bibliographical references, clinical trials, daily news, multimedia illustrations, conferences, forums, Web sites, clinical cases, and others. The authors based their approach on knowledge representation using the National Library of Medicine's Medical Subject Headings (NLM, MeSH) vocabulary and classification [2,3]. The innovation is to propose a multilingual "one-stop searching" (one Web interface to databases currently in English, French and German) with full navigational and connectivity capabilities. The user may choose from a given selection of related terms the one that best suits the search, navigate the term's hierarchical tree, and directly access a selection of documents from high-quality knowledge suppliers such as the MEDLINE database, the NLM's ClinicalTrials.gov server, the NewsPage's daily news, the HON media gallery, conference listings and MedHunt's Web sites [4, 5, 6, 7, 8, 9]. HONselect, developed by HON, a non-profit organisation [10], is a freely available online multilingual tool based on the MeSH thesaurus to index, select, retrieve and display accurate, up-to-date, high-level and quality documents.
Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Toncheva, Greta; Yoshizumi, Terry T.; Frush, Donald P.
2011-01-01
Purpose: Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides dose and risk estimates specific to each patient and each CT examination. As the first step toward patient-specific dose and risk estimation, this article aimed to develop a method for accurately assessing radiation dose from CT examinations. Methods: A Monte Carlo program was developed to model a CT system (LightSpeed VCT, GE Healthcare). The geometry of the system, the energy spectra of the x-ray source, the three-dimensional geometry of the bowtie filters, and the trajectories of source motions during axial and helical scans were explicitly modeled. To validate the accuracy of the program, a cylindrical phantom was built to enable dose measurements at seven different radial distances from its central axis. Simulated radial dose distributions in the cylindrical phantom were validated against ion chamber measurements for single axial scans at all combinations of tube potential and bowtie filter settings. The accuracy of the program was further validated using two anthropomorphic phantoms (a pediatric one-year-old phantom and an adult female phantom). Computer models of the two phantoms were created based on their CT data and were voxelized for input into the Monte Carlo program. Simulated dose at various organ locations was compared against measurements made with thermoluminescent dosimetry chips for both single axial and helical scans. Results: For the cylindrical phantom, simulations differed from measurements by −4.8% to 2.2%. For the two anthropomorphic phantoms, the discrepancies between simulations and measurements ranged between (−8.1%, 8.1%) and (−17.2%, 13.0%) for the single axial scans and the helical scans, respectively. Conclusions: The authors developed an accurate Monte Carlo program for assessing radiation dose from CT examinations. 
When combined with computer models of actual patients, the program can provide accurate dose estimates for specific patients. PMID:21361208
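The core of such a Monte Carlo dose program, sampling photon interaction depths from an exponential free-path distribution, can be illustrated in a few lines. This toy ignores scattering, energy spectra, bowtie filters, and source motion, and the attenuation coefficient is an assumed round number, but the transmitted fraction it estimates can be checked against the Beer-Lambert law, mirroring the validation-against-measurement step described above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.2            # linear attenuation coefficient, 1/cm (illustrative)
L = 10.0            # phantom thickness along the beam, cm
n_photons = 200_000

# depth of first interaction: free path lengths are exponential in mu
depth = rng.exponential(scale=1.0 / mu, size=n_photons)
interacted = depth <= L
transmitted_fraction = 1.0 - interacted.mean()

# tally first-interaction sites in 1-cm depth bins (a crude dose profile)
profile, edges = np.histogram(depth[interacted], bins=10, range=(0.0, L))

analytic = np.exp(-mu * L)   # Beer-Lambert transmission for comparison
```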
Noise Source Identification in a Reverberant Field Using Spherical Beamforming
NASA Astrophysics Data System (ADS)
Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang
Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of sound coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to the exterior sound field, reflections are added to the sound field. Therefore, the source location estimated by conventional methods may give unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.
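A free-field delay-and-sum beamformer, the baseline that the paper argues breaks down in reverberant rooms, can be sketched as follows. The array geometry, sampling rate, and pulse are invented; adding wall reflections to the synthetic signals would bias the peak, which is the paper's point.

```python
import numpy as np

def delay_and_sum(signals, mic_x, grid_x, dist, c, fs):
    """Steer a line array to candidate positions along a line at distance
    `dist`; return the power of the delay-aligned sum at each candidate."""
    powers = []
    for gx in grid_x:
        r = np.hypot(mic_x - gx, dist)                  # mic-candidate ranges
        delays = np.round((r - r.min()) * fs / c).astype(int)
        aligned = np.array([np.roll(s, -d) for s, d in zip(signals, delays)])
        powers.append(np.mean(aligned.sum(axis=0) ** 2))
    return np.array(powers)

# synthetic free-field pulse from a source at x = 0.3 m, 1 m from the array
fs, c = 50_000, 343.0
mic_x = np.linspace(-0.5, 0.5, 8)
src_x, src_dist = 0.3, 1.0
t = np.arange(2048) / fs
r_true = np.hypot(mic_x - src_x, src_dist)
signals = np.array([np.exp(-0.5 * ((t - 0.01 - r / c) / 2e-4) ** 2)
                    for r in r_true])
grid_x = np.linspace(-0.5, 0.5, 41)
power = delay_and_sum(signals, mic_x, grid_x, src_dist, c, fs)
x_hat = grid_x[np.argmax(power)]
```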
Estimating 3D tilt from local image cues in natural scenes
Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.
2016-01-01
Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations. PMID:27738702
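The assumption-free analysis amounts to estimating the conditional expectation of the ground truth given the cue values directly from samples. A minimal 1-D sketch with one cue and a linear toy variable (real tilt is circular and requires circular statistics, which this omits; all data are synthetic):

```python
import numpy as np

def conditional_mean(cue, truth, n_bins=20):
    """Nonparametric estimate of E[truth | cue]: the mean ground truth in
    each cue bin, i.e. the Bayes-optimal estimator under squared error."""
    edges = np.linspace(cue.min(), cue.max(), n_bins + 1)
    idx = np.clip(np.digitize(cue, edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=truth, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return edges, sums / np.maximum(counts, 1)

# toy "scene" samples: the cue is a noisy monotone function of the truth
rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 1.0, 50_000)
cue = truth + rng.normal(0.0, 0.05, truth.size)
edges, est = conditional_mean(cue, truth)
```

With enough samples per bin, no parametric assumption about the joint distribution is needed, which is the methodological point of the study.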
Joint Inversion of Source Location and Source Mechanism of Induced Microseismics
NASA Astrophysics Data System (ADS)
Liang, C.
2014-12-01
Seismic source mechanism is a useful property for indicating the source physics and the stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid-fracturing treatment in the oil and gas industry. For events that are big enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-polarity inversion and many other methods and variants based on these methods. However, for events that are too small to identify in seismic traces, such as the microseismic events induced by fluid fracturing in the oil and gas industry, a source scanning algorithm (SSA) with waveform stacking is usually applied. At the same time, a joint inversion of location and source mechanism is possible, but at the cost of a high computation budget. The algorithm is thereby called the Source Location and Mechanism Scanning Algorithm, SLMSA for short. In this case, for a given velocity structure, all possible combinations of source locations (X, Y and Z) and source mechanisms (strike, dip and rake) are used to compute travel times and polarities of waveforms. After correcting normal-moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid-fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly locate them because of the canceling out of positively and negatively polarized traces, but the SLMSA method can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures.
The statistics of source mechanisms can certainly provide more knowledge of the orientation of fractures; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for those events that are also picked by the SSA method, the stacking power of SLMSA is always higher than that obtained with SSA.
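The scanning loop of SSA can be sketched for a 2-D homogeneous model. This toy assumes a known origin time and unit spikes at the P arrivals, and stacks absolute amplitudes to sidestep the polarity flips that the full SLMSA handles by also scanning trial mechanisms; all geometry and numbers are invented.

```python
import numpy as np

def scan_locations(traces, station_x, grid, v, fs):
    """Source scanning: for each candidate (x, z), stack trace amplitudes
    at the predicted arrival times; the brightest point is the location.
    Absolute values sidestep shear-source polarity flips (the full SLMSA
    would instead scan trial focal mechanisms)."""
    power = np.zeros(len(grid))
    for i, (gx, gz) in enumerate(grid):
        tt = np.hypot(station_x - gx, gz) / v            # travel times, s
        idx = np.round(tt * fs).astype(int)              # arrival samples
        power[i] = np.sum(np.abs(traces[np.arange(len(station_x)), idx]))
    return power

# synthetic: 6 surface stations, unit spikes at the true P arrivals
fs, v = 1000, 2000.0                 # Hz, m/s
station_x = np.linspace(0.0, 1000.0, 6)
src = (400.0, 300.0)                 # true (x, z), meters
traces = np.zeros((6, 2000))
tt_true = np.hypot(station_x - src[0], src[1]) / v
for k, t0 in enumerate(tt_true):
    traces[k, int(round(t0 * fs))] = 1.0

grid = [(x, z) for x in range(0, 1001, 50) for z in range(100, 601, 50)]
power = scan_locations(traces, station_x, np.array(grid, float), v, fs)
x_hat, z_hat = grid[int(np.argmax(power))]
```

The exhaustive (X, Y, Z, strike, dip, rake) scan of SLMSA is this same loop with three mechanism dimensions added, which is why GPU parallelization pays off.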
Spatiotemporal Modelling of Dust Storm Sources Emission in West Asia
NASA Astrophysics Data System (ADS)
Khodabandehloo, E.; Alimohamdadi, A.; Sadeghi-Niaraki, A.; Darvishi Boloorani, A.; Alesheikh, A. A.
2013-09-01
Dust aerosol is the largest contributor to aerosol mass concentrations in the troposphere and has considerable effects on air quality across spatial and temporal scales. The arid and semi-arid areas of West Asia are among the most important regional dust sources in the world. These phenomena directly or indirectly affect almost all aspects of life in some 15 countries in the region, so an accurate estimate of dust emissions is crucial for building a common understanding and knowledge of the problem. Because of the spatial and temporal limits of ground-based observations, remote sensing methods have been found to be more efficient and useful for studying the West Asia dust sources. Vegetation cover limits dust emission by decelerating surface wind velocities and therefore reducing momentum transport. While all models explicitly take into account the change of wind speed and soil moisture in calculating dust emissions, they commonly employ "climatological" land cover data for identifying dust source locations and neglect the time variation of surface bareness. To compile the aforementioned model, land surface features such as soil moisture, texture, type, and vegetation, together with wind speed as an atmospheric parameter, are used. NDVI data show significant changes in dust emission: the dust emission modeled with the dynamic source function differs from that of the static source function by 17.02 % in June 2008, and a similar comparison for March 2007 shows an 8.91 % difference. We observe a significant improvement in the accuracy of dust forecasts during the months with the greatest changes in soil vegetation (spring and winter) compared to outputs from the static model, in which NDVI data are neglected.
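An illustrative (not the authors') emission scheme shows how NDVI-driven bareness can gate dust flux: vegetation raises the threshold friction velocity, so the same wind produces less or no emission over vegetated ground. The cubic wind law and all constants below are invented for illustration.

```python
import numpy as np

def dust_flux(u_star, ndvi, u_t0=0.25, c=1.0, ndvi_bare=0.1, ndvi_full=0.4):
    """Illustrative emission: a cubic wind law gated by a vegetation-
    dependent threshold friction velocity. Bareness is 1 for
    NDVI <= ndvi_bare and 0 for NDVI >= ndvi_full; vegetation raises
    the threshold u_t, switching emission off."""
    bareness = np.clip((ndvi_full - ndvi) / (ndvi_full - ndvi_bare), 0.0, 1.0)
    u_t = u_t0 / np.maximum(bareness, 1e-6)     # vegetation raises threshold
    flux = c * u_star ** 3 * (1.0 - (u_t / u_star) ** 2) * bareness
    return np.where(u_star > u_t, flux, 0.0)

# same wind over bare (spring-dry) vs. vegetated ground
flux_bare = float(dust_flux(0.6, 0.05))
flux_veg = float(dust_flux(0.6, 0.35))
```

A dynamic source function replaces the fixed "climatological" bareness with the current NDVI value, which is the difference the June 2008 and March 2007 comparisons quantify.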
NASA Astrophysics Data System (ADS)
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, like ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
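The dual-frequency relation can be made concrete with the standard thin-ionosphere group-delay model, delay = 40.3 * TEC / f^2 (meters, with TEC in electrons/m^2): two ranges at adjacent frequencies determine both the geometric range and the slant TEC. The frequencies and TEC value below are illustrative, not taken from the experiment.

```python
import numpy as np

K = 40.3  # standard ionospheric constant: group delay = K * TEC / f**2

def iono_free_range(r1, r2, f1, f2):
    """Ionosphere-free range and slant TEC from ranges at two frequencies."""
    r_geom = (f1 ** 2 * r1 - f2 ** 2 * r2) / (f1 ** 2 - f2 ** 2)
    tec = (r1 - r2) / (K * (1.0 / f1 ** 2 - 1.0 / f2 ** 2))
    return r_geom, tec

# illustrative P-band pair and slant TEC (assumed values)
f1, f2 = 430e6, 440e6             # Hz
tec_true, r_true = 1.0e17, 1.0e6  # electrons/m^2, meters
r1 = r_true + K * tec_true / f1 ** 2
r2 = r_true + K * tec_true / f2 ** 2
r_hat, tec_hat = iono_free_range(r1, r2, f1, f2)
```

At these frequencies the uncorrected ionospheric error is on the order of 20 m, which is why model-based corrections are insufficient for precise tracking.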
Ground-based real-time tracking and traverse recovery of China's first lunar rover
NASA Astrophysics Data System (ADS)
Zhou, Huan; Li, Haitao; Xu, Dezhen; Dong, Guangliang
2016-02-01
The Chang'E-3 unmanned lunar exploration mission forms an important stage in China's Lunar Exploration Program. China's first lunar rover "Yutu" is a sub-probe of the Chang'E-3 mission. Its main science objectives cover the investigations of the lunar soil and crust structure, explorations of mineral resources, and analyses of matter compositions. Some of these tasks require accurate real-time and continuous position tracking of the rover. To achieve these goals with the scale-limited Chinese observation network, this study proposed a ground-based real-time very long baseline interferometry phase referencing tracking method. We choose the Chang'E-3 lander as the phase reference source, and the accurate location of the rover is updated every 10 s using its radio-image sequences with the help of a priori information. The detailed movements of the Yutu rover have been captured with a sensitivity of several centimeters, and its traverse across the lunar surface during the first few days after its separation from the Chang'E-3 lander has been recovered. Comparisons and analysis show that the position tracking accuracy reaches a 1-m level.
Quantification of intensity variations in functional MR images using rotated principal components
NASA Astrophysics Data System (ADS)
Backfrieder, W.; Baumgartner, R.; Sámal, M.; Moser, E.; Bergmann, H.
1996-08-01
In functional MRI (fMRI), the changes in cerebral haemodynamics related to stimulated neural brain activity are measured using standard clinical MR equipment. Small intensity variations in fMRI data have to be detected and distinguished from non-neural effects by careful image analysis. Based on multivariate statistics we describe an algorithm involving oblique rotation of the most significant principal components for an estimation of the temporal and spatial distribution of the stimulated neural activity over the whole image matrix. This algorithm takes advantage of strong local signal variations. A mathematical phantom was designed to generate simulated data for the evaluation of the method. In simulation experiments, the potential of the method to quantify small intensity changes, especially when processing data sets containing multiple sources of signal variations, was demonstrated. In vivo fMRI data collected in both visual and motor stimulation experiments were analysed, showing a proper location of the activated cortical regions within well known neural centres and an accurate extraction of the activation time profile. The suggested method yields accurate absolute quantification of in vivo brain activity without the need of extensive prior knowledge and user interaction.
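The rotated-components idea can be sketched with an orthogonal varimax rotation of the top principal-component loadings. The paper applies an oblique rotation; varimax keeps the sketch simple, and the two-source "fMRI" data below are entirely synthetic.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a (variables x components) matrix,
    driving each component toward a few large loadings (Kaiser's scheme)."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L * (L ** 2).sum(axis=0) / p))
        R = u @ vt
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return loadings @ R

# synthetic "fMRI": two disjoint activation maps with distinct time courses
rng = np.random.default_rng(2)
n_t, n_vox = 120, 300
tc1 = (np.arange(n_t) % 20 < 10).astype(float)      # box-car "stimulus"
tc2 = np.sin(2 * np.pi * np.arange(n_t) / 30)       # second time course
map1 = np.zeros(n_vox); map1[:50] = 1.0
map2 = np.zeros(n_vox); map2[100:160] = 1.0
data = (np.outer(tc1, map1) + np.outer(tc2, map2)
        + 0.05 * rng.normal(size=(n_t, n_vox)))
data -= data.mean(axis=0)                           # remove voxel means
u, s, vt = np.linalg.svd(data, full_matrices=False)
loadings = vt[:2].T * s[:2]                         # spatial loadings, top 2 PCs
rotated = varimax(loadings)
```

After rotation, each column of `rotated` concentrates on one activation region, which is how rotation separates multiple sources of signal variation that raw PCA mixes.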
Development of a CME-associated geomagnetic storm intensity prediction tool
NASA Astrophysics Data System (ADS)
Wu, C. C.; DeHart, J. M.
2015-12-01
From 1995 to 2012, the Wind spacecraft recorded 168 magnetic cloud (MC) events. Among those events, 79 were found to have upstream shock waves, and their source locations on the Sun were identified. Using a recipe based on the interplanetary magnetic field (IMF) Bz initial turning direction after the shock (Wu et al., 1996, GRL), it is found that the north-south polarity was accurately predicted for 66 (83.5%) of the 79 events. These events were tested and further analyzed, reaffirming that the Bz initial turning direction was accurate. The results also indicate that the 37 of the 79 MCs originating from the north (of the Sun) averaged a Dst_min of -119 nT, whereas the 42 MCs originating from the south (of the Sun) averaged -89 nT. In an effort to make this research available to others, a website was built that incorporates various tools and pictures to predict the intensity of geomagnetic storms. The tool is capable of predicting geomagnetic storms with different ranges of Dst_min (from no storm to gigantic storms). This work was supported by the Naval Research Lab HBCU/MI Internship program and the Chief of Naval Research.
FECAL POLLUTION, PUBLIC HEALTH AND MICROBIAL SOURCE TRACKING
Microbial source tracking (MST) seeks to provide information about sources of fecal water contamination. Without knowledge of sources, it is difficult to accurately model risk assessments, choose effective remediation strategies, or bring chronically polluted waters into complian...
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.
2003-01-01
Noise sources in high-speed jets were identified by directly correlating flow density fluctuation (cause) to far-field sound pressure fluctuation (effect). The experimental study was performed in a nozzle facility at the NASA Glenn Research Center in support of NASA's initiative to reduce the noise emitted by commercial airplanes. Previous efforts to use this correlation method have failed because the tools for measuring jet turbulence were intrusive. In the present experiment, a molecular Rayleigh-scattering technique was used that depended on laser light scattering by gas molecules in air. The technique allowed accurate measurement of air density fluctuations from different points in the plume. The study was conducted in shock-free, unheated jets of Mach numbers 0.95, 1.4, and 1.8. The turbulent motion, as evident from density fluctuation spectra, was remarkably similar in all three jets, whereas the noise sources were significantly different. The correlation study was conducted by keeping a microphone at a fixed location (at the peak noise emission angle of 30° to the jet axis and 50 nozzle diameters away) while moving the laser probe volume from point to point in the flow. The following figure shows maps of the nondimensional coherence value measured at different Strouhal frequencies (frequency × diameter / jet speed) in the supersonic Mach 1.8 and subsonic Mach 0.95 jets. The higher the coherence, the stronger the source was.
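The cause-effect correlation maps are essentially magnitude-squared coherence estimates between the density and pressure signals. A Welch-style numpy sketch, with synthetic signals standing in for the measurements (the segment length, smoothing filter, and noise levels are assumptions):

```python
import numpy as np

def coherence(x, y, nperseg=256):
    """Magnitude-squared coherence via Welch averaging of segment spectra."""
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    Pxx = Pyy = Pxy = 0
    for k in range(nseg):
        X = np.fft.rfft(win * x[k * nperseg:(k + 1) * nperseg])
        Y = np.fft.rfft(win * y[k * nperseg:(k + 1) * nperseg])
        Pxx = Pxx + np.abs(X) ** 2
        Pyy = Pyy + np.abs(Y) ** 2
        Pxy = Pxy + X * np.conj(Y)
    return np.abs(Pxy) ** 2 / (Pxx * Pyy)

# "cause" and "effect": shared low-frequency component + independent noise
rng = np.random.default_rng(3)
n = 256 * 64
shared = np.convolve(rng.normal(size=n), np.ones(16) / 16, mode='same')
density = shared + 0.1 * rng.normal(size=n)     # stand-in for density signal
pressure = shared + 0.1 * rng.normal(size=n)    # stand-in for microphone
coh = coherence(density, pressure)
```

Coherence near 1 at the shared (low) frequencies and near the noise floor elsewhere mirrors how the measured maps distinguish genuine noise sources from uncorrelated turbulence.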
NASA Astrophysics Data System (ADS)
Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. 
E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. 
A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.
2013-09-01
Compact binary systems containing neutron stars or black holes are among the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; parameter estimation and model selection are therefore crucial analysis steps for any detection candidate. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurement of these parameters and discrimination among models of the underlying physics are complicated by artifacts in the data, by uncertainties in the waveform models, and by uncertainties in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or in software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” whose presence was not initially revealed to the collaboration. We demonstrate the ability to extract source-physics information from signals that cover the neutron-star and black-hole binary parameter space over the component mass range 1 M⊙ to 25 M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.
SEISRISK II; a computer program for seismic hazard estimation
Bender, Bernice; Perkins, D.M.
1982-01-01
The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program, SEISRISK I, which was never documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program, and in addition includes rupture length and acceleration variability, which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
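The Poisson-occurrence assumption fixes the relation between an annual exceedance rate and the probability of exceedance over an exposure period. As a minimal sketch (illustrative only; SEISRISK II itself integrates over source zones, magnitudes and distances to obtain the rate), the core hazard conversion is:

```python
import math

def prob_exceedance(annual_rate: float, years: float) -> float:
    """P(at least one exceedance in `years`) under a Poisson process
    with constant annual exceedance rate `annual_rate`."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_for_target_prob(target_prob: float, years: float) -> float:
    """Invert the Poisson relation: the annual rate whose exceedance
    probability over `years` equals `target_prob`."""
    return -math.log(1.0 - target_prob) / years

# The common 10%-in-50-years hazard level corresponds to a
# return period of roughly 475 years:
rate = rate_for_target_prob(0.10, 50.0)
print(1.0 / rate)  # return period in years, ≈ 474.6
```

`prob_exceedance` is the quantity mapped over the site grid; everything source-specific (attenuation, rupture geometry, magnitude rates) enters through the estimation of the rate.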
Measurement of Phased Array Point Spread Functions for Use with Beamforming
NASA Technical Reports Server (NTRS)
Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis
2011-01-01
Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region: the true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently, these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift invariance, shear layers and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows the expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
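The forward model described above (the beam map is the true source map blurred by the PSF) can be sketched for the shift-invariant case. This is an illustrative model with a hypothetical Gaussian PSF and FFT-based circular convolution, not the measured PSFs of the study; deconvolution methods invert exactly this relation:

```python
import numpy as np

def beam_map_from_psf(source_map, psf):
    """Beam map modeled as the true source map circularly convolved
    with a shift-invariant PSF (psf supplied centered in its grid)."""
    psf_origin = np.fft.ifftshift(psf)  # move the PSF peak to index (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(source_map) * np.fft.fft2(psf_origin)))

# Example: a unit point source at (10, 12) blurred by a Gaussian PSF
n = 32
y, x = np.mgrid[0:n, 0:n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
source = np.zeros((n, n))
source[10, 12] = 1.0
bmap = beam_map_from_psf(source, psf)
peak = np.unravel_index(np.argmax(bmap), bmap.shape)
print(tuple(map(int, peak)))  # the blurred peak stays at (10, 12)
```

Replacing the Gaussian with a measured PSF, as pursued in the paper, folds reflections, scattering and shear-layer effects into the same forward model.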
Source Identification and Location Techniques
NASA Technical Reports Server (NTRS)
Weir, Donald; Bridges, James; Agboola, Femi; Dougherty, Robert
2001-01-01
Mr. Weir presented source location results obtained from an engine test as part of the Engine Validation of Noise Reduction Concepts program. Two types of microphone arrays were used in this program to determine the jet noise source distribution for the exhaust from a 4.3 bypass ratio turbofan engine: a linear array of 16 microphones located on a 25 ft sideline, and a 103-microphone 3-D "cage" array in the near field of the jet. Data were obtained from a baseline nozzle and from numerous nozzle configurations using chevrons and/or tabs to reduce the jet noise. Mr. Weir presented data from two configurations: the baseline nozzle and a nozzle configuration with chevrons on both the core and bypass nozzles. This chevron configuration had achieved a jet noise reduction of 4 EPNdB in small-scale tests conducted at the Glenn Research Center. IR imaging showed that the chevrons produced significant improvements in mixing and greatly reduced the length of the jet potential core. Comparison of source location data from the 1-D phased array showed a shift of the noise sources toward the nozzle and clear reductions of the sources due to the noise reduction devices. Data from the 3-D array showed a single source at a frequency of 125 Hz located several diameters downstream from the nozzle exit. At 250 and 400 Hz, multiple, periodically spaced sources appeared to exist downstream of the nozzle. The trend of source location moving toward the nozzle exit with increasing frequency was also observed. The 3-D array data also showed a reduction in source strength with the addition of chevrons. The overall trend of source location with frequency was compared for the two arrays and with classical experience, and similar trends were observed. Although overall trends with frequency and with the addition of suppression devices were consistent between the data from the 1-D and the 3-D arrays, a comparison of the details of the inferred source locations did show differences.
A flight test is planned to determine if the hardware tested statically will achieve similar reductions in flight.
Aging, Emotion, Attention, and Binding in the Taboo Stroop Task: Data and Theories.
MacKay, Donald G; Johnson, Laura W; Graham, Elizabeth R; Burke, Deborah M
2015-10-14
How does aging impact relations between emotion, memory, and attention? To address this question, young and older adults named the font colors of taboo and neutral words, some of which recurred in the same font color or screen location throughout two color-naming experiments. The results indicated longer color-naming response times (RTs) for taboo than neutral base-words (taboo Stroop interference); better incidental recognition of colors and locations consistently associated with taboo versus neutral words (taboo context-memory enhancement); and greater speed-up in color-naming RTs with repetition of color-consistent than color-inconsistent taboo words, but no analogous speed-up with repetition of location-consistent or location-inconsistent taboo words (the consistency type by repetition interaction for taboo words). All three phenomena remained constant with aging, consistent with the transmission deficit hypothesis and binding theory, where familiar emotional words trigger age-invariant reactions for prioritizing the binding of contextual features to the source of emotion. Binding theory also accurately predicted the interaction between consistency type and repetition for taboo words. However, one or more aspects of these phenomena failed to support the inhibition deficit hypothesis, resource capacity theory, or socio-emotional selectivity theory. We conclude that binding theory warrants further test in a range of paradigms, and that relations between aging and emotion, memory, and attention may depend on whether the task and stimuli trigger fast-reaction, involuntary binding processes, as in the taboo Stroop paradigm.
NASA Technical Reports Server (NTRS)
Allen, C. S.; Jaeger, S. M.
1999-01-01
The goal of our efforts is to extrapolate nearfield jet noise measurements to the geometric far field, where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of noise sources in the jet plume, the radiation patterns of the noise sources, and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single-microphone measurements, a more sophisticated method must be used.
NASA Astrophysics Data System (ADS)
Takagawa, T.
2017-12-01
A rapid and precise tsunami forecast based on offshore monitoring is attracting attention as a way to reduce human losses from devastating tsunami inundation. We developed a forecast method that combines hierarchical Bayesian inversion against a pre-computed database with rapid post-computation of tsunami inundation. The method was applied to Tokyo Bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes: a scenario earthquake at the Nankai trough and the historical Genroku (1703) and Enpo (1677) earthquakes. In general, a dense observation array near the tsunami source improves both the accuracy and the speed of a tsunami forecast. To examine the effect of observation time length, we used four data windows of 5, 10, 20 and 45 minutes after earthquake occurrence. Prediction accuracy was evaluated against the simulated tsunami inundation areas around Tokyo Bay for each target earthquake; here, an accurate prediction means that the simulated values fall within the 95% credible intervals of the prediction. The shortest window yielding an accurate prediction varied with the target earthquake: 5 minutes of observation sufficed for the Enpo case, but 10 minutes and 45 minutes were needed for the Nankai trough and Genroku cases, respectively. The shortest window for accurate prediction correlates strongly with the distance between the tsunami source and the observation arrays. In the Enpo case, offshore tsunami observation points are densely distributed even within the source region, so an accurate prediction can be achieved within 5 minutes; such a prediction is useful for early warnings. Even in the worst case, Genroku, where fewer observation points are available near the source, an accurate prediction can be obtained within 45 minutes. This information can help outline the hazard at an early stage of the response.
The origin of infrasonic ionosphere oscillations over tropospheric thunderstorms
NASA Astrophysics Data System (ADS)
Shao, Xuan-Min; Lay, Erin H.
2016-07-01
Thunderstorms have been observed to induce infrasonic oscillations in the ionosphere, but it is not clear which processes or which parts of a thunderstorm generate the oscillations. In this paper, we present a new technique that uses an array of ground-based GPS total electron content (TEC) measurements to locate the source of the infrasonic oscillations, and we compare the source locations with thunderstorm features to understand the possible source mechanisms. The location technique utilizes instantaneous phase differences between pairs of GPS-TEC measurements and an algorithm that best fits the measured and expected phase differences for assumed source positions and other related parameters. In this preliminary study, the infrasound waves are assumed to propagate along simple geometric raypaths from the source to the measurement locations to avoid extensive computations. The located sources are compared in time and space with thunderstorm development and lightning activity. Sources are often found near the main storm cells, but they are more likely related to the downdraft process than to the updraft process. The sources are also commonly found in the convectively quiet stratiform regions behind active cells and coincide well with extensive lightning discharges and inferred high-altitude sprite discharges.
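The phase-difference fitting idea can be sketched as a grid search: for each hypothetical source position, predict the pairwise phase differences under straight-ray propagation and score the wrapped misfit against the measurements. The function names, the single-frequency treatment and the assumed propagation speed are illustrative assumptions, not the published algorithm:

```python
import numpy as np
from itertools import combinations

def locate_source(stations, phase_diffs, grid, freq, c=0.31):
    """Grid search for the source position whose predicted pairwise
    phase differences (straight rays, speed c in km/s) best match the
    measured ones; residuals are wrapped to [-pi, pi)."""
    pairs = list(combinations(range(len(stations)), 2))
    best, best_cost = None, np.inf
    for g in grid:
        cost = 0.0
        for (i, j), dphi in zip(pairs, phase_diffs):
            dt = (np.linalg.norm(g - stations[i]) - np.linalg.norm(g - stations[j])) / c
            resid = (dphi - 2 * np.pi * freq * dt + np.pi) % (2 * np.pi) - np.pi
            cost += resid ** 2
        if cost < best_cost:
            best, best_cost = g, cost
    return best

# Synthetic check: four stations on a 10 km square, source at (3, 4) km
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_src = np.array([3.0, 4.0])
freq = 0.01  # Hz; long wavelength avoids wrapping ambiguity here
diffs = []
for i, j in combinations(range(4), 2):
    dt = (np.linalg.norm(true_src - stations[i]) - np.linalg.norm(true_src - stations[j])) / 0.31
    diffs.append((2 * np.pi * freq * dt + np.pi) % (2 * np.pi) - np.pi)
grid = [np.array([ix, iy], float) for ix in range(11) for iy in range(11)]
found = locate_source(stations, diffs, grid, freq)
print(found)  # recovers the source at (3, 4)
```

In the paper the fit also includes other related parameters and real (noisy, dispersive) TEC phases, so the misfit surface is less clean than in this noise-free sketch.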
A double-correlation tremor-location method
NASA Astrophysics Data System (ADS)
Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur
2017-02-01
A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly correlated tremor records from multiple triplets of seismographs, back-projected to hypothetical source locations on a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of moduli is a robust measure of energy radiated from a point source, or point sources, even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared with the common single-correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limits, which are controlled by the station geometry, the finite frequency of the signal, the quality of the velocity information used, and the noise level. Both random noise and signal or noise correlated at time shifts inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. In principle, the method can also be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
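A minimal sketch of the single-correlation back-projection that the double-correlation method refines: each station pair's cross-correlation is evaluated at the differential travel time predicted for every trial source, and the pair contributions are stacked. All names, the constant-velocity assumption and the synthetic setup are illustrative:

```python
import numpy as np

def backproject(records, stations, grid, v, dt):
    """Stack each pair's cross-correlation, evaluated at the differential
    travel time predicted for every trial source (straight rays, speed v)."""
    stack = np.zeros(len(grid))
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            cc = np.correlate(records[i], records[j], mode="full")
            lags = (np.arange(cc.size) - (records[j].size - 1)) * dt
            for k, g in enumerate(grid):
                tau = (np.linalg.norm(g - stations[i]) - np.linalg.norm(g - stations[j])) / v
                stack[k] += np.interp(tau, lags, cc)
    return stack

# Synthetic tremor: band-limited noise, delayed by travel time to each station
rng = np.random.default_rng(0)
src_signal = np.convolve(rng.standard_normal(1600), np.ones(5) / 5, mode="same")
stations = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0]])
true_src, v, dt = np.array([2.0, 3.0]), 1.0, 0.05
shifts = [round(np.linalg.norm(true_src - s) / (v * dt)) for s in stations]
records = [src_signal[300 - sh: 1500 - sh] for sh in shifts]
grid = [np.array([ix, iy], float) for ix in range(7) for iy in range(7)]
stack = backproject(records, stations, grid, v, dt)
print(grid[int(np.argmax(stack))])  # peak of the stack at the true source
```

The double-correlation variant replaces these pairwise terms with complex, doubly correlated triplet terms and stacks their moduli, which is what focuses the source map relative to this single-correlation baseline.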
Predicting infection risk of airborne foot-and-mouth disease.
Schley, David; Burgin, Laura; Gloster, John
2009-05-06
Foot-and-mouth disease is a highly contagious disease of cloven-hoofed animals, the control and eradication of which is of significant worldwide socio-economic importance. The virus may spread by direct contact between animals or via fomites as well as through airborne transmission, with the latter being the most difficult to control. Here, we consider the risk of infection to flocks or herds from airborne virus emitted from a known infected premises. We show that airborne infection can be predicted quickly and with a good degree of accuracy, provided that the source of virus emission has been determined and reliable geo-referenced herd data are available. A simple model provides a reliable tool for estimating risk from known sources and for prioritizing surveillance and detection efforts. The issue of data information management systems was highlighted as a lesson to be learned from the official inquiry into the UK 2007 foot-and-mouth outbreak: results here suggest that the efficacy of disease control measures could be markedly improved through an accurate livestock database incorporating flock/herd size and location, which would enable tactical as well as strategic modelling.
Application of MEMS Microphone Array Technology to Airframe Noise Measurements
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Shams, Qamar A.; Graves, Sharon S.; Sealey, Bradley S.; Bartram, Scott M.; Comeaux, Toby
2005-01-01
Current generation microphone directional array instrumentation is capable of extracting accurate noise source location and directivity data on a variety of aircraft components, resulting in significant gains in test productivity. However, with this gain in productivity has come the desire to install larger and more complex arrays in a variety of ground test facilities, creating new challenges for the designers of array systems. To overcome these challenges, a research study was initiated to identify and develop hardware and fabrication technologies which could be used to construct an array system exhibiting acceptable measurement performance but at much lower cost and with much simpler installation requirements. This paper describes an effort to fabricate a 128-sensor array using commercially available Micro-Electro-Mechanical System (MEMS) microphones. The MEMS array was used to acquire noise data for an isolated 26%-scale high-fidelity Boeing 777 landing gear in the Virginia Polytechnic Institute and State University Stability Tunnel across a range of Mach numbers. The overall performance of the array was excellent, and major noise sources were successfully identified from the measurements.
Anechoic Chamber test of the Electromagnetic Measurement System ground test unit
NASA Astrophysics Data System (ADS)
Stevenson, L. E.; Scott, L. D.; Oakes, E. T.
1987-04-01
The Electromagnetic Measurement System (EMMS) will acquire data on electromagnetic (EM) environments at key weapon locations on various aircraft certified for nuclear weapons. The high-frequency ground unit of the EMMS consists of an instrumented B61 bomb case that will measure (with current probes) the localized current density resulting from an applied EM field. For this portion of the EMMS, the first system test was performed in the Anechoic Chamber Facility at Sandia National Laboratories, Albuquerque, New Mexico. The EMMS pod was subjected to EM radiation at microwave frequencies of 1, 3, and 10 GHz. At each frequency, the EMMS pod was rotated to many positions relative to the microwave source so that the individual current probes were exposed to direct line-of-sight illumination. The variations between the measured and calculated electric fields for the current probes under direct illumination by the EM source are within a few dB. The results obtained from the anechoic test were better than expected and verify that the high-frequency ground portion of the EMMS will accurately measure the EM environments for which it was designed.
Vehicular sources in acoustic propagation experiments
NASA Technical Reports Server (NTRS)
Prado, Gervasio; Fitzgerald, James; Arruda, Anthony; Parides, George
1990-01-01
One of the most important uses of acoustic propagation models lies in the area of detection and tracking of vehicles. Propagation models are used to compute transmission losses in performance prediction models and to analyze the results of past experiments. Vehicles can also provide the means for cost-effective experiments to measure acoustic propagation conditions over significant ranges. In order to properly correlate the information provided by the experimental data and the propagation models, the following issues must be taken into consideration: the phenomenology of the vehicle noise sources must be understood and characterized; the vehicle's location, or 'ground truth', must be accurately reproduced and synchronized with the acoustic data; and sufficient meteorological data must be collected to support the requirements of the propagation models. The experimental procedures and instrumentation needed to carry out propagation experiments are discussed. Illustrative results are presented for two cases. First, a helicopter was used to measure propagation losses at ranges of 1 to 10 km. Second, a heavy diesel-powered vehicle was used to measure propagation losses in the 300 to 2200 m range.