DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohd, Shukri; Holford, Karen M.; Pullin, Rhys
2014-02-12
Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and delta-T location. The results of the study show that the WTML method produces more accurate location results compared with the TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with the delta-T location method but requires no initial acoustic calibration of the structure.
Evaluation of the accuracy of GPS as a method of locating traffic collisions.
DOT National Transportation Integrated Search
2004-06-01
The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. The analysis s...
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of estimating the source height. Further application of the Levenberg-Marquardt method, with the MUSIC results as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
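None of the entries above include code; as a rough illustration of the kind of non-linear least-squares refinement this abstract describes, here is a minimal hand-rolled Levenberg-Marquardt fit of a 2-D source position and emission time to noise-free arrival times at known sensor positions. All names, the 2-D geometry, the acoustic wave speed, and the damping schedule are my own assumptions for the sketch, not the authors' implementation (which also estimates ground impedance).

```python
import numpy as np

def locate_source_lm(sensors, toas, c=343.0, n_iter=60):
    """Levenberg-Marquardt estimate of a 2-D source position and emission
    time from times of arrival (TOAs) at known sensor positions.

    sensors : (N, 2) sensor coordinates [m]
    toas    : (N,)  observed arrival times [s]
    c       : wave speed [m/s]
    Returns the estimated (x, y) source position.
    """
    sensors = np.asarray(sensors, float)
    toas = np.asarray(toas, float)
    # Initial guess: network centroid, emission at the earliest arrival.
    p = np.array([sensors[:, 0].mean(), sensors[:, 1].mean(), toas.min()])
    lam = 1e-3

    def residuals(p):
        d = np.linalg.norm(sensors - p[:2], axis=1)
        return toas - (p[2] + d / c)   # observed minus predicted

    cost = float(np.sum(residuals(p) ** 2))
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - p[:2], axis=1)
        r = residuals(p)
        # Jacobian of the residuals w.r.t. (x, y, t0).
        J = np.empty((len(toas), 3))
        J[:, 0] = -(p[0] - sensors[:, 0]) / (c * d)
        J[:, 1] = -(p[1] - sensors[:, 1]) / (c * d)
        J[:, 2] = -1.0
        A = J.T @ J
        # Damped Gauss-Newton step; lam interpolates between GN and gradient descent.
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
        trial = p + step
        trial_cost = float(np.sum(residuals(trial) ** 2))
        if trial_cost < cost:           # accept step, relax damping
            p, cost, lam = trial, trial_cost, lam * 0.3
        else:                           # reject step, tighten damping
            lam *= 10.0
    return p[:2]
```

The damping update is the classic Marquardt heuristic: shrink the damping when a step reduces the misfit, inflate it when the step overshoots.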
Microseismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-07-01
At the heart of microseismic event measurement is the task of estimating the locations of microseismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source location methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces strong nonlinearity due to the unknown source locations (space) and functions (time). We developed a source-function-independent FWI of microseismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2015-06-01
The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the earthquake focus location is relatively simple, quantitative estimation of the location accuracy is a challenging task even when a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of the Shannon entropy of the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
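As a toy illustration of the entropy meta-characteristic this entry describes: given a misfit surface on a location grid, one can form the a posteriori PDF and compute its Shannon (differential) entropy as a single scalar summarizing how concentrated the solution is. The Gaussian-likelihood form, the regular 2-D grid, and all names here are my own assumptions for the sketch, not the mine-data analysis itself.

```python
import numpy as np

def posterior_entropy(misfit, cell_area):
    """Shannon (differential) entropy of a gridded a posteriori location
    PDF taken proportional to exp(-misfit), as under a Gaussian likelihood.
    Broader (more uncertain) posteriors give larger entropy."""
    p = np.exp(-(misfit - misfit.min()))   # shift misfit to avoid underflow
    p /= p.sum() * cell_area               # normalize to a probability density
    mass = p * cell_area                   # probability per grid cell
    nz = mass > 0
    return float(-np.sum(mass[nz] * np.log(p[nz])))
```

For an isotropic 2-D Gaussian posterior the differential entropy is 1 + ln(2πσ²), so doubling the location uncertainty σ raises the entropy by ln 4 ≈ 1.39 nats, which is why the entropy works as a compact uncertainty measure even when error statistics are only known up to such a model.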
A test of the reward-value hypothesis.
Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D
2017-03-01
Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Berry, M. L.; Grieme, M.
We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of the ROSD method are multi-fold: i) source location estimates based on four detectors improve accuracy; ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue; and iii) ROSD produces a unique source location estimate, as opposed to the two real roots (if any) of triangulation, and obviates the need to distinguish real from phantom roots during clustering.
NASA Astrophysics Data System (ADS)
Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu
2017-05-01
In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related to the shape of the interference signal envelope (ISE), which is mainly determined by the spectral distribution of the illumination source. For a broadband light with a Gaussian spectral distribution, the ISE is symmetric, so the accurate ZOPD position can be found easily. However, if the spectral distribution of the source is irregular, the ISE becomes asymmetric or develops a complex multi-peak distribution, and WLSI cannot work well with the ZOPD position locating method. To address this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile information is obtained from the relative displacement of the interference signal between different pixels instead of from the conventional ZOPD position locating method. Because all the spectral information of the interference signal (envelope and phase) is utilized, the proposed method not only offers high accuracy, but also achieves accurate profile measurement in cases where the ISE is irregular and the ZOPD position locating method fails. That is to say, the proposed method can effectively eliminate the influence of the source spectrum.
Travel-time source-specific station correction improves location accuracy
NASA Astrophysics Data System (ADS)
Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo
2013-04-01
Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications such as verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge of the 3-D seismic wave velocity structure of the Earth. It is well known that modelling errors in calculated travel times can shift computed epicenters away from the real locations by distances even larger than the statistical error ellipses, regardless of the accuracy of the seismic phase arrival picks. Large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may determine whether an On-Site Inspection (OSI) is triggered: the Treaty establishes that an OSI area cannot exceed 1000 km2, and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for applying the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that epicentral mislocations of the order of 10-20 km, as well as larger mislocations in hypocentral depth, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
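A stripped-down sketch of the core bookkeeping behind station corrections of this kind: average the travel-time residuals (observed minus model-predicted) of well-located calibration events per station, then subtract each station's mean residual from its observed arrivals before relocating. This is my own minimal formulation of the general idea; the paper's actual procedure relies on dense-network relocations, not just residual averaging.

```python
import numpy as np

def station_corrections(residuals_by_station):
    """Source-specific station correction per station: the mean travel-time
    residual (observed minus predicted) over a set of well-located
    calibration events near the source region."""
    return {sta: float(np.mean(r)) for sta, r in residuals_by_station.items()}

def apply_corrections(obs_times, corrections):
    """Subtract each station's correction from its observed arrival time;
    stations without a correction are left unchanged."""
    return {sta: t - corrections.get(sta, 0.0) for sta, t in obs_times.items()}
```

A station whose model travel time is systematically too short (positive residuals) gets a positive correction, so its corrected arrivals no longer drag the epicenter toward or away from it.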
LLNL Location and Detection Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, S C; Harris, D B; Anderson, M L
2003-07-16
We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) develop network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions with known locations to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that reference velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic.
Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.
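To make the detection idea concrete, here is an ordinary matched-filter correlation detector — the simpler single-template cousin of the subspace detectors this entry describes (a subspace detector projects onto several basis waveforms precisely to tolerate the source-time variability noted above). The sliding normalized cross-correlation below, and all names and the synthetic data in it, are my own illustrative assumptions.

```python
import numpy as np

def correlation_detect(stream, template, threshold=0.8):
    """Slide a waveform template along a continuous data stream and return
    (sample offset, correlation coefficient) pairs wherever the normalized
    cross-correlation exceeds the threshold."""
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    hits = []
    for i in range(len(stream) - n + 1):
        w = stream[i:i + n]
        w = w - w.mean()
        denom = np.linalg.norm(w) * t_norm
        if denom == 0:
            continue                      # flat window: undefined correlation
        cc = float(np.dot(w, t) / denom)
        if cc >= threshold:
            hits.append((i, cc))
    return hits
```

Because the statistic is amplitude-normalized, a shot that repeats the template waveform at any size still scores near 1.0, while uncorrelated noise scores near 0.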
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roussel-Dupre, R.; Symbalisty, E.; Fox, C.
2009-08-01
The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but the final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).
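To make the leading-edge/peak distinction concrete, here is a toy version of the two time-tagging schemes and the systematic offset between them on a Gaussian pulse. This is a sketch under my own assumptions (clean pulse, half-maximum threshold); the report's actual algorithms, the dispersive ionospheric channel, and the narrowband receiver response are not modeled.

```python
import numpy as np

def peak_tag(t, s):
    """Time tag at the sample where the waveform peaks."""
    return t[np.argmax(s)]

def leading_edge_tag(t, s, frac=0.5):
    """Time tag where the signal first crosses frac * peak amplitude,
    linearly interpolated between the bracketing samples."""
    thr = frac * s.max()
    i = int(np.argmax(s >= thr))          # first sample at/above threshold
    if i == 0:
        return t[0]
    return t[i - 1] + (thr - s[i - 1]) * (t[i] - t[i - 1]) / (s[i] - s[i - 1])
```

For a Gaussian pulse of width σ, the half-maximum leading edge precedes the peak by σ√(2 ln 2) — exactly the kind of fixed offset that must be removed before mixing the two tag types in one location network, which is the point of the corrections derived in this report.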
Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications
NASA Astrophysics Data System (ADS)
Liang, C.; Yu, Y.
2017-12-01
The analysis of induced seismicity has become a common practice for evaluating the results of hydraulic fracturing treatment. Liang et al. (2016) proposed a joint Source Scanning Algorithm (jSSA) to obtain microseismic event locations and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many respects, but its computational cost is too high for real-time monitoring. In this study, we developed several scanning schemes to reduce computation time. A multi-stage scanning scheme is shown to improve efficiency significantly while retaining accuracy. A series of tests using both real field data and synthetic data evaluated the accuracy of the method and its dependence on noise level, source depth, focal mechanism and other factors. Surface-based arrays provide better constraints on horizontal location errors (<20 m) and angular errors of P axes (within 10 degrees for S/N > 0.5). For sources with varying rakes, dips, strikes and depths, the errors are mostly controlled by the partition of positive and negative polarities among the quadrants: more evenly partitioned polarities yield better results in both locations and focal mechanisms. Nevertheless, even where some focal mechanisms are poorly resolved, the optimized jSSA method can still improve location accuracy significantly. Based on the much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology as seismic arrays become increasingly dense.
Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth
NASA Astrophysics Data System (ADS)
Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.
2017-09-01
With the rapid development of location-based services (LBS), demand for commercial indoor location has been increasing, but the technology is not yet mature: the accuracy of indoor location, the complexity of the algorithm, and the cost of positioning are hard to balance simultaneously, which still restricts the selection and application of a mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor location based on Bluetooth Low Energy. The main steps are: 1) establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve single-point positioning accuracy on the simulation data. The proposed scheme is a process of dynamic knowledge accumulation rather than a single positioning pass. The scheme uses inexpensive equipment and provides a new idea for the theory and methods of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial settings.
NASA Astrophysics Data System (ADS)
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high source position estimation accuracy when a transmitter is in the line of sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers to the unknown locations of transmitters. Estimating transmitter locations from the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time of arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
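A brute-force illustration of the hyperbolic (TDOA) formulation mentioned in this review: each measured time difference constrains the source to a hyperbola, and the source estimate is the point that best satisfies all of them. The grid scan below (with invented coordinates and sound-speed value) is only for illustration; real implementations use the closed-form or iterative least-squares solvers the review surveys.

```python
import numpy as np

def tdoa_grid_locate(sensors, tdoas, c, candidates):
    """Pick the candidate point whose predicted time differences of arrival
    (relative to sensor 0) best match the measured TDOAs in the
    least-squares sense. O(number of candidates) -- illustrative, not fast."""
    sensors = np.asarray(sensors, float)
    tdoas = np.asarray(tdoas, float)
    best, best_cost = None, np.inf
    for p in candidates:
        d = np.linalg.norm(sensors - p, axis=1)
        pred = (d[1:] - d[0]) / c         # hyperbolic constraints vs. sensor 0
        cost = float(np.sum((tdoas - pred) ** 2))
        if cost < best_cost:
            best, best_cost = np.asarray(p, float), cost
    return best
```

A circular (TOA) solver differs only in matching absolute travel times t0 + d/c instead of differences, which is why the two families share the same nonlinear-equation machinery.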
Acoustic localization of triggered lightning
NASA Astrophysics Data System (ADS)
Arechiga, Rene O.; Johnson, Jeffrey B.; Edens, Harald E.; Thomas, Ronald J.; Rison, William
2011-05-01
We use acoustic (3.3-500 Hz) arrays to locate local (<20 km) thunder produced by triggered lightning in the Magdalena Mountains of central New Mexico. The locations of the thunder sources are determined by the array back azimuth and the elapsed time since discharge of the lightning flash. We compare the acoustic source locations with those obtained by the Lightning Mapping Array (LMA) from Langmuir Laboratory, which is capable of accurately locating the lightning channels. To estimate the location accuracy of the acoustic array we performed Monte Carlo simulations and measured the distance (nearest neighbors) between acoustic and LMA sources. For close sources (<5 km) the mean nearest-neighbors distance was 185 m compared to 100 m predicted by the Monte Carlo analysis. For far distances (>6 km) the error increases to 800 m for the nearest neighbors and 650 m for the Monte Carlo analysis. This work shows that thunder sources can be accurately located using acoustic signals.
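The nearest-neighbor comparison used in this entry reduces to a small computation: for each acoustic source, find the closest LMA source and average those distances. A minimal version (with invented points, not the thunder/LMA data):

```python
import numpy as np

def mean_nearest_neighbor(a, b):
    """Mean distance from each point in a to its nearest neighbor in b.
    a, b: (N, D) and (M, D) arrays of point coordinates."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    # Pairwise distance matrix via broadcasting, then the row-wise minimum.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())
```

Note the statistic is asymmetric (a-to-b is not b-to-a), so in a validation like this one the direction of the comparison should be stated.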
Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W
2010-11-15
In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.
NASA Astrophysics Data System (ADS)
Kaufman, Lloyd; Williamson, Samuel J.; Costa Ribeiro, P.
1988-02-01
Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.
Locating very high energy gamma-ray sources with arcminute accuracy
NASA Technical Reports Server (NTRS)
Akerlof, C. W.; Cawley, M. F.; Chantell, M.; Harris, K.; Lawrence, M. A.; Fegan, D. J.; Lang, M. J.; Hillas, A. M.; Jennings, D. G.; Lamb, R. C.
1991-01-01
The angular accuracy of gamma-ray detectors is intrinsically limited by the physical processes involved in photon detection. Although a number of pointlike sources were detected by the COS B satellite, only two have been unambiguously identified by time signature with counterparts at longer wavelengths. By taking advantage of the extended longitudinal structure of VHE gamma-ray showers, measurements in the TeV energy range can pinpoint source coordinates to arcminute accuracy. This has now been demonstrated with new data analysis procedures applied to observations of the Crab Nebula using Cherenkov air shower imaging techniques. With two telescopes in coincidence, the individual event circular probable error will be 0.13 deg. The half-cone angle of the field of view is effectively 1 deg.
Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong
2010-01-01
In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
NASA Astrophysics Data System (ADS)
Sánchez, Daniel; Nieh, James C.; Hénaut, Yann; Cruz, Leopoldo; Vandame, Rémy
Several studies have examined the existence of recruitment communication mechanisms in stingless bees. However, the spatial accuracy of location-specific recruitment has not been examined. Moreover, the location-specific recruitment of reactivated foragers, i.e., foragers that have previously experienced the same food source at a different location and time, has not been explicitly examined. However, such foragers may also play a significant role in colony foraging, particularly in small colonies. Here we report that reactivated Scaptotrigona mexicana foragers can recruit with high precision to a specific food location. The recruitment precision of reactivated foragers was evaluated by placing control feeders to the left and the right of the training feeder (direction-precision tests) and between the nest and the training feeder and beyond it (distance-precision tests). Reactivated foragers arrived at the correct location with high precision: 98.44% arrived at the training feeder in the direction trials (five-feeder fan-shaped array, accuracy of at least ±6° of azimuth at 50 m from the nest), and 88.62% arrived at the training feeder in the distance trials (five-feeder linear array, accuracy of at least ±5 m or ±10% at 50 m from the nest). Thus, S. mexicana reactivated foragers can find the indicated food source at a specific distance and direction with high precision, higher than that shown by honeybees, Apis mellifera, which do not communicate food location at such close distances to the nest.
Bian, Xu; Li, Yibo; Feng, Hao; Wang, Jiaqiang; Qi, Lei; Jin, Shijiu
2015-01-01
This paper proposes a continuous leakage location method based on the ultrasonic array sensor, which is specific to continuous gas leakage in a pressure container with an integral stiffener. This method collects the ultrasonic signals generated from the leakage hole through the piezoelectric ultrasonic sensor array, and analyzes the space-time correlation of every collected signal in the array. Meanwhile, it combines with the method of frequency compensation and superposition in time domain (SITD), based on the acoustic characteristics of the stiffener, to obtain a high-accuracy location result on the stiffener wall. According to the experimental results, the method successfully solves the orientation problem concerning continuous ultrasonic signals generated from leakage sources, and acquires high accuracy location information on the leakage source using a combination of multiple sets of orienting results. The mean value of location absolute error is 13.51 mm on the one-square-meter plate with an integral stiffener (4 mm width; 20 mm height; 197 mm spacing), and the maximum location absolute error is generally within a ±25 mm interval. PMID:26404316
System and method for bullet tracking and shooter localization
Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
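The classic MUSIC step that FINES refines can be illustrated with a minimal sketch: the pseudospectrum peaks where candidate steering vectors are nearly orthogonal to the estimated noise-only subspace. The sketch below uses a simple uniform line array rather than an EEG array manifold, purely to illustrate the subspace projection; the array geometry, source angle, and noise level are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def music_spectrum(R, n_sources, A):
    """Classic MUSIC pseudospectrum: large where candidate steering vectors
    (columns of A) are orthogonal to the estimated noise-only subspace of R."""
    w, v = np.linalg.eigh(R)                      # eigh sorts eigenvalues ascending
    En = v[:, : R.shape[0] - n_sources]           # smallest eigenvectors: noise subspace
    return 1.0 / (np.linalg.norm(En.conj().T @ A, axis=0) ** 2)

# Demo: 8-element half-wavelength line array, one source at 20 degrees.
M, d = 8, 0.5
grid = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
A = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(grid)[None, :])
a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(20.0)))
rng = np.random.default_rng(0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.1 * (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200)))
X = np.outer(a, s) + noise                        # snapshots: signal plus noise
R = X @ X.conj().T / 200                          # sample covariance
est_deg = np.rad2deg(grid[np.argmax(music_spectrum(R, 1, A))])
```

FINES replaces the full noise subspace `En` with a small region-specific vector set; the projection step itself is the same.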
Microseismic response characteristics modeling and locating of underground water supply pipe leak
NASA Astrophysics Data System (ADS)
Wang, J.; Liu, J.
2015-12-01
In traditional methods of pipeline leak location, geophones must be located on the pipe wall. If the exact location of the pipeline is unknown, the leaks cannot be identified accurately. To solve this problem, taking into account the characteristics of the pipeline leak, we propose a continuous random seismic source model and construct geological models to investigate the proposed method for locating underground pipeline leaks. Based on two-dimensional (2D) viscoacoustic equations and the staggered-grid finite-difference (FD) algorithm, the microseismic wave field generated by a leaking pipe is modeled. Cross-correlation analysis and the simulated annealing (SA) algorithm were utilized to obtain the time difference and the leak location. We also analyze and discuss the effect of the number of recorded traces, the survey layout, and the offset and interval of the traces on the accuracy of the estimated location. The preliminary results of the simulation and field experiment indicate that (1) a continuous random source can realistically represent the leak microseismic wave field in a simulation using 2D viscoacoustic equations and a staggered-grid FD algorithm. (2) The cross-correlation method is effective for calculating the time difference of the direct wave relative to the reference trace. However, outside the refraction blind zone, the accuracy of the time difference is reduced by the effects of the refracted wave. (3) The acquisition method of time difference based on microseismic theory and the SA algorithm has great potential for locating leaks in underground pipelines from an array located on the ground surface. Keywords: viscoacoustic finite-difference simulation; continuous random source; simulated annealing algorithm; pipeline leak location
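The cross-correlation step in (2) can be sketched as follows: the time difference of a trace relative to the reference trace is read off the peak lag of their cross-correlation. This is a generic sketch, not the authors' code; the sampling rate and the synthetic "continuous random source" below are assumptions.

```python
import numpy as np

def xcorr_delay(ref, sig, fs):
    """Estimate the arrival delay of `sig` relative to `ref` (seconds)
    from the peak lag of their full cross-correlation."""
    c = np.correlate(sig, ref, mode="full")
    lag = np.argmax(c) - (len(ref) - 1)   # positive lag: sig arrives later
    return lag / fs

# Synthetic test: the same random "continuous" source, one trace delayed 30 samples.
fs = 1000.0
rng = np.random.default_rng(1)
src = rng.standard_normal(500)
ref = np.concatenate([src, np.zeros(100)])                # reference trace
sig = np.concatenate([np.zeros(30), src, np.zeros(70)])   # delayed trace
delay = xcorr_delay(ref, sig, fs)                         # ≈ 0.03 s
```

For a continuous random source the correlation peak is sharp because the source autocorrelation is impulse-like, which is what makes this estimator usable without knowing the source waveform.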
An iterative method for obtaining the optimum lightning location on a spherical surface
NASA Technical Reports Server (NTRS)
Chao, Gao; Qiming, MA
1991-01-01
A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DFs) on a spherical surface. An improvement of this method, which treats the source-to-DF distances as constant, is presented. It is pointed out that using a weight factor of signal strength is not the most ideal method because of the inexact inverse signal strength-distance relation and the inaccurate signal amplitude. An iterative calculation method is presented using the distance from the source to the DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Some computer simulations for a 4-DF system are presented to show the improvement of location through use of the iterative method.
Enhancements to the MCNP6 background source
McMath, Garrett E.; McKinney, Gregg W.
2015-10-19
The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
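One common least-squares estimator of the kind surveyed above can be sketched as Gauss-Newton refinement of a source position from TDOA-derived range differences under line-of-sight conditions. The sensor geometry, starting guess, and noise-free measurements below are illustrative assumptions, not from the paper.

```python
import numpy as np

def tdoa_locate(sensors, rdiff, x0, iters=25):
    """Gauss-Newton solve for a source position from range differences
    (speed of propagation times TDOA) measured relative to sensor 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)            # distances to each sensor
        r = (d[1:] - d[0]) - rdiff                         # hyperbolic residuals
        # Jacobian of each residual: unit vector to sensor i minus unit vector to sensor 0
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]       # Gauss-Newton step
    return x

# Demo: four sensors at the corners of a square, noise-free range differences.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 7.0])
d_true = np.linalg.norm(sensors - true, axis=1)
rdiff = d_true[1:] - d_true[0]
est = tdoa_locate(sensors, rdiff, x0=[5.0, 5.0])
```

With measurement noise, the same loop minimizes the sum of squared hyperbolic residuals; maximum-likelihood variants weight the residuals by the measurement covariance.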
2012-09-01
State Award Nos. DE-AC52-07NA27344/24.2.3.2 and DOS_SIAA-11-AVC/NMA-1 ABSTRACT The Middle East is a tectonically complex and seismically...active region. The ability to accurately locate earthquakes and other seismic events in this region is complicated by tectonics, the uneven...and seismic source parameters show that this activity comes from tectonic events. This work is informed by continuous or event-based regional
Locating the source of spreading in temporal networks
NASA Astrophysics Data System (ADS)
Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Yi, Dongyun
2017-02-01
The topological structure of many real networks changes with time. Thus, locating the sources of a temporal network is a creative and challenging problem, as the enormous size of many real networks makes it unfeasible to observe the state of all nodes. In this paper, we propose an algorithm to solve this problem, named the backward temporal diffusion process. The proposed algorithm calculates the shortest temporal distance to locate the transmission source. We assume that the spreading process can be modeled as a simple diffusion process and by consensus dynamics. To improve the location accuracy, we also adopt four strategies to select which nodes should be observed by ranking their importance in the temporal network. Our paper proposes a highly accurate method for locating the source in temporal networks and is, to the best of our knowledge, a frontier work in this field. Moreover, our framework has important significance for controlling the transmission of diseases or rumors and formulating immediate immunization strategies.
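The shortest-distance idea behind the backward diffusion approach can be sketched on a static graph: score each candidate source by how consistently distances from it explain the observers' arrival times, since the true source leaves a near-constant residual. This simplification drops the temporal-network machinery of the paper; the toy graph and unit-speed spreading below are assumptions for illustration only.

```python
import numpy as np
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from node s in an unweighted graph (dict of adjacency lists)."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, observers, times):
    """Score each candidate source by the variance of (observed arrival time
    minus graph distance); the true source gives a near-constant residual."""
    best, best_score = None, np.inf
    for cand in adj:
        d = bfs_dist(adj, cand)
        resid = [times[o] - d[o] for o in observers]
        score = np.var(resid)
        if score < best_score:
            best, best_score = cand, score
    return best

# Toy graph: spreading starts at node 0 and reaches observers 3 and 4 at time 2.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}
src = locate_source(adj, observers=[3, 4], times={3: 2, 4: 2})   # → 0
```

In the temporal setting, `bfs_dist` is replaced by shortest *temporal* distances that respect the time-ordering of edges, which is the paper's backward temporal diffusion process.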
Method and system for determining radiation shielding thickness and gamma-ray energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klann, Raymond T.; Vilim, Richard B.; de la Barrera, Sergio
2015-12-15
A system and method for determining the shielding thickness of a detected radiation source. The gamma ray spectrum of a radiation detector is utilized to estimate the shielding between the detector and the radiation source. The determination of the shielding may be used to adjust the information from known source-localization techniques to provide improved performance and accuracy of locating the source of radiation.
Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry
NASA Astrophysics Data System (ADS)
Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki
2015-08-01
In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply the Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameter of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available from 500 sources, the expected accuracy of R0 and Θ0 is ∼1% or better, and parameters related to the spiral structure can be constrained with an error of 10% or less. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate different dynamical models of the Galaxy.
Deep space target location with Hubble Space Telescope (HST) and Hipparcos data
NASA Technical Reports Server (NTRS)
Null, George W.
1988-01-01
Interplanetary spacecraft navigation requires accurate a priori knowledge of target positions. A concept is presented for attaining improved target ephemeris accuracy using two future Earth-orbiting optical observatories, the European Space Agency (ESA) Hipparcos observatory and the NASA Hubble Space Telescope (HST). Assuming nominal observatory performance, the Hipparcos data reduction will provide an accurate global star catalog, and HST will provide a capability for accurate angular measurements of stars and solar system bodies. The target location concept employs HST to observe solar system bodies relative to Hipparcos catalog stars and to determine the orientation (frame tie) of these stars to compact extragalactic radio sources. The target location process is described, the major error sources discussed, the potential target ephemeris error predicted, and mission applications identified. Preliminary results indicate that ephemeris accuracy comparable to the errors in individual Hipparcos catalog stars may be possible with a more extensive HST observing program. Possible future ground and space-based replacements for Hipparcos and HST astrometric capabilities are also discussed.
Classification of event location using matched filters via on-floor accelerometers
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Malladi, V. V. N. Sriram; Alajlouni, Sa'ed; Tarazaga, Pablo A.
2017-04-01
Recent years have shown prolific advancements in smart infrastructures, allowing buildings of the modern world to interact with their occupants. One of the sought-after attributes of smart buildings is the ability to provide unobtrusive, indoor localization of occupants. The ability to locate occupants indoors can provide a broad range of benefits in areas such as security, emergency response, and resource management. Recent research has shown promising results in occupant building localization, although there is still significant room for improvement. This study presents a passive, small-scale localization system using accelerometers placed around the edges of a small area in an active building environment. The area is discretized into a grid of small squares, and vibration measurements are processed using a pattern matching approach that estimates the location of the source. Vibration measurements are produced with ball-drops, hammer-strikes, and footsteps as the sources of the floor excitation. The developed approach uses matched filters based on a reference data set, and the location is classified using a nearest-neighbor search. This approach detects the appropriate location of impact-like sources, i.e., ball-drops and hammer-strikes, with 100% accuracy. However, this accuracy drops to 56% for footsteps, with the average localization results being within 0.6 m (α = 0.05) of the true source location. While requiring a reference data set can make this method difficult to implement on a large scale, it may be used to provide accurate localization abilities in areas where training data is readily obtainable. This exploratory work seeks to examine the feasibility of the matched filter and nearest neighbor search approach for footstep and event localization in a small, instrumented area within a multi-story building.
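The matched-filter-plus-nearest-neighbor idea can be sketched as follows: correlate the measured signal against a reference template for each grid cell and return the best-matching cell's label. The synthetic two-cell "grid" and random templates below stand in for the paper's reference data set and are assumptions, not the authors' data.

```python
import numpy as np

def classify(signal, templates):
    """Nearest-neighbor search over matched-filter outputs: return the label
    of the reference template whose normalized correlation peak is largest."""
    best_label, best_score = None, -np.inf
    for label, t in templates.items():
        peak = np.max(np.correlate(signal, t, mode="full"))
        score = peak / (np.linalg.norm(t) * np.linalg.norm(signal))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Synthetic grid of two cells with random reference templates; the measurement
# is a delayed, slightly noisy copy of template A.
rng = np.random.default_rng(2)
tA = rng.standard_normal(100)
tB = rng.standard_normal(100)
sig = np.concatenate([np.zeros(10), tA]) + 0.05 * rng.standard_normal(110)
label = classify(sig, {"A": tA, "B": tB})
```

Taking the correlation *peak* makes the classifier tolerant of unknown arrival time, which matters when the excitation instant (a footstep) is not measured directly.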
Acoustic Location of Lightning Using Interferometric Techniques
NASA Astrophysics Data System (ADS)
Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.
2013-12-01
Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations which compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize 3 outer microphones surrounding a 4th microphone located at the center. The location is computed by band-passing the signal to remove noise and then cross-correlating the outer 3 microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio frequency interferometers. Both acoustic location systems and radio frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas. Both types of systems then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic frequency and audio frequency acoustic emissions from lightning.
Preliminary results show that interferometric techniques have good potential for improving the lightning location accuracy and detection efficiency of acoustic arrays.
Swift Burst Alert Telescope (BAT) Instrument Response
NASA Technical Reports Server (NTRS)
Parsons, A.; Hullinger, D.; Markwardt, C.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Krimm, H.; Tueller, J.; Fenimore, E.; Palmer, D.
2004-01-01
The Burst Alert Telescope (BAT), a large coded aperture instrument with a wide field-of-view (FOV), provides the gamma-ray burst triggers and locations for the Swift Gamma-Ray Burst Explorer. In addition to providing this imaging information, BAT will perform a 15 keV - 150 keV all-sky hard x-ray survey based on the serendipitous pointings resulting from the study of gamma-ray bursts and will also monitor the sky for transient hard x-ray sources. For BAT to provide spectral and photometric information for the gamma-ray bursts, the transient sources and the all-sky survey, the BAT instrument response must be determined to an increasingly greater accuracy. In this talk, we describe the BAT instrument response as determined to an accuracy suitable for gamma-ray burst studies. We will also discuss the public data analysis tools developed to calculate the BAT response to sources at different energies and locations in the FOV. The level of accuracy required for the BAT instrument response used for the hard x-ray survey is significantly higher because this response must be used in the iterative clean algorithm for finding fainter sources. Because the bright sources add a lot of coding noise to the BAT sky image, fainter sources can be seen only after the counts due to the bright sources are removed. The better we know the BAT response, the lower the noise in the cleaned spectrum and thus the more sensitive the survey. Since the BAT detector plane consists of 32768 individual, 4 mm square CZT gamma-ray detectors, the most accurate BAT response would include 32768 individual detector response functions to separate mask modulation effects from differences in detector efficiencies! We describe our continuing work to improve the accuracy of the BAT instrument response and will present the current results of Monte Carlo simulations as well as BAT ground calibration data.
Localization of diffusion sources in complex networks with sparse observations
NASA Astrophysics Data System (ADS)
Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng
2018-04-01
Locating sources in a large network is of paramount importance to reduce the spreading of disruptive behavior. Based on the backward diffusion-based method and integer programming, we propose an efficient approach to locate sources in complex networks with limited observers. The results on model networks and empirical networks demonstrate that, for a certain fraction of observers, the accuracy of our method for source localization improves as the network size increases. Moreover, compared with the previous method (the maximum-minimum method), the performance of our method is much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust against noisy environments and strategies of choosing observers.
NASA Astrophysics Data System (ADS)
Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio
2005-10-01
This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.
A study on locating the sonic source of sinusoidal magneto-acoustic signals using a vector method.
Zhang, Shunqi; Zhou, Xiaoqing; Ma, Ren; Yin, Tao; Liu, Zhipeng
2015-01-01
Methods based on the magnetic-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The continuous wave method, which is commonly used, can only detect the current amplitude, not the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, low measurement accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magnetic-acoustic signal based on the continuous sine wave mode. This study includes theoretical modeling of the vector method, simulations of the line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contained the location information of the sonic source. The amplitude and phase obeyed the vector theory in the complex plane. This study sets a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids in studying biological current detection and reconstruction based on the magneto-acoustic effect.
Geolocation and Pointing Accuracy Analysis for the WindSat Sensor
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.
2006-01-01
Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, X; Lei, Y; Zheng, D
2016-06-15
Purpose: High Dose Rate (HDR) brachytherapy poses a special challenge to radiation safety and quality assurance (QA) due to its high radioactivity, and it is thus critical to verify the HDR source location and its radioactive strength. This study demonstrates a new method for measuring HDR source location and radioactivity utilizing thermal imaging. A potential application would relate to HDR QA and safety improvement. Methods: Heating effects by an HDR source were studied using Finite Element Analysis (FEA). Thermal cameras were used to visualize an HDR source inside a plastic applicator made of polyvinylidene difluoride (PVDF). Using different source dwell times, correlations between the HDR source strength and heating effects were studied, thus establishing potential daily QA criteria using thermal imaging. Results: For an Ir-192 source with a radioactivity of 10 Ci, the decay-induced heating power inside the source is ∼13.3 mW. After the HDR source was extended into the PVDF applicator and reached thermal equilibrium, thermal imaging visualized the temperature gradient of 10 K/cm along the PVDF applicator surface, which agreed with FEA modeling. For Ir-192 source activities ranging from 4.20-10.20 Ci, thermal imaging could verify source activity with an accuracy of 6.3% with a dwell time of 10 sec, and an accuracy of 2.5% with 100 sec. Conclusion: Thermal imaging is a feasible tool to visualize HDR source dwell positions and verify source integrity. Patient safety and treatment quality will be improved by integrating thermal measurements into HDR QA procedures.
NASA Astrophysics Data System (ADS)
Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt
2018-03-01
Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries' orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M_tot (10 M_tot).
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three dimensional (3D) space. Locations of acoustic sources are indicated by the Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal to noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance and frequency on spatial resolution and accuracy of source localizations. Based on these results, a simple device that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera is fabricated. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it is behind the measurement microphones. Practical limitations of this method are discussed.
An FBG acoustic emission source locating system based on PHAT and GA
NASA Astrophysics Data System (ADS)
Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun
2017-09-01
Acoustic emission location technology for structural health monitoring is important for ensuring the continuous, safe operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT) weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. Twenty points are tested on a marble plate surface, and the results show that the absolute locating error is within 10 mm, demonstrating the accuracy of this locating method.
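The PHAT-weighted generalized cross-correlation step can be sketched minimally: whitening the cross-spectrum discards amplitude information so only the phase, i.e. the delay, remains. The sampling rate, the synthetic broadband burst, and the 25-sample delay below are illustrative assumptions; sub-sample interpolation and the GA solve of the locating equations are omitted.

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Integer-sample delay of y relative to x via PHAT-weighted
    generalized cross-correlation (positive = y arrives later)."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = np.conj(X) * Y
    R /= np.abs(R) + 1e-12          # PHAT weighting: keep only the phase
    cc = np.fft.irfft(R, n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:              # map wrapped indices to negative lags
        shift -= n
    return shift / fs

fs = 10_000
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)       # broadband AE-like burst
delay = 25                          # samples
x = s
y = np.concatenate([np.zeros(delay), s[:-delay]])
tau = gcc_phat(x, y, fs)
```

The whitening makes the correlation peak sharp even for reverberant or band-limited signals, which is why PHAT weighting is commonly preferred over plain cross-correlation for arrival-time-difference extraction.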
Cross-coherent vector sensor processing for spatially distributed glider networks.
Nichols, Brendan; Sabra, Karim G
2015-09-01
Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces additional local sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.
Towards an accurate real-time locator of infrasonic sources
NASA Astrophysics Data System (ADS)
Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.
2017-11-01
Infrasonic signals propagate from an atmospheric source through media with stochastic, rapidly space-varying conditions. Hence, their travel time, their amplitude at sensor recordings and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the search for the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of an a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a reduced computational load. We critically analyse previous algorithms and propose several new ones. First, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm can be among the most accurate, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). The other is the outcome of previous findings of linear mean travel times for the first four infrasonic phases in overlapping consecutive distance ranges.
This stochastic model is extended here to regional distances of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel-time model and range-dependent probability distributions of the phase arrival-time picks. To illustrate the improvements in both computation time and location accuracy, we compare location results for the new algorithms, previously published BISL-type algorithms and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and a statistical experiment using a database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR) recorded during the US summer season at USArray transportable seismic stations when they were near the site between 2006 and 2008.
System and method for clock synchronization and position determination using entangled photon pairs
NASA Technical Reports Server (NTRS)
Shih, Yanhua (Inventor)
2010-01-01
A system and method for clock synchronization and position determination using entangled photon pairs is provided. The present invention relies on the measurement of the second order correlation function of entangled states. Photons from an entangled photon source travel one-way to the clocks to be synchronized. By analyzing photon registration time histories generated at each clock location, the entangled states allow for high accuracy clock synchronization as well as high accuracy position determination.
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
To take full advantage of SAR images, it is necessary to obtain high-precision geolocation for each image; this underpins accurate geometric correction and the extraction of effective mapping information. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims to solve the range-Doppler (RD) model. Tests will be conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high-precision orthophoto, results indicate that an accuracy of 50 m is attainable with this method. Error sources will be analyzed, and some recommendations for improving image location accuracy in future spaceborne SARs will be given.
Combined mine tremors source location and error evaluation in the Lubin Copper Mine (Poland)
NASA Astrophysics Data System (ADS)
Leśniak, Andrzej; Pszczoła, Grzegorz
2008-08-01
A modified method of mine tremor location used in the Lubin Copper Mine is presented in this paper. In mines where intensive exploitation is carried out, a high-accuracy source location technique is usually required. The flatness of the geophone array, the complex geological structure of the rock mass and intense exploitation make location results ambiguous in such mines. In the present paper an effective method of source location and location-error evaluation is presented, combining data from two different geophone arrays. The first consists of uniaxial geophones distributed over the whole mine area. The second is installed in one of the mining panels and consists of triaxial geophones. Using the data obtained from the triaxial geophones increases the precision of the vertical hypocentre coordinate. The presented two-step location procedure combines standard location methods: P-wave directions and P-wave arrival times. The efficiency of the algorithm was tested using computer simulations. The algorithm is fully non-linear and was tested on a multilayered rock-mass model of the Lubin Copper Mine, showing better computational efficiency than the traditional P-wave arrival-time location algorithm. In this paper we present the complete procedure that effectively solves the non-linear location problems, i.e. mine tremor location and evaluation of error propagation.
Recollection can be Weak and Familiarity can be Strong
Ingram, Katherine M.; Mickes, Laura; Wixted, John T.
2012-01-01
The Remember/Know procedure is widely used to investigate recollection and familiarity in recognition memory, but almost all of the results obtained using that procedure can be readily accommodated by a unidimensional model based on signal-detection theory. The unidimensional model holds that Remember judgments reflect strong memories (associated with high confidence, high accuracy, and fast reaction times), whereas Know judgments reflect weaker memories (associated with lower confidence, lower accuracy, and slower reaction times). Although this is invariably true on average, a new two-dimensional account (the Continuous Dual-Process model) suggests that Remember judgments made with low confidence should be associated with lower old/new accuracy, but higher source accuracy, than Know judgments made with high confidence. We tested this prediction – and found evidence to support it – using a modified Remember/Know procedure in which participants were first asked to indicate a degree of recollection-based or familiarity-based confidence for each word presented on a recognition test and were then asked to recollect the color (red or blue) and screen location (top or bottom) associated with the word at study. For familiarity-based decisions, old/new accuracy increased with old/new confidence, but source accuracy did not (suggesting that stronger old/new memory was supported by higher degrees of familiarity). For recollection-based decisions, both old/new accuracy and source accuracy increased with old/new confidence (suggesting that stronger old/new memory was supported by higher degrees of recollection). These findings suggest that recollection and familiarity are continuous processes and that participants can indicate which process mainly contributed to their recognition decisions. PMID:21967320
NASA Astrophysics Data System (ADS)
Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys
2016-05-01
An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation.
NASA Astrophysics Data System (ADS)
Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji
2017-12-01
Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions; however, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
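The EOF-based selection of initial observation points can be sketched as follows: build an ensemble matrix of simulated responses (locations × source scenarios), take its SVD, and pick the locations where the leading spatial modes attain their extrema. The 1-D toy grid, the Gaussian "Green's functions" and the five retained modes below are illustrative assumptions; the paper uses both maxima and minima of each mode and follows up with a mesh adaptive direct search, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: rows = candidate observation locations on a 1-D
# grid, columns = simulated source scenarios.
n_loc, n_src = 200, 30
grid = np.linspace(0.0, 1.0, n_loc)
centers = rng.uniform(0.0, 1.0, n_src)
A = np.exp(-((grid[:, None] - centers[None, :]) ** 2) / 0.01)  # toy responses

# EOF analysis of the scenario ensemble: spatial modes = left singular vectors
U, sv, Vt = np.linalg.svd(A - A.mean(axis=1, keepdims=True), full_matrices=False)

def eof_extrema_points(U, n_modes):
    """Initial observation points: index of the absolute extremum of each
    leading EOF spatial mode, duplicates removed."""
    picks = {int(np.argmax(np.abs(U[:, k]))) for k in range(n_modes)}
    return sorted(picks)

points = eof_extrema_points(U, n_modes=5)
```

The intuition is that the mode extrema are where the ensemble variance between source scenarios is concentrated, so a gauge placed there discriminates best between candidate slip distributions.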
Modeling Extra-Long Tsunami Propagation: Assessing Data, Model Accuracy and Forecast Implications
NASA Astrophysics Data System (ADS)
Titov, V. V.; Moore, C. W.; Rabinovich, A.
2017-12-01
Detecting and modeling tsunamis propagating tens of thousands of kilometers from the source is a formidable scientific challenge that may seem to satisfy only scientific curiosity. However, such analyses provide valuable insight into tsunami propagation dynamics and model accuracy, and have important implications for tsunami forecasting. The Mw = 9.3 megathrust earthquake of December 26, 2004 off the coast of Sumatra generated a tsunami that devastated Indian Ocean coastlines and spread into the Pacific and Atlantic oceans. The tsunami was recorded by a great number of coastal tide gauges, including some located 15,000-25,000 km from the source area. To date, it is still the farthest instrumentally detected tsunami. The data from these instruments throughout the world's oceans enabled estimation of various statistical parameters and the energy decay of this event. High-resolution records of this tsunami from DARTs 32401 (offshore of northern Chile), 46405 and NeMO (both offshore of the US West Coast), combined with mainland tide gauge measurements, enabled us to examine far-field characteristics of the 2004 tsunami in the Pacific Ocean and to compare the results of global numerical simulations with the observations. Despite their small heights (less than 2 cm at deep-ocean locations), the records demonstrated consistent spatial and temporal structure. The numerical model described well the frequency content, amplitudes and general structure of the observed waves at deep-ocean and coastal gauges. We present analysis of the measurements and comparison with model data to discuss implications for tsunami forecast accuracy. Model study at such extreme distances from the tsunami source and at extra-long times after the event is an attempt to find accuracy bounds for tsunami models and accuracy limitations of model use for forecasting. We discuss the results as they apply to tsunami model forecasting and tsunami modeling in general.
Locating Microseism Sources Using Spurious Arrivals in Intercontinental Noise Correlations
NASA Astrophysics Data System (ADS)
Retailleau, Lise; Boué, Pierre; Stehly, Laurent; Campillo, Michel
2017-10-01
The accuracy of Green's functions retrieved from seismic noise correlations in the microseism frequency band is limited by the uneven distribution of microseism sources at the surface of the Earth. As a result, correlation functions are often biased as compared to the expected Green's functions, and they can include spurious arrivals. These spurious arrivals are seismic arrivals that are visible on the correlation and do not belong to the theoretical impulse response. In this article, we propose to use Rayleigh wave spurious arrivals detected on correlation functions computed between European and United States seismic stations to locate microseism sources in the Atlantic Ocean. We perform a slant stack on a time distance gather of correlations obtained from an array of stations that comprises a regional deployment and a distant station. The arrival times and the apparent slowness of the spurious arrivals lead to the location of their source, which is obtained through a grid search procedure. We discuss improvements in the location through this methodology as compared to classical back projection of microseism energy. This method is interesting because it only requires an array and a distant station on each side of an ocean, conditions that can be met relatively easily.
Localization of focused-ultrasound beams in a tissue phantom, using remote thermocouple arrays.
Hariharan, Prasanna; Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Nagaraja, Srinidhi; Myers, Matthew R
2014-12-01
In focused-ultrasound procedures such as vessel cauterization or clot lysis, targeting accuracy is critical. To investigate the targeting accuracy of the focused-ultrasound systems, tissue phantoms embedded with thermocouples can be employed. This paper describes a method that utilizes an array of thermocouples to localize the focused ultrasound beam. All of the thermocouples are located away from the beam, so that thermocouple artifacts and sensor interference are minimized. Beam propagation and temperature rise in the phantom are simulated numerically, and an optimization routine calculates the beam location that produces the best agreement between the numerical temperature values and those measured with thermocouples. The accuracy of the method was examined as a function of the array characteristics, including the number of thermocouples in the array and their orientation. For exposures with a 3.3-MHz source, the remote-thermocouple technique was able to predict the focal position to within 0.06 mm. Once the focal location is determined using the localization method, temperatures at desired locations (including the focus) can be estimated from remote thermocouple measurements by curve fitting an analytical solution to the heat equation. Temperature increases in the focal plane were predicted to within 5% agreement with measured values using this method.
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works of other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken as a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similarly close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75.
The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h and 0.5-0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
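Of the three TSMs, the PSCF is the simplest to sketch: each grid cell's value is the count of trajectory endpoints belonging to above-threshold receptor concentrations divided by the count of all endpoints in that cell. The 1-D cell indexing and the toy trajectories below are illustrative assumptions, not the study's geographical grid.

```python
import numpy as np

def pscf(traj_cells, concentrations, threshold, n_cells):
    """Potential Source Contribution Function on an indexed cell grid:
    PSCF(cell) = m / n, where n counts all trajectory endpoints falling
    in the cell and m counts only endpoints of trajectories that arrived
    at the receptor with an above-threshold concentration."""
    n = np.zeros(n_cells)
    m = np.zeros(n_cells)
    for cells, c in zip(traj_cells, concentrations):
        for cell in cells:
            n[cell] += 1
            if c > threshold:
                m[cell] += 1
    return np.where(n > 0, m / np.maximum(n, 1), 0.0)

# Toy example: trajectories through cell 3 arrive with high concentrations,
# so cell 3 is flagged as the likely source region.
trajs = [[0, 1, 3], [2, 3], [0, 2], [1, 2]]
conc = [9.0, 8.0, 1.0, 1.2]
field = pscf(trajs, conc, threshold=5.0, n_cells=5)
```

CF and CWT follow the same bookkeeping but weight each endpoint by the measured concentration rather than thresholding it.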
Douk, Hamid Shafaei; Aghamiri, Mahmoud Reza; Ghorbani, Mahdi; Farhood, Bagher; Bakhshandeh, Mohsen; Hemmati, Hamid Reza
2018-01-01
The aim of this study is to evaluate the accuracy of the inverse square law (ISL) method for determining the location of the virtual electron source (S_Vir) in a Siemens Primus linac. Several experimental methods have been presented for determining virtual and effective electron source locations, such as the Full Width at Half Maximum (FWHM), Multiple Coulomb Scattering (MCS), Multi-Pinhole Camera (MPC) and Inverse Square Law (ISL) methods; of these, the ISL method is the most commonly used. Firstly, the Siemens Primus linac was simulated using the MCNPX Monte Carlo code. Then, using dose profiles obtained from the Monte Carlo simulations, the location of S_Vir was calculated for 5, 7, 8, 10, 12 and 14 MeV electron energies and 10 cm × 10 cm, 15 cm × 15 cm, 20 cm × 20 cm and 25 cm × 25 cm field sizes. Additionally, the location of S_Vir was obtained by the ISL method for the same electron energies and field sizes. Finally, the values obtained by the ISL method were compared to the values resulting from the Monte Carlo simulation. The findings indicate that the calculated S_Vir values depend on beam energy and field size. For a given energy, the distance of S_Vir increases with field size in most cases. Furthermore, for a given applicator, the distance of S_Vir increases with electron energy in most cases. The variation of S_Vir with field size at a fixed energy is greater than its variation with electron energy at a fixed field size. According to the results, the ISL method can be considered a good method for calculating the S_Vir location at higher electron energies (14 MeV).
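The ISL determination of S_Vir can be sketched as a straight-line fit: for a point-like virtual source, sqrt(D(0)/D(g)) grows linearly with the extra air gap g, and the inverse of the fitted slope is the virtual source-to-surface distance. The 95 cm distance and the ideal inverse-square doses below are illustrative assumptions, not measured Siemens Primus data.

```python
import numpy as np

def virtual_source_distance(gaps, doses):
    """ISL estimate of the virtual source distance: for a point-like
    source, sqrt(D(0) / D(g)) = 1 + g / SSD_vir is linear in the extra
    gap g, so SSD_vir is the inverse of the fitted slope."""
    y = np.sqrt(doses[0] / np.asarray(doses))
    slope, _intercept = np.polyfit(gaps, y, 1)
    return 1.0 / slope

# Synthetic check with an ideal inverse-square falloff from 95 cm
ssd_true = 95.0
gaps = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # extra air gap, cm
doses = 1.0 / (ssd_true + gaps) ** 2
est = virtual_source_distance(gaps, doses)
```

Real electron beams deviate from a point-like source through in-air scatter, which is precisely why the study benchmarks the ISL fit against Monte Carlo simulation across energies and field sizes.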
Delivery and application of precise timing for a traveling wave powerline fault locator system
NASA Technical Reports Server (NTRS)
Street, Michael A.
1990-01-01
The Bonneville Power Administration (BPA) has successfully operated an in-house developed powerline fault locator system since 1986. The BPA fault locator system consists of remotes installed at cardinal power transmission line system nodes and a central master which polls the remotes for traveling wave time-of-arrival data. A power line fault produces a fast rise-time traveling wave which emanates from the fault point and propagates throughout the power grid. The remotes time-tag the traveling wave leading edge as it passes through the power system cardinal substation nodes. A synchronizing pulse transmitted via the BPA analog microwave system on a wideband channel synchronizes the time-tagging counters in the remote units to an accuracy of better than one microsecond. The remote units correct the raw time tags for synchronizing pulse propagation delay and return these corrected values to the fault locator master. The master then calculates the location of the power system disturbance source using the collected time tags. The system design objective is a fault location accuracy of 300 meters. BPA's fault locator system operation, error-producing phenomena, and method of distributing precise timing are described.
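The master's distance calculation from the corrected time tags can be sketched with the classical two-ended traveling-wave formula. The line length, the wave speed (roughly the speed of light on an overhead line), and the arrival times below are illustrative assumptions, not BPA system values.

```python
def fault_distance(line_length_m, t_a, t_b, wave_speed_mps=2.95e8):
    """Two-ended traveling-wave fault location.
    The surge from the fault reaches terminals A and B after d_A/v and
    d_B/v with d_A + d_B = L, so d_A = (L + v * (t_a - t_b)) / 2."""
    return 0.5 * (line_length_m + wave_speed_mps * (t_a - t_b))

# Hypothetical 100 km line with a fault 30 km from terminal A
L, v = 100e3, 2.95e8
t0 = 1.0                                  # fault instant (synchronized clocks)
t_a, t_b = t0 + 30e3 / v, t0 + 70e3 / v   # time-tagged wavefront arrivals
d = fault_distance(L, t_a, t_b, v)
```

With a one-microsecond synchronization accuracy, the distance uncertainty from timing alone is roughly v × 1 µs / 2 ≈ 150 m, consistent with the 300 m design objective.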
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
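The design criterion, the expected relative entropy (information gain), can be sketched with a nested Monte Carlo estimator on a linear-Gaussian toy problem. The prior and the two candidate noise levels standing in for "sampling locations" are illustrative assumptions; the sparse-grid surrogate and the MCMC estimation step are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_info_gain(sigma_prior, sigma_noise, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of the expected relative entropy
    E_y[KL(posterior || prior)] = E_{theta,y}[log p(y|theta) - log p(y)]
    for theta ~ N(0, sigma_prior^2) and y = theta + N(0, sigma_noise^2)."""
    theta = rng.normal(0.0, sigma_prior, n_outer)
    y = theta + rng.normal(0.0, sigma_noise, n_outer)
    norm = np.log(sigma_noise * np.sqrt(2.0 * np.pi))
    log_lik = -0.5 * ((y - theta) / sigma_noise) ** 2 - norm
    th_inner = rng.normal(0.0, sigma_prior, n_inner)   # fresh prior draws
    log_ev = np.log(np.mean(
        np.exp(-0.5 * ((y[:, None] - th_inner[None, :]) / sigma_noise) ** 2 - norm),
        axis=1))
    return float(np.mean(log_lik - log_ev))

# A design yielding a more precise measurement should be more informative
eig_precise = expected_info_gain(1.0, 0.5)
eig_noisy = expected_info_gain(1.0, 2.0)
```

In the full problem, the likelihood evaluations inside the estimator are exactly the expensive transport-model solves that the sparse-grid surrogate replaces.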
Drive-by large-region acoustic noise-source mapping via sparse beamforming tomography.
Tuna, Cagdas; Zhao, Shengkui; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2016-10-01
Environmental noise is a risk factor for human physical and mental health, demanding an efficient large-scale noise-monitoring scheme. The current technology, however, involves extensive sound pressure level (SPL) measurements at a dense grid of locations, making it impractical on a city-wide scale. This paper presents an alternative approach using a microphone array mounted on a moving vehicle to generate two-dimensional acoustic tomographic maps that yield the locations and SPLs of the noise-sources sparsely distributed in the neighborhood traveled by the vehicle. The far-field frequency-domain delay-and-sum beamforming output power values computed at multiple locations as the vehicle drives by are used as tomographic measurements. The proposed method is tested with acoustic data collected by driving an electric vehicle with a rooftop-mounted microphone array along a straight road next to a large open field, on which various pre-recorded noise-sources were produced by a loudspeaker at different locations. The accuracy of the tomographic imaging results demonstrates the promise of this approach for rapid, low-cost environmental noise-monitoring.
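The tomographic measurements above, delay-and-sum beamforming output powers, can be sketched narrowband: steer to each candidate grid point by compensating the propagation phase at each microphone and measure the coherent output power. The 2-D four-microphone layout, the 500 Hz tone, and the grid below are illustrative assumptions; the paper's far-field steering and multi-position drive-by aggregation are omitted.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def das_power(mics, sig_f, freq, point):
    """Narrowband delay-and-sum power steered to one candidate point:
    align the per-microphone phases expected for that point, then sum."""
    k = 2 * np.pi * freq / C
    d = np.linalg.norm(mics - point, axis=1)
    steer = np.exp(-1j * k * d)               # expected phase at each mic
    return np.abs(np.vdot(steer, sig_f)) ** 2 / len(d) ** 2

freq = 500.0
mics = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
src = np.array([2.0, 1.5])
k = 2 * np.pi * freq / C
sig_f = np.exp(-1j * k * np.linalg.norm(mics - src, axis=1))  # unit-amplitude tone

grid = np.array([[x, 1.5] for x in np.arange(0.0, 4.01, 0.5)])
powers = np.array([das_power(mics, sig_f, freq, g) for g in grid])
best = grid[np.argmax(powers)]
```

Collecting such power maps at many vehicle positions and solving a sparse inverse problem over the shared grid is what turns the individual beamformer outputs into a tomographic noise-source map.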
Bian, Xu; Zhang, Yu; Li, Yibo; Gong, Xiaoyue; Jin, Shijiu
2015-01-01
This paper proposes a time-space domain correlation-based method for gas leakage detection and location. It acquires the propagated signal on the skin of the plate by using a piezoelectric acoustic emission (AE) sensor array. The signal generated from the gas leakage hole (whose diameter is less than 2 mm) is continuous in time. By collecting and analyzing signals from different sensor positions in the array, the correlation among those signals in the time-space domain can be obtained, and the direction from the sensor array to the leakage source can be calculated. The method successfully solves the real-time orientation problem for continuous ultrasonic signals generated by leakage sources (a single orientation takes about 15 s) and acquires high-accuracy location information by combining multiple sets of orientation results. According to the experimental results, the mean absolute location error is 5.83 mm on a one-square-meter plate, and the maximum location error is generally within a ±10 mm interval. Meanwhile, the error variance is less than 20.17. PMID:25860070
Microseismic event location by master-event waveform stacking
NASA Astrophysics Data System (ADS)
Grigoli, F.; Cesca, S.; Dahm, T.
2016-12-01
Waveform stacking location methods are nowadays extensively used to monitor induced seismicity associated with several underground industrial activities, such as mining, oil and gas production and geothermal energy exploitation. In the last decade a significant effort has been made to develop or improve methodologies able to perform automated seismological analysis for weak events at a local scale. This effort was accompanied by the improvement of monitoring systems, resulting in an increasing number of large microseismicity catalogs. The analysis of microseismicity is challenging because of the large number of recorded events, often characterized by a low signal-to-noise ratio. A significant limitation of the traditional location approaches is that automated picking is often done on each seismogram individually, making little or no use of the coherency information between stations. To improve on the traditional location methods, alternative approaches have recently been proposed. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Their main advantage lies in their robustness even when the recorded waveforms are very noisy. On the other hand, like any other location method, the location performance strongly depends on the accuracy of the available velocity model: with inaccurate velocity models, location results can be affected by large errors. Here we introduce a new automated waveform stacking location method which is less dependent on knowledge of the velocity model and presents several benefits that improve the location accuracy: 1) it accounts for phase delays due to local site effects, e.g. surface topography or variable sediment thickness; 2) theoretical velocity models are only used to estimate travel times within the source volume, and not along the whole source-sensor path.
We finally compare the location results for both synthetics and real data with those obtained by using classical waveforms stacking approaches.
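The core of a waveform stacking location scheme can be sketched as follows. This is a minimal 2D illustration with a constant-velocity model and our own function names, not the paper's method (which additionally handles site-specific phase delays): each candidate grid node implies a set of travel times, the traces are sampled at those times and summed, and the node with the largest stacked amplitude is taken as the source.

```python
import numpy as np

def stack_locate(traces, t0s, dt, stations, grid, v):
    """Delay-and-stack location: for each candidate grid node, sample each
    trace at its predicted travel time and sum; the node with the maximum
    stacked amplitude is taken as the source location."""
    best, best_xy = -np.inf, None
    for xy in grid:
        tt = np.linalg.norm(stations - xy, axis=1) / v  # predicted travel times
        stacked = 0.0
        for trace, t_start, t in zip(traces, t0s, tt):
            idx = int(round((t - t_start) / dt))
            if 0 <= idx < len(trace):
                stacked += trace[idx]
        if stacked > best:
            best, best_xy = stacked, xy
    return np.asarray(best_xy)
```

Because the stack is largest only where the predicted move-out aligns the waveforms, no per-trace picking is needed, which is the robustness property the abstract emphasizes.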
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. Figures: Contours of the expected information gain; the optimal observing location corresponds to the maximum value. Posterior marginal probability densities of the unknown parameters; the thick solid black lines are for the designed location, the other 7 lines are for randomly chosen locations, and the true values are denoted by vertical lines. The unknown parameters are clearly estimated better with the designed location.
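The design criterion above can be illustrated with a toy one-dimensional example. Everything here is our own construction for illustration (a generic forward model and a discretized prior over a single source-strength parameter, not the paper's transport model): the expected relative entropy between posterior and prior is estimated by Monte Carlo, and the candidate location maximizing it would be chosen.

```python
import numpy as np

def expected_info_gain(x, forward, s_grid, prior, sigma, n_mc=2000, rng=None):
    """Monte Carlo estimate of the expected relative entropy (KL divergence
    from prior to posterior) of a measurement taken at location x, for an
    unknown parameter discretized on s_grid with Gaussian noise sigma."""
    rng = np.random.default_rng(0) if rng is None else rng
    gains = []
    for _ in range(n_mc):
        s_true = rng.choice(s_grid, p=prior)            # draw parameter from prior
        y = forward(x, s_true) + sigma * rng.standard_normal()
        like = np.exp(-0.5 * ((y - forward(x, s_grid)) / sigma) ** 2)
        post = like * prior
        post /= post.sum()
        nz = post > 0
        gains.append(np.sum(post[nz] * np.log(post[nz] / prior[nz])))
    return float(np.mean(gains))
```

A location where the forward model is insensitive to the parameter yields a posterior close to the prior and hence a near-zero expected gain; the optimal design picks the location where the gain is largest.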
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes of around 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; then the performance of the sensor is reported under laboratory conditions in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M
2017-10-01
Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
76 FR 23713 - Wireless E911 Location Accuracy Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-28
... Location Accuracy Requirements AGENCY: Federal Communications Commission. ACTION: Final rule; announcement... contained in regulations concerning wireless E911 location accuracy requirements. The information collection... standards for wireless Enhanced 911 (E911) Phase II location accuracy and reliability to satisfy these...
Results of the Australian geodetic VLBI experiment
NASA Technical Reports Server (NTRS)
Harvey, B. R.; Stolz, A.; Jauncey, D. L.; Niell, A.; Morabito, D. D.; Preston, R.
1983-01-01
The 250-2500 km baseline vectors between radio telescopes located at Tidbinbilla (DSS43) near Canberra, Parkes, Fleurs (X3) near Sydney, Hobart and Alice Springs were determined from radio interferometric observations of extragalactic sources. The observations were made during two 24-hour sessions on 26 April and 3 May 1982, and one 12-hour night-time session on 28 April 1982. The 275 km Tidbinbilla - Parkes baseline was measured with an accuracy of plus or minus 6 cm. The remaining baselines were measured with accuracies ranging from 15 cm to 6 m. The higher accuracies were achieved for the better instrumented sites of Tidbinbilla, Parkes and Fleurs. The data reduction technique and results of the experiment are discussed.
Errors of five-day mean surface wind and temperature conditions due to inadequate sampling
NASA Technical Reports Server (NTRS)
Legler, David M.
1991-01-01
Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N_0.5) and 1.0 (N_1.0) m/s, and mean air and sea surface temperatures with chosen accuracies of 0.1 (N_0.1) and 0.2 (N_0.2) C, was calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that, for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
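The growth of N with record variability follows from standard sampling theory. The sketch below is the generic textbook relation, not the paper's fitted polynomial: for independent observations, the standard error of the mean is sigma/sqrt(N), so the required N grows quadratically with the standard deviation of the record.

```python
import math

def n_required(sigma, accuracy, z=1.96):
    """Smallest N such that z * sigma / sqrt(N) <= accuracy, i.e. the mean
    is within `accuracy` of the truth at roughly 95% confidence (z=1.96)."""
    return math.ceil((z * sigma / accuracy) ** 2)
```

For example, halving the desired accuracy, or doubling the variability of the record, each quadruples the number of observations needed.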
Monitoring microearthquakes with the San Andreas fault observatory at depth
Oye, V.; Ellsworth, W.L.
2007-01-01
In 2005, the San Andreas Fault Observatory at Depth (SAFOD) was drilled through the San Andreas Fault zone at a depth of about 3.1 km. The borehole has subsequently been instrumented with high-frequency geophones in order to better constrain locations and source processes of nearby microearthquakes that will be targeted in the upcoming phase of SAFOD. The microseismic monitoring software MIMO, developed by NORSAR, has been installed at SAFOD to provide near-real time locations and magnitude estimates using the high sampling rate (4000 Hz) waveform data. To improve the detection and location accuracy, we incorporate data from the nearby, shallow borehole (~250 m) seismometers of the High Resolution Seismic Network (HRSN). The event association algorithm of the MIMO software incorporates HRSN detections provided by the USGS real time earthworm software. The concept of the new event association is based on the generalized beam forming, primarily used in array seismology. The method requires the pre-computation of theoretical travel times in a 3D grid of potential microearthquake locations to the seismometers of the current station network. By minimizing the differences between theoretical and observed detection times an event is associated and the location accuracy is significantly improved.
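The generalized-beamforming association step can be sketched as follows. This is schematic, with hypothetical array shapes (the real system precomputes travel times on a 3D grid): for each candidate node, the observed detection times minus the precomputed travel times should all agree on one origin time, so the node minimizing the residual scatter is selected.

```python
import numpy as np

def associate(det_times, tt_grid):
    """Generalized beamforming association.
    det_times: (n_sta,) observed detection times.
    tt_grid: (n_nodes, n_sta) precomputed travel times from each grid node.
    Returns (best_node_index, origin_time)."""
    resid = det_times[None, :] - tt_grid           # implied origin time per station
    t0 = resid.mean(axis=1)                        # best common origin time per node
    misfit = ((resid - t0[:, None]) ** 2).sum(axis=1)
    k = int(np.argmin(misfit))
    return k, float(t0[k])
```

An event is declared when the minimum misfit falls below a threshold, which simultaneously associates the detections and locates the event on the grid.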
Kozunov, Vladimir V.; Ossadtchi, Alexei
2015-01-01
Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose locations and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA)—a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and in terms of correct specification of the spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, was preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses.
PMID:25954141
NASA Technical Reports Server (NTRS)
Smith, G. L.; Green, R. N.; Young, G. R.
1974-01-01
The NIMBUS-G environmental monitoring satellite has an instrument (a gas correlation spectrometer) onboard for measuring the mass of a given pollutant within a gas volume. The present paper treats the problem of how this type of measurement can be used to estimate the distribution of pollutant levels in a metropolitan area. Estimation methods are used to develop this distribution. The pollution concentration caused by a point source is modeled as a Gaussian plume. The uncertainty in the measurements is used to determine the accuracy of estimating the source strength, the wind velocity, the diffusion coefficients and the source location.
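The Gaussian plume referred to above has a standard textbook form. A generic sketch (symbols are the conventional ones, not necessarily the paper's notation: source strength Q, wind speed u, effective source height H, crosswind and vertical spreads sigma_y and sigma_z, with a ground-reflection image term):

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration at crosswind offset y and
    height z, for a point source of strength Q at effective height H in
    wind speed u. The second exponential is the ground-reflection term."""
    return (Q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2)
            * (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
               + np.exp(-0.5 * ((z + H) / sigma_z) ** 2)))
```

In practice sigma_y and sigma_z grow with downwind distance, which is how the diffusion coefficients mentioned in the abstract enter the estimation problem.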
POLICAN: A near-infrared imaging polarimeter at OAGH
NASA Astrophysics Data System (ADS)
Devaraj, R.; Luna, A.; Carrasco, L.; Mayya, Y. D.; Serrano-Bernal, O.
2017-07-01
We present a near-infrared linear imaging polarimeter POLICAN, developed for the Cananea near-infrared camera (CANICA) at the 2.1 m telescope of the Guillermo Haro Astrophysical Observatory (OAGH) located at Cananea, Sonora, México. POLICAN reaches a limiting magnitude of about 16 mag with a polarimetric accuracy of about 1% for bright sources.
TDRS orbit determination by radio interferometry
NASA Technical Reports Server (NTRS)
Pavloff, Michael S.
1994-01-01
In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometry tracking accuracy. The Orbit Determination Accuracy Estimator (ODAE) models the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with the group and phase delay measurements from radio interferometry. ODAE models the statistical properties of tracking error sources, including inherent observable imprecision, atmospheric delays, clock offsets, station location uncertainty, and measurement biases, and through Monte Carlo simulation, ODAE calculates the statistical properties of errors in the predicted satellite's state vector. This paper presents results from ODAE application to orbit determination of the Tracking and Data Relay Satellite (TDRS) by radio interferometry. Conclusions about optimal ground station locations for interferometric tracking of TDRS are presented, along with a discussion of operational advantages of radio interferometry.
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
1997-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) that implement all of these measurements are provided. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
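A linear planar arrival-time solution of the general kind described can be sketched as follows. This is a standard TDOA linearization with illustrative station geometry, not necessarily the paper's exact algebra, and it omits the bearing and field-strength measurements: subtracting the squared range equation of a reference sensor yields a system that is linear in the strike position (x, y) and occurrence time t0.

```python
import numpy as np

def locate_toa_planar(stations, t, c=3.0e8):
    """Planar source retrieval from arrival times only.
    stations: (n, 2) sensor coordinates (n >= 4); t: (n,) arrival times.
    Returns (x, y, t0) from a linear least-squares solve."""
    s0, t_ref = stations[0], t[0]
    A, b = [], []
    for si, ti in zip(stations[1:], t[1:]):
        A.append([2.0 * (si[0] - s0[0]),
                  2.0 * (si[1] - s0[1]),
                  -2.0 * c * c * (ti - t_ref)])
        b.append(np.dot(si, si) - np.dot(s0, s0)
                 - c * c * (ti * ti - t_ref * t_ref))
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol  # (x, y, t0)
```

With four sensors the system is square and the solution is direct, which is the sense in which such retrievals are "linear algebraic" rather than iterative.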
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amounts of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
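Once such a radio-map exists, the online localization step it enables can be sketched as follows. This is our generic illustration of fingerprint matching (weighted k-nearest-neighbor style), separate from the paper's AP-Cluster construction step; all names are ours.

```python
import numpy as np

def locate_fingerprint(rss, radio_map, k=3):
    """Locate a user from an observed RSS vector.
    rss: (n_ap,) observed signal strengths.
    radio_map: list of (position, fingerprint) pairs from the radio-map.
    Returns the mean position of the k nearest fingerprints."""
    dists = [np.linalg.norm(rss - fp) for _, fp in radio_map]
    nearest = np.argsort(dists)[:k]
    return np.mean([radio_map[i][0] for i in nearest], axis=0)
```

The localization accuracy therefore hinges on how representative the crowdsourced fingerprints are, which is exactly what the clustering and anchor-referencing steps of the proposed algorithm aim to guarantee.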
Reduced order modelling in searches for continuous gravitational waves - I. Barycentring time delays
NASA Astrophysics Data System (ADS)
Pitkin, M.; Doolan, S.; McMenamin, L.; Wette, K.
2018-06-01
The frequencies and phases of emission from extra-solar sources measured by Earth-bound observers are modulated by the motions of the observer with respect to the source, and through relativistic effects. These modulations depend critically on the source's sky-location. Precise knowledge of the modulations is required to coherently track the source's phase over long observations, for example, in pulsar timing, or searches for continuous gravitational waves. The modulations can be modelled as sky-location- and time-dependent time delays that convert arrival times at the observer to the inertial frame of the source, which can often be the Solar system barycentre. We study the use of reduced order modelling for speeding up the calculation of this time delay for any sky-location. We find that the time delay model can be decomposed into just four basis vectors, and with these the delay for any sky-location can be reconstructed to sub-nanosecond accuracy. When compared to standard routines for time delay calculation in gravitational wave searches, using the reduced basis can lead to speed-ups of 30 times. We have also studied components of time delays for sources in binary systems. Assuming eccentricities <0.25, we can reconstruct the delays to within hundreds of nanoseconds, with best case speed-ups of a factor of 10, or factors of two when interpolating the basis for different orbital periods or time stamps. In long-duration phase-coherent searches for sources with sky-position uncertainties, or binary parameter uncertainties, these speed-ups could allow enhancements in their scopes without large additional computational burdens.
Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J
2004-03-01
Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and the ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can be used to verify parametric source modelling results. Having relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
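The Tikhonov-regularized L2 minimum-norm estimate used in the study has a standard closed form. A generic sketch (illustrative dimensions and regularization value; no spherical-head lead field is modeled here): given a lead field L mapping source currents to sensor signals y, the estimate is j = Lᵀ(LLᵀ + λ²I)⁻¹y.

```python
import numpy as np

def mne(L, y, lam):
    """Tikhonov-regularized L2 minimum-norm estimate.
    L: (n_sensors, n_sources) lead field; y: (n_sensors,) measurements;
    lam: regularization parameter. Returns the estimated source currents."""
    n = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam**2 * np.eye(n), y)
```

As lam tends to zero this converges to the minimum-norm solution of Lj = y, which is why no explicit source model (such as a dipole count) needs to be assumed.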
NASA Astrophysics Data System (ADS)
Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.
2016-12-01
Scattered waves generated near the source contain energy converted from the near-field waves to the far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We investigate seismograms in the frequency band of [1.0, 2.0] Hz to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is found based on the least-square misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korean tests from years 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.
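The grid-search step can be sketched schematically as follows (our paraphrase with hypothetical array shapes; the real method compares observed seismograms against reciprocity-based synthetics from a strain Green's tensor database): each candidate node's precomputed synthetics are compared with the observations under a least-squares misfit, with optional per-component weights.

```python
import numpy as np

def grid_search(obs, synth, weights=None):
    """Least-squares waveform misfit over a grid of candidate sources.
    obs: (n_sta, 3, n_t) observed three-component seismograms.
    synth: (n_nodes, n_sta, 3, n_t) precomputed synthetics per grid node.
    weights: optional (3,) per-component weights.
    Returns the index of the best-fitting node."""
    w = np.ones(3) if weights is None else np.asarray(weights)
    resid = synth - obs[None]
    misfit = np.einsum('nsct,c->n', resid**2, w)  # weighted sum of squares
    return int(np.argmin(misfit))
```

Because the misfit is evaluated against full synthetic waveforms rather than picked arrival times, topography-scattered coda energy contributes directly to the location constraint.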
Coordinate Transformation Assembly
NASA Astrophysics Data System (ADS)
Huang, C.-C.; Barney, J.
1983-08-01
The coordinate transformation assembly (CTA) is a non-contact electro-optical device designed to link the angular coordinates between two remote platforms to a high degree of accuracy. Each assembly, which is compact and without moving parts, consists of two units: the transmitter and the receiver. The transmitter consists of one polarizing beamsplitter and two laser diodes with polarized output. The receiver consists of a polarizing beamsplitter, two lenses, a dual-axis photodetector and a regular photodetector. The angular roll is measured about the line-of-sight between two assemblies using a polarization sensing method. Accuracy is calculated to be better than 0.01 degrees with a signal-to-noise ratio of 35 dB. Pitch and yaw are measured relative to the line-of-sight at each assembly by locating a laser spot in the field-of-view of a dual-axis photodetector located in the focal plane of a small lens. The coordinate transformation parameter most difficult to obtain is the roll coordinate, because high resolution involves observing a small variation in the difference of two strong signals. Under such an arrangement, any variation in source strength or detector sensitivity will cause an error. In the scheme devised for the CTA, this source of error has been eliminated through a pairing and signal-processing arrangement wherein the detector sensitivity and the source intensity are made common to the paired measurements and thus eliminated. The ±0.01 degree accuracy of the angular roll as well as the pitch and yaw measurements over a ±2 degree angular range has been demonstrated. An attractive feature of the CTA is that paired assemblies can be deployed to relay coordinates around corners and over extended distances.
Lightning Location Using Acoustic Signals
NASA Astrophysics Data System (ADS)
Badillo, E.; Arechiga, R. O.; Thomas, R. J.
2013-05-01
In the summer of 2011 and 2012 a network of acoustic arrays was deployed in the Magdalena mountains of central New Mexico to locate lightning flashes. A Times-Correlation (TC) ray-tracing-based technique was developed in order to obtain the location of lightning flashes near the network. The TC technique locates acoustic sources from lightning; it was developed to complement the lightning location of RF sources detected by the Lightning Mapping Array (LMA) developed at Langmuir Laboratory, New Mexico Tech. The network consisted of four arrays with four microphones each. The microphones on each array were placed in a triangular configuration with one of the microphones in the center of the array. The distance between the central microphone and the rest of them was about 30 m. The distance between centers of the arrays ranged from 500 m to 1500 m. The TC technique uses times of arrival (TOA) of acoustic waves to trace back the location of thunder sources. In order to obtain the times of arrival, the signals were filtered in a frequency band of 2 to 20 Hz and cross-correlated. Once the times of arrival were obtained, the Levenberg-Marquardt algorithm was applied to locate the spatial coordinates (x, y, and z) of thunder sources. Two techniques were used and contrasted to compute the accuracy of the TC method: nearest neighbors (NN) between acoustic and LMA located sources, and the standard deviation from the curvature matrix of the system as a measure of dispersion of the results. For the best case scenario, a triggered lightning event, the TC method applied with four microphones located sources with a median error of 152 m and 142.9 m using nearest neighbors and standard deviation, respectively.
Figure: Results of the TC method for the lightning event recorded at 18:47:35 UTC, August 6, 2012, shown as a map of altitude vs. longitude (in km). Black dots represent the computed results; light color dots represent the LMA data for the same event. The results were obtained with the MGTM station (four channels).
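The TOA inversion step can be sketched as follows. This is a minimal illustration using SciPy's Levenberg-Marquardt driver with hypothetical microphone coordinates and a nominal 343 m/s sound speed, not the authors' implementation: the unknowns are the source position (x, y, z) and emission time t0, and the residuals are predicted minus observed arrival times.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_thunder(mics, t_arr, x0, c=343.0):
    """Levenberg-Marquardt TOA location of an acoustic source.
    mics: (n, 3) microphone coordinates; t_arr: (n,) arrival times;
    x0: initial guess (x, y, z, t0). Returns the fitted (x, y, z, t0)."""
    def resid(p):
        src, t0 = p[:3], p[3]
        return t0 + np.linalg.norm(mics - src, axis=1) / c - t_arr
    return least_squares(resid, x0, method='lm').x
```

With at least five microphones the four unknowns are overdetermined, and the curvature (Jacobian-based) matrix at the solution provides the dispersion estimate mentioned in the abstract.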
Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D
2014-07-01
Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth-order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
Beck, Christoph; Garreau, Guillaume; Georgiou, Julius
2016-01-01
Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes the size of an atom and, based on their neuronal anatomy, locate acoustic stimuli with an accuracy of within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers
Thompson, Clarissa A.; Opfer, John E.
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric
2012-02-01
The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.
Hydroacoustic Signals Recorded by the International Monitoring System
NASA Astrophysics Data System (ADS)
Blackman, D.; de Groot-Hedlin, C.; Orcutt, J.; Harben, P.
2002-12-01
Networks of hydrophones, such as the hydroacoustic part of the International Monitoring System (IMS), and hydrophone arrays, such as the U.S. Navy operates, record many types of signals, some of which travel thousands of kilometers in the oceanic sound channel. Abyssal earthquakes generate many such individual events and occasionally occur in swarms. Here we focus on signals generated by other types of sources, illustrating their character with recent data, mostly from the Indian Ocean. Shipping generates signals in the 5-40 Hz band. Large airgun arrays can generate T-waves that travel across an ocean basin if the near-source seafloor has appropriate depth/slope. Airgun array shots from our 2001 experiment were located with an accuracy of 25-40 km at 700-1000 km ranges, using data from a Diego Garcia tripartite sensor station. Shots at greater range (up to 4800 km) were recorded at multiple stations, but higher background noise levels in the 5-30 Hz band resulted in location errors of ~100 km. Imploding glass spheres shattered within the sound channel produce a very impulsive arrival, even after propagating 4400 km. Recordings of the sphere signal have energy concentrated in the band above 40 Hz. Natural sources such as undersea volcanic eruptions and marine mammals also produce signals that are clearly evident in hydrophone recordings. For whales, the frequency range is 20-120 Hz, and specific patterns of vocalization characterize different species. Volcanic eruptions typically produce intense swarms of acoustic activity that last for days to weeks, and the source area can migrate tens of kilometers during that period. The utility of these types of hydroacoustic sources for research and/or monitoring purposes depends on the accuracy with which recordings can be used to locate and quantitatively characterize the source. Oceanic weather, both local and regional, affects background noise levels in key frequency bands at the recording stations.
Databases used in forward modeling of propagation and acoustic losses can be sparse in remote regions. Our Indian Ocean results suggest that when bathymetric coverage is poor, predictions for 8 Hz propagation/loss match observations better than those for propagation of 30 Hz signals over 1000-km distances.
Ivanoff, Jason; Blagdon, Ryan; Feener, Stefanie; McNeil, Melanie; Muir, Paul H.
2014-01-01
The Simon effect refers to the performance (response time and accuracy) advantage for responses that spatially correspond to the task-irrelevant location of a stimulus. It has been attributed to a natural tendency to respond toward the source of stimulation. When location is task-relevant, however, and responses are intentionally directed away (incompatible) or toward (compatible) the source of the stimulation, there is also an advantage for spatially compatible responses over spatially incompatible responses. Interestingly, a number of studies have demonstrated a reversed, or reduced, Simon effect following practice with a spatial incompatibility task. One interpretation of this finding is that practicing a spatial incompatibility task disables the natural tendency to respond toward stimuli. Here, the temporal dynamics of this stimulus-response (S-R) transfer were explored with speed-accuracy trade-offs (SATs). All experiments used the mixed-task paradigm in which Simon and spatial compatibility/incompatibility tasks were interleaved across blocks of trials. In general, bidirectional S-R transfer was observed: while the spatial incompatibility task had an influence on the Simon effect, the task-relevant S-R mapping of the Simon task also had a small impact on congruency effects within the spatial compatibility and incompatibility tasks. These effects were generally greater when the task contexts were similar. Moreover, the SAT analysis of performance in the Simon task demonstrated that the tendency to respond to the location of the stimulus was not eliminated because of the spatial incompatibility task. Rather, S-R transfer from the spatial incompatibility task appeared to partially mask the natural tendency to respond to the source of stimulation with a conflicting inclination to respond away from it. These findings support the use of SAT methodology to quantitatively describe rapid response tendencies. PMID:25191217
Padilla, Mabel; Mattson, Christine L; Scheer, Susan; Udeagu, Chi-Chi N; Buskin, Susan E; Hughes, Alison J; Jaenicke, Thomas; Wohl, Amy Rock; Prejean, Joseph; Wei, Stanley C
Human immunodeficiency virus (HIV) case surveillance and other health care databases are increasingly being used for public health action, which has the potential to optimize the health outcomes of people living with HIV (PLWH). However, often PLWH cannot be located based on the contact information available in these data sources. We assessed the accuracy of contact information for PLWH in HIV case surveillance and additional data sources and whether time since diagnosis was associated with accurate contact information in HIV case surveillance and successful contact. The Case Surveillance-Based Sampling (CSBS) project was a pilot HIV surveillance system that selected a random population-based sample of people diagnosed with HIV from HIV case surveillance registries in 5 state and metropolitan areas. From November 2012 through June 2014, CSBS staff members attempted to locate and interview 1800 sampled people and used 22 data sources to search for contact information. Among 1063 contacted PLWH, HIV case surveillance data provided accurate telephone number, address, or HIV care facility information for 239 (22%), 412 (39%), and 827 (78%) sampled people, respectively. CSBS staff members used additional data sources, such as support services and commercial people-search databases, to locate and contact PLWH with insufficient contact information in HIV case surveillance. PLWH diagnosed <1 year ago were more likely to have accurate contact information in HIV case surveillance than were PLWH diagnosed ≥1 year ago (P = .002), and the benefit from using additional data sources was greater for PLWH with more longstanding HIV infection (P < .001). When HIV case surveillance cannot provide accurate contact information, health departments can prioritize searching additional data sources, especially for people with more longstanding HIV infection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, C; Xing, L; Fahimian, B
Purpose: Accuracy of positioning, timing and activity is of critical importance for High Dose Rate (HDR) brachytherapy delivery. Respective measurements via film autoradiography, stop-watches and well chambers can be cumbersome, crude, or lack dynamic source evaluation capabilities. To address such limitations, a single-device radioluminescent detection system enabling automated real-time quantification of activity, position and timing accuracy is presented and experimentally evaluated. Methods: A radioluminescent sheet was fabricated by mixing Gd2O2S:Tb with PDMS and incorporated into a 3D printed device, where it was fixated below a CMOS digital camera. An Ir-192 HDR source (VS2000, VariSource iX) with an effective active length of 5 mm was introduced using a 17-gauge stainless steel needle below the sheet. Pixel intensity values for determining activity were taken from an ROI centered on the source location. A calibration curve relating intensity values to activity was generated and used to evaluate automated activity determination with data gathered over 6 weeks. Positioning measurements were performed by integrating images for an entire delivery and fitting peaks to the resulting profile. Timing measurements were performed by evaluating source location and timestamps from individual images. Results: Average predicted activity error over 6 weeks was 0.35 ± 0.5%. The distance between four dwell positions was determined by the automated system to be 1.99 ± 0.02 cm; the result from autoradiography was 2.00 ± 0.03 cm. The system achieved a time resolution of 10 msec and determined the dwell time to be 1.01 ± 0.02 sec. Conclusion: The system was able to successfully perform automated detection of activity, positioning and timing concurrently under a single setup.
Relative to radiochromic and radiographic film-based autoradiography, which can only provide a static evaluation of positioning, optical detection of transient radiation-induced luminescence enables dynamic detection of position, allowing automated quantification of timing with millisecond accuracy.
Evaluation of a head-repositioner and Z-plate system for improved accuracy of dose delivery.
Charney, Sarah C; Lutz, Wendell R; Klein, Mary K; Jones, Pamela D
2009-01-01
Radiation therapy requires accurate dose delivery to targets often identifiable only on computed tomography (CT) images. Translation between the isocenter localized on CT and laser setup for radiation treatment, and interfractional head repositioning are frequent sources of positioning error. The objective was to design a simple, accurate apparatus to eliminate these sources of error. System accuracy was confirmed with phantom and in vivo measurements. A head repositioner that fixates the maxilla via dental mold with fiducial marker Z-plates attached was fabricated to facilitate the connection between the isocenter on CT and laser treatment setup. A phantom study targeting steel balls randomly located within the head repositioner was performed. The center of each ball was marked on a transverse CT slice on which six points of the Z-plate were also visible. Based on the relative position of the six Z-plate points and the ball center, the laser setup position on each Z-plate and a top plate was calculated. Based on these setup marks, orthogonal port films, directed toward each target, were evaluated for accuracy without regard to visual setup. A similar procedure was followed to confirm accuracy of in vivo treatment setups in four dogs using implanted gold seeds. Sequential port films of three dogs were made to confirm interfractional accuracy. Phantom and in vivo measurements confirmed accuracy of 2 mm between isocenter on CT and the center of the treatment dose distribution. Port films confirmed similar accuracy for interfractional treatments. The system reliably connects CT target localization to accurate initial and interfractional radiation treatment setup.
Validity of Secondary Retail Food Outlet Data
Fleischhacker, Sheila E.; Evenson, Kelly R.; Sharkey, Joseph; Pitts, Stephanie B.J.; Rodriguez, Daniel A.
2013-01-01
Context Improving access to healthy foods is a promising strategy to prevent nutrition-related chronic diseases. To characterize retail food environments and identify areas with limited retail access, researchers, government programs, and community advocates have primarily used secondary retail food outlet data sources (e.g., InfoUSA or government food registries). To advance the state of the science on measuring retail food environments, this systematic review examined the evidence for validity reported for secondary retail food outlet data sources for characterizing retail food environments. Evidence acquisition A literature search was conducted through December 31, 2012 to identify peer-reviewed published literature that compared secondary retail food outlet data sources to primary data sources (i.e., field observations) for accuracy of identifying the type and location of retail food outlets. Data were analyzed in 2013. Evidence synthesis Nineteen studies met the inclusion criteria. The evidence for validity reported varied by secondary data sources examined, primary data–gathering approaches, retail food outlets examined, and geographic and sociodemographic characteristics. More than half of the studies (53%) did not report evidence for validity by type of food outlet examined and by a particular secondary data source. Conclusions Researchers should strive to gather primary data but if relying on secondary data sources, InfoUSA and government food registries had higher levels of agreement than reported by other secondary data sources and may provide sufficient accuracy for exploring these associations in large study areas. PMID:24050423
NASA Astrophysics Data System (ADS)
Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien
2011-06-01
A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. 
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).
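The chemotaxis-inspired localization described above can be caricatured with a run-and-tumble search. This is our own illustrative sketch, not Teledyne's algorithm; the source position, step size, attenuation law, and noise level are all invented for the example:

```python
import numpy as np

# Run-and-tumble search: keep the heading while the sensed acoustic
# intensity grows, pick a random new heading when it drops.
rng = np.random.default_rng(5)
src = np.array([300.0, 150.0])          # hypothetical event location (m)

def intensity(p):
    # Assumed 1/r^2 geometric spreading plus a little sensor noise
    return 1e6 / (np.sum((p - src) ** 2) + 1.0) + rng.normal(0, 1e-3)

pos = np.array([0.0, 0.0])              # searcher start position
heading = rng.uniform(0, 2 * np.pi)
last = intensity(pos)
best, best_val = pos.copy(), last       # best (loudest) point seen so far
for _ in range(3000):
    pos = pos + 2.0 * np.array([np.cos(heading), np.sin(heading)])
    cur = intensity(pos)
    if cur < last:                      # got worse: tumble to a new heading
        heading = rng.uniform(0, 2 * np.pi)
    last = cur
    if cur > best_val:
        best, best_val = pos.copy(), cur
```

The walk drifts up the intensity gradient without ever computing it explicitly, which is the appeal of chemotaxis-style fusion when only sparse, noisy amplitude samples are available.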
Wang, Hubiao; Chai, Hua; Bao, Lifeng; Wang, Yong
2017-01-01
An experiment comparing the location accuracy of gravity matching-aided navigation in the ocean and in simulation is very important to evaluate the feasibility and the performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1′ × 1′ marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of IGNS was conducted using data obtained with a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by adding normally distributed random error N(u,σ2) with varying mean u and noise variance σ2. Thereafter, the matching location of IGNS was simulated. The results show that changes in u had little effect on the location accuracy, whereas an increase in σ2 resulted in a significant decrease in location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and quantitative analysis results. In addition, given the gravimeter (1–2 mGal accuracy) and the reference map (resolution 1′ × 1′; accuracy 3–8 mGal), the location accuracy of IGNS reached ~1.0–3.0 n miles in the South China Sea. PMID:29261136
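The finding that the noise mean u hardly matters while the variance σ2 degrades the fix can be illustrated with a toy 1-D profile-matching experiment. This is our own construction (mean-removed SSD matching on a synthetic map), not the paper's IGNS/Kalman pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic along-track gravity anomaly map (random walk, in mGal)
ref = np.cumsum(rng.normal(0, 1, 1000))

def match(true_pos, u, sigma, n=50):
    """Slide a mean-removed measured profile over the map; best SSD wins.
    Removing the mean makes a constant bias u irrelevant by construction."""
    meas = ref[true_pos:true_pos + n] + rng.normal(u, sigma, n)
    d = meas - meas.mean()
    ssd = [np.sum((ref[p:p + n] - ref[p:p + n].mean() - d) ** 2)
           for p in range(ref.size - n)]
    return int(np.argmin(ssd))

exact = match(300, u=5.0, sigma=0.0)   # a pure bias does not move the fix
```

Re-running `match` with growing `sigma` scatters the returned position away from the true cell, mirroring the abstract's observation that variance, not mean, drives the location error.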
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
2000-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
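As an illustration of the kind of linearization such a planar arrival-time solution can use, here is our own sketch; the station layout, propagation speed, and timings are invented for the example and are not ALDF data:

```python
import numpy as np

c = 3.0e8  # propagation speed of the radio emission (m/s)

def locate(stations, t):
    """Least-squares (x, y, t0) from arrival times at >= 4 non-collinear
    plane stations. Subtracting station 1's range equation
    |x - s_i|^2 = c^2 (t_i - t0)^2 from the others leaves a system that
    is linear in (x, y, tau) with tau = c * t0."""
    s1, t1 = stations[0], t[0]
    A, b = [], []
    for si, ti in zip(stations[1:], t[1:]):
        A.append([-2.0 * (si[0] - s1[0]),
                  -2.0 * (si[1] - s1[1]),
                  2.0 * c * (ti - t1)])
        b.append(c**2 * (ti**2 - t1**2) - (si @ si - s1 @ s1))
    x, y, tau = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    return x, y, tau / c

# Simulated ground strike observed by four stations
src, t0 = np.array([12e3, -7e3]), 0.02
stations = [np.array(p, dtype=float)
            for p in [(0, 0), (50e3, 0), (0, 60e3), (40e3, 45e3)]]
times = [t0 + np.linalg.norm(src - s) / c for s in stations]
x, y, t_est = locate(stations, times)
```

Scaling the third unknown as tau = c·t0 keeps the columns of the design matrix comparable in magnitude, which matters for the conditioning of the least-squares solve.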
Short-Period Surface Wave Based Seismic Event Relocation
NASA Astrophysics Data System (ADS)
White-Gaynor, A.; Cleveland, M.; Nyblade, A.; Kintner, J. A.; Homman, K.; Ammon, C. J.
2017-12-01
Accurate and precise seismic event locations are essential for a broad range of geophysical investigations. Superior location accuracy generally requires calibration with ground truth information, but superb relative location precision is often achievable independently. In explosion seismology, low-yield explosion monitoring relies on near-source observations, which limits the number of observations and challenges our ability to estimate locations at all. Incorporating more distant observations means relying on data with lower signal-to-noise ratios. For small, shallow events, the short-period (roughly 1/2 to 8 s period) fundamental-mode and higher-mode Rayleigh waves (including Rg) are often the most stable and visible portion of the waveform at local distances. Cleveland and Ammon [2013] have shown that teleseismic surface waves are valuable observations for constructing precise, relative event relocations. We extend the teleseismic surface wave relocation method and apply it to near-source distances using Rg observations from the Bighorn Arch Seismic Experiment (BASE) and the EarthScope USArray Transportable Array (TA) seismic stations. Specifically, we present relocation results using short-period fundamental- and higher-mode Rayleigh waves (Rg) in a double-difference relative event relocation for 45 delay-fired mine blasts and 21 borehole chemical explosions. Our preliminary efforts explore the sensitivity of the short-period surface waves to local geologic structure, source depth, explosion magnitude (yield), and explosion characteristics (single-shot vs. distributed source, etc.). Our results show that Rg and the first few higher-mode Rayleigh wave observations can be used to constrain the relative locations of shallow low-yield events.
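The core of the relative-relocation idea can be sketched in a few lines. This is our own construction under a uniform group-velocity assumption; the stations, speed, and offsets below are invented, not BASE/TA data:

```python
import numpy as np

rng = np.random.default_rng(3)
v = 2.8                                        # assumed Rg group speed (km/s)
stations = np.array([[100.0, 0.0], [0.0, 120.0], [-90.0, -30.0], [60.0, 80.0]])
ref = np.array([0.0, 0.0])                     # reference event
offset_true = np.array([3.0, -2.0])            # second event, km from ref

# "Observed" differential arrival times: exact geometry plus 10 ms noise
delays = (np.linalg.norm(stations - (ref + offset_true), axis=1)
          - np.linalg.norm(stations - ref, axis=1)) / v
delays += rng.normal(0, 0.01, len(stations))

# For a small offset dx, delay_k ~= -u_k . dx / v, where u_k is the unit
# vector from the reference event to station k; solve by least squares.
u = (stations - ref) / np.linalg.norm(stations - ref, axis=1, keepdims=True)
dx = np.linalg.lstsq(-u / v, delays, rcond=None)[0]
```

Because only differential times enter, common path effects cancel, which is why relative precision can be good even when absolute accuracy (which needs ground truth) is not.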
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting components of the data by the inverse of their error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of the deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as initial values, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Sometimes, several source locations are determined in a single 5-minute time window. We estimate the resolution of the method, defined as the minimum separation at which two sources can be detected independently, to be about 100 km. The validity of this estimate is confirmed by a numerical test using synthetic waveforms. Applied to over 10 years of continuous seismograms in western Japan, the new method detected 27% more tremors than the previous method, owing to multiple-event detection and the improved accuracy provided by the appropriate weighting scheme.
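A stripped-down version of the grid-search stage can be sketched as follows. This is our own toy (synthetic Gaussian envelopes, three stations, an unweighted correlation average), not the authors' implementation:

```python
import numpy as np

fs = 20.0                      # samples per second
v = 3.0                        # assumed S-wave speed (km/s)
stations = np.array([[0.0, 0.0], [80.0, 0.0], [0.0, 80.0]])  # km
true_src = np.array([30.0, 50.0])

t = np.arange(0, 60, 1 / fs)
def envelope(delay):           # smooth synthetic tremor envelope
    return np.exp(-0.5 * ((t - 20 - delay) / 3.0) ** 2)

obs = [envelope(np.linalg.norm(true_src - s) / v) for s in stations]

def acc(src):
    """Average normalized cross-correlation of station pairs after
    removing the delays predicted for a candidate source position."""
    tt = [np.linalg.norm(src - s) / v for s in stations]
    total, npairs = 0.0, 0
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            lag = int(round((tt[i] - tt[j]) * fs))
            a, b = obs[i], np.roll(obs[j], lag)   # align envelopes
            total += np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            npairs += 1
    return total / npairs

# Coarse grid search (the paper refines candidates with a gradient step)
xs = np.arange(0, 81, 2.0)
grid = [(x, y) for x in xs for y in xs]
best = max(grid, key=lambda p: acc(np.array(p)))
```

The objective peaks where the predicted station-pair delays best explain the observed envelopes; secondary local maxima of this surface are what make multiple-event detection possible.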
Trust index based fault tolerant multiple event localization algorithm for WSNs.
Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue
2011-01-01
This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength is attenuated in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.
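The trust-index idea can be sketched with a toy simulation. This is our own simplification, not the exact TISNAP algorithm; the field size, neighborhood radius, and update constants are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
nodes = rng.uniform(0, 100, size=(200, 2))   # sensor field (m)
faulty = rng.random(len(nodes)) < 0.2        # persistently faulty nodes
trust = np.ones(len(nodes))
src, radius = np.array([40.0, 60.0]), 25.0   # event and detection range

def reports():
    truth = np.linalg.norm(nodes - src, axis=1) < radius
    return np.where(faulty, ~truth, truth)   # faulty nodes invert reports

for _ in range(10):                          # reporting cycles
    r = reports()
    for i, n in enumerate(nodes):
        near = np.linalg.norm(nodes - n, axis=1) < 15.0
        consensus = np.average(r[near], weights=trust[near]) > 0.5
        # "add on positive, subtract on negative" style trust update
        trust[i] = np.clip(trust[i] + (0.1 if r[i] == consensus else -0.2),
                           0.1, 1.0)

w = trust * reports()                        # weight alarmed nodes by trust
est = (w @ nodes) / w.sum()                  # trust-weighted event location
```

Down-weighting nodes that repeatedly contradict their neighborhood consensus keeps faulty alarms from dragging the location estimate away from the true event.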
Localizing gravitational wave sources with single-baseline atom interferometers
NASA Astrophysics Data System (ADS)
Graham, Peter W.; Jung, Sunghoon
2018-02-01
Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. We show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect, since these are the highest frequencies at which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single baseline orbits the Earth and the Sun, reorienting and changing position significantly during the lifetime of the source, making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.
Emitter location errors in electronic recognition system
NASA Astrophysics Data System (ADS)
Matuszewski, Jan; Dikta, Anna
2017-04-01
The paper describes some of the problems associated with emitter location calculations, the most important task in the sequence performed by electronic recognition systems. The basic tasks include: detection of emissions of electromagnetic signals, tracking (determining the direction of emitter sources), signal analysis for classifying different emitter types, and identification of sources of emission of the same type. The paper presents a brief description of the main methods of emitter localization and the basic mathematical formulae for calculating their location. Error estimation has been carried out for the emitter location obtained with three different methods under different scenarios of emitter and direction-finding (DF) sensor deployment in the electromagnetic environment. The emitter location has been computed using a dedicated computer program. On the basis of extensive numerical calculations, the precision of emitter location in recognition systems was evaluated for different configurations of bearing devices and emitter. The calculations, based on simulated data for the different location methods, are presented in the figures and respective tables. The obtained results demonstrate that the precision of the calculated emitter location depends on: the number of DF sensors, the distances between the emitter and the DF sensors, their mutual geometry in the reconnaissance area, and the bearing errors. The precision varies with the number of obtained bearings: the higher the number of bearings, the better the accuracy of the calculated emitter location, in spite of relatively high bearing errors at each DF sensor.
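The bearing-based family of methods discussed above can be sketched as a least-squares fix from noisy bearings. This is our own illustration; the sensor layout and error level are invented, not the paper's scenarios:

```python
import numpy as np

rng = np.random.default_rng(1)

def triangulate(sensors, bearings):
    """Least-squares emitter position from bearings measured clockwise
    from north (radians). Each bearing defines a line n . x = n . s,
    where n = (cos b, -sin b) is normal to the ray from sensor s."""
    n = np.column_stack([np.cos(bearings), -np.sin(bearings)])
    b = np.sum(n * sensors, axis=1)
    return np.linalg.lstsq(n, b, rcond=None)[0]

emitter = np.array([25.0, 40.0])                      # km, x east / y north
sensors = np.array([[0.0, 0.0], [60.0, 0.0], [0.0, 70.0], [60.0, 70.0]])
true_brg = np.arctan2(emitter[0] - sensors[:, 0],     # bearing from north
                      emitter[1] - sensors[:, 1])
noisy = true_brg + np.deg2rad(rng.normal(0, 1.0, size=4))  # 1 deg RMS error
est = triangulate(sensors, noisy)
```

With a 1 degree RMS bearing error at roughly 40 to 55 km range, the fix typically lands within about a kilometre of the emitter; accuracy degrades as the sensors become collinear with the emitter, consistent with the geometry dependence the paper reports.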
Rapid Regional Centroid Solutions
NASA Astrophysics Data System (ADS)
Wei, S.; Zhan, Z.; Luo, Y.; Ni, S.; Chen, Y.; Helmberger, D. V.
2009-12-01
The 2008 Wells, Nevada earthquake was recorded by 164 broadband USArray stations within a distance of 550 km (5 degrees), with all azimuths uniformly sampled. To establish the source parameters, we applied the Cut and Paste (CAP) code to all the stations to obtain a mechanism (strike/dip/rake = 35/41/-85) at a depth of 9 km and Mw = 5.9. Surface wave shifts range from -8 s to 8 s, in good agreement with ambient seismic noise (ASN) predictions. Here we use this data set to test how many stations are needed to obtain adequate solutions (position of the compressional and tension axes) for the mechanism. The stations were chosen at random, and combinations of Pnl and surface waves were used to establish mechanism and depth. If the event is bracketed by two stations, we obtain an accurate magnitude with good solutions in about 80% of the trials. Complete solutions from four stations, or Pnl from 10 stations, prove reliable in nearly all situations. We also explore the use of this dataset in locating the event using a combination of surface wave travel times and/or the full waveform inversion (CAPloc) that uses the CAP shifts to refine locations. If the mechanism is known (fixed), only a few stations are needed to locate an event to within 5 km if data are available at less than 150 km. In contrast, surface wave travel times (calibrated to within one second) produce remarkably accurate locations with only 6 stations reasonably distributed. It appears this approach is easily automated, as suggested by Scrivner and Helmberger (1995), who discussed travel times of Pnl and surface waves and the evolution of source accuracy as the various phases arrive.
Using internet search engines and library catalogs to locate toxicology information.
Wukovitz, L D
2001-01-12
The increasing importance of the Internet demands that toxicologists become acquainted with its resources. To find information, researchers must be able to use Internet search engines, directories, subject-oriented websites, and library catalogs effectively. The article explains these resources, explores their benefits and weaknesses, and identifies skills that help the researcher improve search results and critically evaluate sources for relevancy, validity, accuracy, and timeliness.
Assessing the Accuracy of the Tracer Dilution Method with Atmospheric Dispersion Modeling
NASA Astrophysics Data System (ADS)
Taylor, D.; Delkash, M.; Chow, F. K.; Imhoff, P. T.
2015-12-01
Landfill methane emissions are difficult to estimate due to limited observations and data uncertainty. The mobile tracer dilution method is a widely used and cost-effective approach for predicting landfill methane emissions. The method uses a tracer gas released on the surface of the landfill and measures the concentrations of both methane and the tracer gas downwind. Mobile measurements are conducted with a gas analyzer mounted on a vehicle to capture transects of both gas plumes. The idea behind the method is that if the measurements are performed far enough downwind, the methane plume from the large area source of the landfill and the tracer plume from a small number of point sources will be sufficiently well-mixed to behave similarly, and the ratio between the concentrations will be a good estimate of the ratio between the two emissions rates. The mobile tracer dilution method is sensitive to different factors of the setup such as placement of the tracer release locations and distance from the landfill to the downwind measurements, which have not been thoroughly examined. In this study, numerical modeling is used as an alternative to field measurements to study the sensitivity of the tracer dilution method and provide estimates of measurement accuracy. Using topography and wind conditions for an actual landfill, a landfill emissions rate is prescribed in the model and compared against the emissions rate predicted by application of the tracer dilution method. Two different methane emissions scenarios are simulated: homogeneous emissions over the entire surface of the landfill, and heterogeneous emissions with a hot spot containing 80% of the total emissions where the daily cover area is located. Numerical modeling of the tracer dilution method is a useful tool for evaluating the method without having the expense and labor commitment of multiple field campaigns. 
Factors tested include number of tracers, distance between tracers, distance from landfill to transect path, and location of tracers with respect to the hot spot. Results show that location of the tracers relative to the hot spot of highest landfill emissions makes the largest difference in accuracy of the tracer dilution method.
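The core relation of the tracer dilution method described above can be sketched in a few lines: if the plumes are well mixed at the transect, the methane emission rate is the tracer release rate scaled by the ratio of the background-corrected plume integrals. The transect values and release rate below are invented for illustration, and the molar-mass correction needed in practice to turn ppm (molar) ratios into mass rates is omitted.

```python
def integrate(xs, cs):
    """Trapezoidal integral of concentration across the transect."""
    return sum((cs[i] + cs[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
               for i in range(len(xs) - 1))

x = [0.0, 10.0, 20.0, 30.0, 40.0]   # transect positions (m)
ch4 = [0.0, 2.0, 4.0, 2.0, 0.0]     # CH4 above background (ppm)
tracer = [0.0, 0.5, 1.0, 0.5, 0.0]  # tracer above background (ppm)

q_tracer = 0.3                       # known tracer release rate (kg/h)
# Q_methane = Q_tracer * (integral of CH4 plume / integral of tracer plume)
q_methane = q_tracer * integrate(x, ch4) / integrate(x, tracer)
```

Here the methane plume integral is four times the tracer's, so the estimated methane emission rate is four times the tracer release rate.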
Towards 3D Noise Source Localization using Matched Field Processing
NASA Astrophysics Data System (ADS)
Umlauft, J.; Walter, F.; Lindner, F.; Flores Estrella, H.; Korn, M.
2017-12-01
Matched Field Processing (MFP) is an array-processing and beamforming method, initially developed in ocean acoustics, that locates noise sources in range, depth and azimuth. In this study, we discuss the applicability of MFP to geophysical problems on the exploration scale and its suitability as a monitoring tool for near-surface processes. First, we used synthetic seismograms to analyze the resolution and sensitivity of MFP in a 3D environment. The inversion shows how the localization accuracy is affected by the array design, pre-processing techniques, the velocity model and the considered wave field characteristics. Hence, we can formulate guidelines for improved MFP handling. Additionally, we present field datasets acquired in two different environmental settings and in the presence of different source types. Small-scale, dense-aperture arrays (Ø < 1 km) were installed on a natural CO2 degassing field (Czech Republic) and on a glacier site (Switzerland). The located noise sources form distinct three-dimensional zones and channel-like structures (spanning several hundred metres in depth), which could be linked to the environmental processes expected at each test site. Furthermore, fast spatio-temporal variations (hours to days) of the source distribution could be successfully monitored.
NASA Astrophysics Data System (ADS)
Na, M.; Lee, S.; Kim, G.; Kim, H. S.; Rho, J.; Ok, J. G.
2017-12-01
Detecting and mapping the spatial distribution of radioactive materials is of great importance for environmental and security issues. We design and present a novel hemispherical rotational modulation collimator (H-RMC) system which can visualize the location of a radiation source by collecting signals from incident rays that pass through collimator masks. The H-RMC system comprises a servo motor-controlled rotating module and a hollow, heavy-metallic hemisphere with slits/slats equally spaced at the same angle subtended from the main axis. In addition, we designed an auxiliary instrument to test the imaging performance of the H-RMC system, comprising a high-precision x- and y-axis staging station on which one can mount radiation sources of various shapes. We fabricated the H-RMC system, which can be operated in a fully automated fashion through a computer-based controller, and verified the accuracy and reproducibility of the system by measuring the rotational and linear positions with respect to the programmed values. Our H-RMC system may provide a pivotal tool for spatial radiation imaging with high reliability and accuracy.
Lonini, Luca; Reissman, Timothy; Ochoa, Jose M; Mummidisetty, Chaithanya K; Kording, Konrad; Jayaraman, Arun
2017-10-01
The objective of rehabilitation after spinal cord injury is to enable successful function in everyday life and independence at home. Clinical tests can assess whether patients are able to execute functional movements but are limited in assessing such information at home. A prototype system is developed that detects stand-to-reach activities, a movement with important functional implications, at multiple locations within a mock kitchen. Ten individuals with incomplete spinal cord injuries performed a sequence of standing and reaching tasks. The system monitored their movements by combining two sources of information: a triaxial accelerometer, placed on the subject's thigh, detected sitting or standing, and a network of radio frequency tags, wirelessly connected to a wrist-worn device, detected reaching at three locations. A threshold-based algorithm detected execution of the combined tasks and accuracy was measured by the number of correctly identified events. The system was shown to have an average accuracy of 98% for inferring when individuals performed stand-to-reach activities at each tag location within the same room. The combination of accelerometry and tags yielded accurate assessments of functional stand-to-reach activities within a home environment. Optimization of this technology could simplify patient compliance and allow clinicians to assess functional home activities.
Improved Bayesian Infrasonic Source Localization for regional infrasound
Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.
2015-10-20
The Bayesian Infrasonic Source Localization (BISL) methodology and the mathematical framework used therein are examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth deviation models. Frameworks for constructing propagation-based celerity-range and azimuth deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States, produced by rocket motor detonations at the Utah Test and Training Range, are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases, and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds for all cases.
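A minimal sketch of the von Mises azimuth likelihood mentioned above, using only stdlib Python (the full BISL posterior also folds in celerity-range priors, which are not shown; the kappa value is an arbitrary illustration):

```python
import math

def von_mises_pdf(theta, mu, kappa):
    """von Mises density for an angular residual (radians); the Bessel
    function I0 is evaluated by its power series, fine for moderate kappa."""
    i0 = sum((kappa / 2.0) ** (2 * k) / math.factorial(k) ** 2
             for k in range(30))
    return math.exp(kappa * math.cos(theta - mu)) / (2.0 * math.pi * i0)

# Sharper concentration (larger kappa) rewards detections whose observed
# back-azimuth matches the model prediction mu.
p_match = von_mises_pdf(0.0, 0.0, kappa=4.0)
p_off = von_mises_pdf(math.pi / 2, 0.0, kappa=4.0)

# Sanity check: the density integrates to ~1 over one full circle
# (midpoint rule on a fine grid).
n = 10000
total = sum(von_mises_pdf(-math.pi + 2 * math.pi * (i + 0.5) / n, 0.0, 4.0)
            for i in range(n)) * (2 * math.pi / n)
```

The von Mises distribution is the circular analogue of a Gaussian, which is why it is the natural choice for azimuth-deviation statistics.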
Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum
NASA Astrophysics Data System (ADS)
Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-11-01
The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and for three-dimensional Earth structure. Although similar methods have been proposed previously, they have not yet been applied to data to the best of our knowledge. We apply this method to image the seasonally varying sources of Earth's hum during North and South Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during North Hemisphere winter, as well as South Pacific coasts and several distinct locations in the Southern Ocean in South Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct current density in the brain from electrical measurements with excellent temporal resolution (~ms). The corresponding EEG inverse problem is ill-posed and has infinitely many solutions, because the number of EEG sensors is usually much smaller than the number of potential dipole locations and the recorded signals are contaminated by noise. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortical surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization enhances sparsity and accelerates computation compared with ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM).
Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
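Why ℓ1−2 promotes sparsity more strongly than ℓ1 alone can be seen from a tiny numeric example: the penalty ||x||₁ − ||x||₂ vanishes exactly on 1-sparse vectors and grows as the same ℓ1 mass is spread over more entries. The vectors below are invented for illustration.

```python
import math

def l1_minus_l2(x):
    """The ℓ1−ℓ2 sparsity penalty: zero iff x has at most one nonzero."""
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return l1 - l2

sparse = [0.0, 3.0, 0.0, 0.0]   # one active source: ||x||1 = ||x||2 = 3
spread = [1.5, 1.5, 1.5, 1.5]   # same ℓ1 mass spread over four entries
p_sparse = l1_minus_l2(sparse)  # 3 - 3 = 0: no penalty
p_spread = l1_minus_l2(spread)  # 6 - 3 = 3: penalized
```

An ℓ1-only penalty would score both vectors identically (6 vs 3 in favor of the sparse one, but proportional to scale); the ℓ1−ℓ2 difference singles out the concentrated solution, which matches the focal current-density estimates the method targets.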
Pulsars as Celestial Beacons to Detect the Motion of the Earth
NASA Astrophysics Data System (ADS)
Ruggiero, Matteo Luca; Capolongo, Emiliano; Tartaglia, Angelo
In order to show the in-principle viability of a recently proposed relativistic positioning method based on the use of pulsed signals from sources at infinity, we present an application example reconstructing the world line of an idealized Earth in the reference frame of distant pulsars. The method considers the null four-vectors built from the period of the pulses and the direction cosines of the propagation from each source. Starting from a simplified problem (a receiver at rest), we have been able to calibrate our procedure, evidencing the influence of the uncertainty on the arrival times of the pulses as measured by the receiver and of the numerical treatment of the data. The most relevant parameter turns out to be the accuracy of the clock used by the receiver. Actually, the uncertainty used in the simulations combines the accuracy of the clock and the fluctuations in the sources. As an evocative example, the method has then been applied to the case of an ideal observer moving as a point on the surface of the Earth. The input has been the simulated arrival times of the signals from four pulsars at the location of the Parkes radiotelescope in Australia. Substantial simplifications have been made, both in excluding the visibility problems due to the actual size of the planet and in the assumed behavior of the sources. A rough application of the method to a three-day run gives a correct result with poor accuracy. The accuracy is then enhanced to the order of a few hundred meters if a continuous set of data is assumed. The method could actually be used for navigation across the solar system and be based on artificial sources rather than pulsars. The viability of the method, whose additional value is its self-sufficiency, i.e. independence from any control by other operators, has been confirmed.
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. 
Herein lies a paradox for spatial analysis: for a given level of positional error, increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
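The perturbability idea can be sketched for a binary distance-band spatial weights matrix: displace locations by a positional error and count the fraction of weight entries that flip. The construction, point layout, and displacement below are illustrative assumptions, not the LIGA implementation.

```python
import math

def weights(points, d):
    """Binary adjacency: 1 if two distinct points lie within distance d."""
    n = len(points)
    return [[1 if i != j and math.dist(points[i], points[j]) <= d else 0
             for j in range(n)] for i in range(n)]

def perturbability(points, moved, d):
    """Fraction of off-diagonal weight entries that flip when each
    location is replaced by its positionally perturbed counterpart."""
    w0, w1 = weights(points, d), weights(moved, d)
    n = len(points)
    flips = sum(w0[i][j] != w1[i][j] for i in range(n) for j in range(n))
    return flips / (n * (n - 1))

# Four collinear points; perturb only the second one by +0.5 in x.
points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
moved = [(0.0, 0.0), (1.5, 0.0), (2.0, 0.0), (3.0, 0.0)]
p = perturbability(points, moved, d=1.2)   # the (0,1) adjacency flips off
```

Repeating this over a distribution of displacements (rather than one deterministic shift) yields the perturbation curves described in the abstract.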
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive iteration of positioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
NASA Technical Reports Server (NTRS)
Casper, Paul W.; Bent, Rodney B.
1991-01-01
The algorithm used in earlier-generation time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.
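The size of the oblateness effect can be illustrated by comparing the length of one degree of latitude on a mean-radius sphere with that on an ellipsoid, using the meridional radius of curvature. WGS84 constants are assumed here for illustration; the systems in the abstract may use a different datum.

```python
import math

A = 6378137.0                 # WGS84 semi-major axis (m)
F = 1.0 / 298.257223563       # WGS84 flattening
E2 = F * (2.0 - F)            # first eccentricity squared
R_MEAN = 6371008.8            # mean Earth radius (m)

def meridian_radius(lat_deg):
    """Meridional radius of curvature M(phi) on the WGS84 ellipsoid."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5

def one_degree_of_latitude(radius):
    """Arc length of one degree of latitude for a given radius (m)."""
    return radius * math.pi / 180.0

lat = 45.0
d_sphere = one_degree_of_latitude(R_MEAN)
d_ellipsoid = one_degree_of_latitude(meridian_radius(lat))
offset_m = abs(d_sphere - d_ellipsoid)   # tens of metres per degree at 45 deg
```

At mid-latitudes the discrepancy is tens of metres per degree of latitude, so over baselines spanning many degrees it accumulates to the kilometre-scale systematic offsets described above.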
Validity of secondary retail food outlet data: a systematic review.
Fleischhacker, Sheila E; Evenson, Kelly R; Sharkey, Joseph; Pitts, Stephanie B Jilcott; Rodriguez, Daniel A
2013-10-01
Improving access to healthy foods is a promising strategy to prevent nutrition-related chronic diseases. To characterize retail food environments and identify areas with limited retail access, researchers, government programs, and community advocates have primarily used secondary retail food outlet data sources (e.g., InfoUSA or government food registries). To advance the state of the science on measuring retail food environments, this systematic review examined the evidence for validity reported for secondary retail food outlet data sources for characterizing retail food environments. A literature search was conducted through December 31, 2012, to identify peer-reviewed published literature that compared secondary retail food outlet data sources to primary data sources (i.e., field observations) for accuracy of identifying the type and location of retail food outlets. Data were analyzed in 2013. Nineteen studies met the inclusion criteria. The evidence for validity reported varied by secondary data sources examined, primary data-gathering approaches, retail food outlets examined, and geographic and sociodemographic characteristics. More than half of the studies (53%) did not report evidence for validity by type of food outlet examined and by a particular secondary data source. Researchers should strive to gather primary data but if relying on secondary data sources, InfoUSA and government food registries had higher levels of agreement than reported by other secondary data sources and may provide sufficient accuracy for exploring these associations in large study areas. Published by Elsevier Inc. on behalf of American Journal of Preventive Medicine.
Requirements for Coregistration Accuracy in On-Scalp MEG.
Zetter, Rasmus; Iivanainen, Joonas; Stenroos, Matti; Parkkonen, Lauri
2018-06-22
Recent advances in magnetic sensing have made on-scalp magnetoencephalography (MEG) possible. In particular, optically-pumped magnetometers (OPMs) have reached sensitivity levels that enable their use in MEG. In contrast to the SQUID sensors used in current MEG systems, OPMs do not require cryogenic cooling and can thus be placed within millimetres of the head, enabling the construction of sensor arrays that conform to the shape of an individual's head. To properly estimate the location of neural sources within the brain, one must accurately know the position and orientation of sensors in relation to the head. With adaptable on-scalp MEG sensor arrays, this coregistration becomes more challenging than in current SQUID-based MEG systems that use rigid sensor arrays. Here, we used simulations to quantify how accurately one needs to know the position and orientation of sensors in an on-scalp MEG system. The effects that different types of localisation errors have on forward modelling and on source estimates obtained by minimum-norm estimation, dipole fitting, and beamforming are detailed. We found that sensor position errors generally have a larger effect than orientation errors and that these errors affect the localisation accuracy of superficial sources the most. To obtain similar or higher accuracy than with current SQUID-based MEG systems, RMS sensor position and orientation errors should be [Formula: see text] and [Formula: see text], respectively.
Testing the seismology-based landquake monitoring system
NASA Astrophysics Data System (ADS)
Chao, Wei-An
2016-04-01
I have developed a real-time landquake monitoring (RLM) system, which monitors large-scale landquake activity in Taiwan using the real-time seismic network of the Broadband Array in Taiwan for Seismology (BATS). The RLM system applies a grid-based general source inversion (GSI) technique to obtain the preliminary source location and force mechanism. A 2-D virtual source grid on the Taiwan Island is created with an interval of 0.2° in both latitude and longitude. The depth of each grid point is fixed on the free-surface topography. A database of synthetics is stored on disk; these are obtained using Green's functions computed by the propagator matrix approach for a 1-D average velocity model, at all stations, from each virtual source-grid point for nine elementary source components: six elementary moment tensors and three orthogonal (north, east and vertical) single forces. The RLM system was run offline for events detected in previous studies. An important aspect of the RLM system is the implementation of the GSI approach for different source types (e.g., full moment tensor, double-couple faulting, and explosion source) by grid search through the 2-D virtual source grid, to automatically identify landquake events based on the improvement in waveform fitness and to evaluate the best-fit solution in the monitoring area. With this approach, not only the force mechanisms but also the event occurrence time and location can be obtained simultaneously, about 6-8 min after an event occurs. To improve the accuracy of the GSI-determined location, I further apply a landquake epicenter determination (LED) method that maximizes the coherency of the high-frequency (1-3 Hz) horizontal envelope functions to determine the final source location. With good knowledge of the source location, I perform landquake force history (LFH) inversion to investigate the source dynamics (e.g., trajectory) of relatively large landquake events.
With providing aforementioned source information in real-time, the government and emergency response agencies have sufficient reaction time for rapid assessment and response to landquake hazards. Since 2016, the RLM system has operated online.
Shariat, M H; Gazor, S; Redfearn, D
2015-08-01
Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is an extremely costly public health problem. Catheter-based ablation is a common minimally invasive procedure to treat AF. Contemporary mapping methods are highly dependent on the accuracy of anatomic localization of rotor sources within the atria. In this paper, using simulated atrial intracardiac electrograms (IEGMs) during AF, we propose a computationally efficient method for localizing the tip of the electrical rotor with an Archimedean/arithmetic spiral wavefront. The proposed method deploys the locations of electrodes of a catheter and their IEGMs activation times to estimate the unknown parameters of the spiral wavefront including its tip location. The proposed method is able to localize the spiral as soon as the wave hits three electrodes of the catheter. Our simulation results show that the method can efficiently localize the spiral wavefront that rotates either clockwise or counterclockwise.
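A toy version of the spiral-fitting idea can be sketched under strong simplifying assumptions: known spiral pitch b, unwrapped phase (electrodes confined to a small angular sector), and noiseless synthetic activation times. This is not the authors' algorithm, only an illustration of estimating a spiral tip from electrode activation times by grid search, recovering rotation rate and phase per candidate with a linear least-squares fit.

```python
import math

def activation_time(tip, b, omega, t0, p):
    """Arrival time of an Archimedean spiral wavefront (phase
    omega*t = phi + r/b, unwrapped) at point p, rotor tip at `tip`."""
    dx, dy = p[0] - tip[0], p[1] - tip[1]
    return (math.atan2(dy, dx) + math.hypot(dx, dy) / b) / omega + t0

def fit_tip(electrodes, times, b, grid):
    """Grid search over candidate tips; for each candidate the rotation
    rate and phase come from a 1-D least-squares fit t ~ alpha*f + c."""
    best, best_res = None, float("inf")
    for tip in grid:
        f = [math.atan2(y - tip[1], x - tip[0])
             + math.hypot(x - tip[0], y - tip[1]) / b
             for x, y in electrodes]
        mf, mt = sum(f) / len(f), sum(times) / len(times)
        sff = sum((v - mf) ** 2 for v in f)
        alpha = sum((v - mf) * (t - mt) for v, t in zip(f, times)) / sff
        c = mt - alpha * mf
        res = sum((alpha * v + c - t) ** 2 for v, t in zip(f, times))
        if res < best_res:
            best, best_res = tip, res
    return best

# Synthetic rotor at the origin; electrodes well to its right so the
# angle never wraps. b, omega, t0 are arbitrary illustrative values.
electrodes = [(2.0, 0.5), (3.0, -0.4), (2.5, 1.0), (3.5, 0.8),
              (2.2, -0.9), (3.2, 0.2), (2.8, -1.1), (3.8, -0.6)]
times = [activation_time((0.0, 0.0), 1.0, 2 * math.pi, 0.05, p)
         for p in electrodes]
grid = [(gx * 0.2 - 0.4, gy * 0.2 - 0.4)
        for gx in range(5) for gy in range(5)]
tip = fit_tip(electrodes, times, 1.0, grid)
```

Because the candidate set includes the true tip and the data are noiseless, the residual is exactly zero there, which is why the grid search recovers the origin.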
a Comparative Analysis of Five Cropland Datasets in Africa
NASA Astrophysics Data System (ADS)
Wei, Y.; Lu, M.; Wu, W.
2018-04-01
Food security, particularly in Africa, is a challenge yet to be resolved, and the cropland area and spatial distribution obtained from remote sensing imagery are vital information. In this paper we compare five global cropland datasets (CCI Land Cover, GlobCover, MODIS Collection 5, GlobeLand30 and Unified Cropland) for circa 2010 over Africa in terms of cropland area and spatial location. The accuracy of the cropland area calculated from the five datasets was analyzed against statistical data. Based on validation samples, the spatial-location accuracies of the five cropland products were assessed using error matrices. The results show that GlobeLand30 fits the statistics best, followed by MODIS Collection 5 and Unified Cropland; GlobCover and CCI Land Cover have lower accuracies. For spatial location of cropland, GlobeLand30 reaches the highest accuracy, followed by Unified Cropland, MODIS Collection 5 and GlobCover; CCI Land Cover has the lowest accuracy. The spatial-location accuracy of the five datasets in the Csa (Mediterranean) climate zone, with its suitable farming conditions, is generally higher than in the Bsk (cold semi-arid) zone.
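The error-matrix assessment mentioned above reduces to a few lines of arithmetic: overall accuracy from the diagonal, plus per-class user's and producer's accuracy. The 2-class matrix below is invented for illustration, not taken from the compared datasets.

```python
def accuracy_metrics(matrix):
    """Overall, user's (per mapped class) and producer's (per reference
    class) accuracy from a square error matrix.
    Convention: rows = mapped class, columns = reference class."""
    total = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    overall = diag / total
    users = [matrix[i][i] / sum(matrix[i]) for i in range(len(matrix))]
    producers = [matrix[i][i] / sum(row[i] for row in matrix)
                 for i in range(len(matrix))]
    return overall, users, producers

# Hypothetical counts: class 0 = cropland, class 1 = non-cropland.
m = [[80, 20],
     [10, 90]]
overall, users, producers = accuracy_metrics(m)
```

Here 80 of 100 mapped cropland samples are truly cropland (user's accuracy 0.80), while 80 of 90 reference cropland samples were mapped as cropland (producer's accuracy ~0.89); the overall accuracy is 170/200 = 0.85.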
Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites
2010-01-01
and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the ... approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points ... provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry
Testing contamination source identification methods for water distribution networks
Seth, Arpan; Klise, Katherine A.; Siirola, John D.; ...
2016-04-01
In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. Various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. The following manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors including the size of the WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods, and a testing framework that includes the test cases and analysis tools presented in this paper, have been integrated into EPA's Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.
NASA Technical Reports Server (NTRS)
Carey, Lawrence D.; Schultz, Chris J.; Petersen, Walter A.; Rudlosky, Scott D.; Bateman, Monte; Cecil, Daniel J.; Blakeslee, Richard J.; Goodman, Steven J.
2011-01-01
The planned GOES-R Geostationary Lightning Mapper (GLM) will provide total lightning data on the location and intensity of thunderstorms over a hemispheric spatial domain. Ongoing GOES-R research activities are demonstrating the utility of total flash rate trends for enhancing forecasting skill for severe storms. To date, GLM total lightning proxy trends have been well served by ground-based VHF systems such as the Northern Alabama Lightning Mapping Array (NALMA). The NALMA (and other similar networks in Washington DC and Oklahoma) provide high detection efficiency (> 90%) and location accuracy (< 1 km) observations of total lightning within about 150 km of network center. To expand GLM proxy applications for high impact convective weather (e.g., severe weather, aviation hazards), it is desirable to investigate the utility of additional sources of continuous lightning data that can serve as suitable GLM proxies over large spatial scales (order 100s to 1000 km or more), including typically data-denied regions such as the oceans. Potential sources of GLM proxy data include ground-based long-range (regional or global) VLF/LF lightning networks such as the relatively new Vaisala Global Lightning Dataset (GLD360) and Weatherbug Total Lightning Network (WTLN). Before using these data in GLM research applications, it is necessary to compare them with LMAs and well-quantified cloud-to-ground (CG) lightning networks, such as Vaisala's National Lightning Detection Network (NLDN), for assessment of total and CG lightning location accuracy, detection efficiency and flash rate trends. Preliminary inter-comparisons from these lightning networks during selected severe weather events will be presented and their implications discussed.
Optimal networks of future gravitational-wave telescopes
NASA Astrophysics Data System (ADS)
Raffai, Péter; Gondán, László; Heng, Ik Siong; Kelecsényi, Nándor; Logue, Josh; Márka, Zsuzsa; Márka, Szabolcs
2013-08-01
We aim to find the optimal site locations for a hypothetical network of 1-3 triangular gravitational-wave telescopes. We define the following N-telescope figures of merit (FoMs) and construct three corresponding metrics: (a) capability of reconstructing the signal polarization; (b) accuracy in source localization; and (c) accuracy in reconstructing the parameters of a standard binary source. We also define a combined metric that takes into account the three FoMs with practically equal weight. After constructing a geomap of possible telescope sites, we give the optimal 2-telescope networks for the four FoMs separately in example cases where the location of the first telescope has been predetermined. We found that, based on the combined metric, placing the first telescope in Australia provides the most options for optimal site selection when extending the network with a second instrument. We suggest geographical regions where a potential second and third telescope could be placed to get optimal network performance in terms of our FoMs. Additionally, we use a similar approach to find the optimal location and orientation for the proposed LIGO-India detector within a five-detector network with Advanced LIGO (Hanford), Advanced LIGO (Livingston), Advanced Virgo, and KAGRA. We found that the FoMs do not change greatly among sites within India, though the network can suffer a significant loss in reconstructing signal polarizations if the orientation angle of an L-shaped LIGO-India is not set to the optimal value of ~58.2° (+k × 90°) (measured counterclockwise from East to the bisector of the arms).
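The equal-weight combination of several FoMs can be illustrated with a toy calculation; the min-max normalization, the equal weights, and the FoM values below are assumptions for the example, not the paper's actual metric construction.

```python
# Illustrative sketch: combine per-network figures of merit with
# (practically) equal weights. Each FoM is min-max normalized across the
# candidate networks so that no single scale dominates, then averaged.
import numpy as np

def combined_metric(foms, weights=None):
    names = sorted(foms)
    M = np.array([foms[n] for n in names], dtype=float)
    lo, hi = M.min(axis=1, keepdims=True), M.max(axis=1, keepdims=True)
    M = (M - lo) / np.where(hi > lo, hi - lo, 1.0)   # min-max normalize each FoM
    w = np.full(len(names), 1.0 / len(names)) if weights is None else np.asarray(weights)
    return w @ M

# Three candidate networks scored on the three FoMs named in the abstract.
scores = combined_metric({
    "polarization_reconstruction": [0.2, 0.9, 0.6],
    "source_localization": [0.5, 0.7, 0.9],
    "binary_parameter_accuracy": [0.3, 0.8, 0.7],
})
best = int(np.argmax(scores))   # index of the preferred candidate network
```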
NASA Astrophysics Data System (ADS)
Marshall, M. E.; Salzberg, D. H.
2006-05-01
The purpose of this study is to further demonstrate the accuracy of a full-waveform earthquake location method using semi-empirical synthetic waveforms and data received from two or more regional stations. To test the method, well-constrained events from southern and central California are being used as a testbed. A suite of regional California events is being processed. Our focus is on aftershocks of the Parkfield event, the Hector Mine event, and the San Simeon event. In all three cases, the aftershock locations are known to within 1 km. For Parkfield, with its extremely dense local network, the events are located to within 300 m or better. We are processing the data using a grid spacing of 0.5 km in three dimensions. Often, the minimum in residual from the semi-empirical waveform matching is within one grid point of the 'ground truth' location, which is as good as can be expected. We will present the results and compare them to the event locations reported in catalogs using the dense local seismic networks present in California. The preliminary results indicate that matched-waveform locations are able to resolve the locations with accuracies better than GT5, and possibly approaching GT1. These results require only two stations at regional distances and differing azimuths. One of the disadvantages of the California testbed is that all of the earthquakes in a particular region typically have very similar focal mechanisms. In theory, the semi-empirical approach should allow us to generate well-matched synthetic waveforms regardless of the varying mechanisms. To verify this aspect, we apply the technique to relocate and simulate the JUNCTION nuclear test (March 26, 1992) using waveforms from the Little Skull Mountain mainshock.
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power (for BLT) or the total number of fluorophore molecules (for FT). For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
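The shrinking-permissible-region iteration can be sketched with surrogate data; the random Green's matrix and the plain least-squares fit below stand in for the paper's diffusion model and L1-norm objective, so this is a structural illustration only.

```python
# Sketch of the shrinking permissible region: at each iteration, fit source
# strengths restricted to the current permissible points, then keep only the
# points most likely to contribute. The region that attains the minimum
# objective is reported as the reconstruction support.
import numpy as np

rng = np.random.default_rng(0)
n_det, n_pts = 12, 8
G = rng.random((n_det, n_pts))        # surrogate Green's-function matrix
s_true = np.zeros(n_pts)
s_true[3] = 2.0                        # a single source at point 3
y = G @ s_true                         # noise-free detector readings

permissible = list(range(n_pts))
best_resid, best_region = np.inf, permissible
while True:
    s = np.linalg.lstsq(G[:, permissible], y, rcond=None)[0]
    resid = np.linalg.norm(G[:, permissible] @ s - y)
    if resid < best_resid + 1e-12:     # ties favour the smaller region
        best_resid, best_region = resid, list(permissible)
    if len(permissible) == 1:
        break
    # shrink: keep the half of the points with the largest fitted strength
    order = np.argsort(-np.abs(s))
    permissible = [permissible[i] for i in order[: max(1, len(permissible) // 2)]]
recovered = best_region
```

With noise-free data the region collapses onto the true source point; the paper's algorithm does the analogous shrinkage under an L1 objective without prior knowledge of the source.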
Jones, Kelly K; Zenk, Shannon N; Tarlov, Elizabeth; Powell, Lisa M; Matthews, Stephen A; Horoi, Irina
2017-01-07
Food environment characterization in health studies often requires data on the location of food stores and restaurants. While commercial business lists are commonly used as data sources for such studies, current literature provides little guidance on how to use validation study results to make decisions on which commercial business list to use and how to maximize the accuracy of those lists. Using data from a retrospective cohort study [Weight And Veterans' Environments Study (WAVES)], we (a) explain how validity and bias information from existing validation studies (count accuracy, classification accuracy, locational accuracy, as well as potential bias by neighborhood racial/ethnic composition, economic characteristics, and urbanicity) was used to determine which commercial business listing to purchase for retail food outlet data and (b) describe the methods used to maximize the quality of the data and the results of this approach. We developed data improvement methods based on existing validation studies. These methods included purchasing records from commercial business lists (InfoUSA and Dun and Bradstreet) based on store/restaurant names as well as standard industrial classification (SIC) codes, reclassifying records by store type, improving the geographic accuracy of records, and deduplicating records. We examined the impact of these procedures on food outlet counts in US census tracts. After cleaning and deduplicating, our strategy resulted in a 17.5% reduction in the count of valid food stores purchased from InfoUSA and a 5.6% reduction in the count of valid restaurants purchased from Dun and Bradstreet. Locational accuracy was improved for 7.5% of records by applying street addresses of subsequent years to records with post-office (PO) box addresses. In total, up to 83% of US census tracts annually experienced a change (either positive or negative) in the count of retail food outlets between the initial purchase and the final dataset.
Our study provides a step-by-step approach to purchase and process business list data obtained from commercial vendors. The approach can be followed by studies of any size, including those with datasets too large to process each record by hand and will promote consistency in characterization of the retail food environment across studies.
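A toy version of two of the cleaning steps (deduplication, and replacing PO-box addresses with a later year's street address) might look like the following; the record fields, store names, and matching rule are invented for illustration and are far simpler than the actual InfoUSA/Dun and Bradstreet processing.

```python
# Sketch of business-list cleaning: deduplicate on a normalized
# name+address key, then upgrade PO-box addresses using street addresses
# found for the same store in a later year's purchase.

def normalize(rec):
    return (rec["name"].strip().lower(), rec["address"].strip().lower())

def deduplicate(records):
    seen, out = set(), []
    for rec in records:
        key = normalize(rec)
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out

def fix_po_boxes(records, later_year):
    street = {r["name"].lower(): r["address"] for r in later_year
              if not r["address"].lower().startswith("po box")}
    for r in records:
        if r["address"].lower().startswith("po box") and r["name"].lower() in street:
            r["address"] = street[r["name"].lower()]
    return records

records_2010 = [
    {"name": "Corner Grocery", "address": "PO Box 12"},
    {"name": "Corner Grocery", "address": "PO Box 12"},   # duplicate record
    {"name": "Main St Deli", "address": "101 Main St"},
]
records_2011 = [{"name": "Corner Grocery", "address": "44 Oak Ave"}]

cleaned = fix_po_boxes(deduplicate(records_2010), records_2011)
```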
Localizing gravitational wave sources with single-baseline atom interferometers
Graham, Peter W.; Jung, Sunghoon
2018-01-31
Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. Here we show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect, since these are the highest frequencies at which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single baseline orbits the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.
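The Doppler-localization principle can be demonstrated with a deliberately simplified two-dimensional toy model (a detector on a circular orbit in the ecliptic plane observing a noise-free monochromatic source); the detector response, orbit, and fitting procedure here are illustrative assumptions, not the paper's analysis.

```python
# Toy model: the orbital motion imposes a sinusoidal Doppler modulation on
# the observed frequency; the phase of that modulation encodes the source's
# ecliptic longitude, which a grid search can recover.
import numpy as np

C = 2.998e8          # speed of light, m/s
V_ORB = 2.978e4      # Earth's orbital speed, m/s
F_SRC = 1.0          # source GW frequency, Hz (midband)

def doppler_series(lam, t_frac):
    """Observed frequency over t_frac (fraction of an orbit, 0..1) for a
    source at ecliptic longitude lam (rad)."""
    theta = 2 * np.pi * t_frac                  # orbital phase
    v_los = V_ORB * np.sin(theta - lam)         # line-of-sight velocity (toy)
    return F_SRC * (1 + v_los / C)

t = np.linspace(0, 0.4, 200)    # source observed for ~5 months
lam_true = 1.1
f_obs = doppler_series(lam_true, t)

# Grid search: which longitude best reproduces the observed modulation?
grid = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
errs = [np.sum((doppler_series(g, t) - f_obs) ** 2) for g in grid]
lam_hat = grid[int(np.argmin(errs))]
```

The months-long source lifetime is what makes this work: the detector sweeps through a large fraction of its orbit, so the modulation phase is well constrained.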
NASA Astrophysics Data System (ADS)
Suyehiro, K.; Sugioka, H.; Watanabe, T.
2008-12-01
The hydroacoustic monitoring component of the International Monitoring System (IMS) for CTBT (Comprehensive Nuclear-Test-Ban Treaty) verification utilizes six hydrophone stations and five seismic stations (called T-phase stations) for worldwide detection. Conspicuous signals of natural origin include those from earthquakes, volcanic eruptions, and whale calls. Among artificial sources are non-nuclear explosions and airgun shots. It is important for the IMS to detect and locate hydroacoustic events with sufficient accuracy, correctly characterize the signals, and identify the source. As there are a number of seafloor cable networks operated offshore of the Japanese islands, facing the Pacific Ocean, for monitoring regional seismicity, the data from these stations (pressure and seismic sensors) may be utilized to increase the capability of the IMS. We use these data to compare some selected event parameters with those by the IMS. In particular, there have been several unconventional acoustic signals in the western Pacific, which were also captured by IMS hydrophones across the Pacific in the period from 2007 to the present. These anomalous examples, along with dynamite shots used for seismic crustal structure studies and other natural sources, will be presented in order to help improve the IMS verification capabilities for detection, location and characterization of anomalous signals.
Evaluation of Automatic Vehicle Location accuracy
DOT National Transportation Integrated Search
1999-01-01
This study assesses the accuracy of the Automatic Vehicle Location (AVL) data provided for the buses of the Ann Arbor Transportation Authority with Global Positioning System (GPS) technology. In a sample of eighty-nine bus trips, two kinds of accuracy...
NASA Astrophysics Data System (ADS)
Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten
2017-02-01
Global earthquake locations are often associated with very large systematic travel-time residuals even for clear arrivals, especially for regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adopted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ˜20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs. 
Comparison with a high-resolution local catalogue reveals that our relocation process significantly improves the hypocentre locations compared to standard locations.
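The core of the station-terms idea, subtracting a per-station timing correction estimated from the residuals themselves rather than from a velocity model, can be illustrated with synthetic numbers (a single-region, one-pass simplification of the shrinking-box scheme):

```python
# Toy illustration of source-specific station terms: each station carries a
# systematic travel-time bias from unmodeled velocity structure. The median
# residual per station, over many nearby events, serves as a timing
# correction; subtracting it shrinks the residual spread (here measured by
# the median absolute deviation, MAD). Numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_events, n_sta = 200, 6
bias = np.array([1.2, -0.8, 0.5, 0.0, 2.0, -1.5])      # seconds, per station
noise = rng.normal(0.0, 0.1, size=(n_events, n_sta))    # pick noise
residuals = bias + noise                                 # observed residuals

terms = np.median(residuals, axis=0)                     # station terms
corrected = residuals - terms

mad_before = np.median(np.abs(residuals - np.median(residuals)))
mad_after = np.median(np.abs(corrected - np.median(corrected)))
```

In the actual method this estimate-and-correct cycle is iterated while the set of contributing events shrinks around each source region, so the terms become source-position dependent.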
The assessment of accuracy of inner shapes manufactured by FDM
NASA Astrophysics Data System (ADS)
Gapiński, Bartosz; Wieczorowski, Michał; Båk, Agata; Domínguez, Alejandro Pereira; Mathia, Thomas
2018-05-01
3D printing has created totally new manufacturing possibilities. It is possible, for example, to produce closed inner shapes with different geometrical features. Unfortunately, traditional methods are not suitable for verifying manufacturing accuracy, because it would be necessary to cut the workpieces. In this paper, the application of computed tomography (x-ray micro-CT) to the accuracy assessment of inner shapes is presented, as already reported in some papers. For the research, hollow cylindrical samples of 20 mm diameter and 300 mm length were manufactured by means of FDM. A sphere, cone and cube were put inside these elements. All measurements were made with the application of CT. The measurement results enable us to obtain a full geometrical image of both inner and outer surfaces of a cylinder as well as the shapes of the inner elements. Additionally, it is possible to inspect the structure of a printed element - the size and location of the supporting net and all the other supporting elements necessary to hold up the walls created over empty spaces. The results obtained with this method were compared with the CAD models which were the source of data for 3D printing. This in turn made it possible to assess the manufacturing accuracy of the particular figures inserted into the cylinders. The influence of the location of the inner supporting walls on shape deformation was also investigated. The results obtained in this way show how important CT can be in the assessment of 3D-printed objects.
Three dimensional time reversal optical tomography
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Alrubaiee, M.; Xu, M.; Gayen, S. K.
2011-03-01
A time reversal optical tomography (TROT) approach is used to detect and locate absorptive targets embedded in a highly scattering turbid medium, to assess its potential in breast cancer detection. The TROT experimental arrangement uses multi-source probing, multi-detector signal acquisition, and the Multiple-Signal-Classification (MUSIC) algorithm for target location retrieval. Light transport from multiple sources through the intervening medium with embedded targets to the detectors is represented by a response matrix constructed using experimental data. A TR matrix is formed by multiplying the response matrix by its transpose. The eigenvectors with leading non-zero eigenvalues of the TR matrix correspond to embedded objects. The approach was used to: (a) obtain the location and spatial resolution of an absorptive target as a function of its axial position between the source and detector planes; and (b) study variation in spatial resolution of two targets at the same axial position but different lateral positions. The target(s) were glass sphere(s) of diameter ~9 mm filled with ink (absorber) embedded in a 60 mm-thick slab of Intralipid-20% suspension in water with an absorption coefficient μa ~ 0.003 mm⁻¹ and a transport mean free path lt ~ 1 mm at 790 nm, which emulate the average values of those parameters for human breast tissue. The spatial resolution and accuracy of target location depended on axial position and on target contrast relative to the background. Both targets could be resolved and located even when they were only 4 mm apart. The TROT approach is fast, accurate, and has the potential to be useful in breast cancer detection and localization.
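A toy numerical check of the rank structure behind this idea: under a Born-type approximation each point target contributes a rank-one term (source Green's vector times detector Green's vector) to the response matrix, so the TR matrix has one significant eigenvalue per embedded target. The 1/r Green's functions and the geometry below are crude stand-ins for diffuse light transport, chosen only to exhibit the eigenvalue count.

```python
# Build a surrogate source-to-detector response matrix K as a sum of
# rank-one contributions (one per target), form the TR matrix T = K K^T,
# and count its significant eigenvalues.
import numpy as np

def g(points, target):
    """Crude 1/r Green's-function stand-in."""
    r = np.linalg.norm(points - target, axis=1)
    return 1.0 / r

sources = np.array([[x, 0.0, 0.0] for x in np.linspace(-20, 20, 9)])
detectors = np.array([[x, 0.0, 60.0] for x in np.linspace(-20, 20, 11)])
targets = [np.array([5.0, 0.0, 30.0]), np.array([-8.0, 0.0, 25.0])]

K = sum(np.outer(g(sources, t), g(detectors, t)) for t in targets)
T = K @ K.T                                   # time-reversal matrix
eig = np.sort(np.linalg.eigvalsh(T))[::-1]
n_significant = int(np.sum(eig > 1e-6 * eig[0]))
```

In TROT the eigenvectors attached to these leading eigenvalues feed the MUSIC pseudospectrum, which peaks at the target locations.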
Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S
2012-03-01
In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and functions of consumer-grade digital cameras have been increasing remarkably thanks to modern semiconductor and digital technology, and there are many low-priced consumer-grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is keenly anticipated in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and circular target location. Target location with subpixel accuracy has been investigated as a star-tracker issue, and many target location algorithms have been proposed. It is widely accepted that the least-squares model with ellipse fitting is the most accurate algorithm. However, there are still problems for efficient digital close range photogrammetry. These problems are reconfirmation of the target location algorithms with subpixel accuracy for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close range photogrammetry using consumer-grade cameras. With this motive, empirical testing of several algorithms for target location with subpixel accuracy and an indicator for estimating the accuracy are investigated in this paper using real data acquired indoors using seven consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
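For context, the simplest subpixel estimator, the intensity-weighted centroid, can be sketched on a synthetic Gaussian target; the least-squares ellipse fit the abstract identifies as most accurate is more involved and is not reproduced here.

```python
# Subpixel target location via intensity-weighted centroid: the target
# centre is the first moment of the image intensity. The test image is a
# synthetic circular (Gaussian) target at a non-integer pixel position.
import numpy as np

def weighted_centroid(img):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = img.sum()
    return (x * img).sum() / m, (y * img).sum() / m

cx, cy, sigma = 10.3, 12.7, 2.0           # true centre (subpixel) and size
y, x = np.mgrid[0:24, 0:24]
img = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

ex, ey = weighted_centroid(img)
```

On a clean symmetric target the centroid recovers the centre to far better than a pixel; real images add noise, background gradients, and perspective distortion, which is why ellipse fitting wins in practice.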
NASA Technical Reports Server (NTRS)
Stuart, J. R.
1984-01-01
The evolution of NASA's planetary navigation techniques is traced, and radiometric and optical data types are described. Doppler navigation; the Deep Space Network; differenced two-way range techniques; differential very long base interferometry; and optical navigation are treated. The Doppler system enables a spacecraft in cruise at high absolute declination to be located within a total angular uncertainty of 1/4 microrad. The two-station range measurement provides a 1 microrad backup at low declinations. Optical data locate the spacecraft relative to the target to an angular accuracy of 5 microrad. Earth-based radio navigation and its less accurate but target-relative counterpart, optical navigation, thus form complementary measurement sources, which provide a powerful sensory system to produce high-precision orbit estimates.
NASA Astrophysics Data System (ADS)
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not previously exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impact of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources.
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating these concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
Using Bluetooth proximity sensing to determine where office workers spend time at work.
Clark, Bronwyn K; Winkler, Elisabeth A; Brakenridge, Charlotte L; Trost, Stewart G; Healy, Genevieve N
2018-01-01
Most wearable devices that measure movement in workplaces cannot determine the context in which people spend time. This study examined the accuracy of Bluetooth sensing (10-second intervals) via the ActiGraph GT9X Link monitor to determine location in an office setting, using two simple, bespoke algorithms. For one work day (mean±SD 6.2±1.1 hours), 30 office workers (30% men, aged 38±11 years) simultaneously wore chest-mounted cameras (video recording) and Bluetooth-enabled monitors (initialised as receivers) on the wrist and thigh. Additional monitors (initialised as beacons) were placed in the entry, kitchen, photocopy room, corridors, and the wearer's office. Firstly, participant presence/absence at each location was predicted from the presence/absence of signals at that location (ignoring all other signals). Secondly, using the information gathered at multiple locations simultaneously, a simple heuristic model was used to predict at which location the participant was present. The Bluetooth-determined location for each algorithm was tested against the camera in terms of F-scores. When considering locations individually, the accuracy obtained was excellent in the office (F-score = 0.98 and 0.97 for thigh and wrist positions) but poor in other locations (F-score = 0.04 to 0.36), stemming primarily from a high false positive rate. The multi-location algorithm exhibited high accuracy for the office location (F-score = 0.97 for both wear positions). It also improved the F-scores obtained in the remaining locations, but not always to levels indicating good accuracy (e.g., F-score for photocopy room ≈0.1 in both wear positions). The Bluetooth signalling function shows promise for determining where workers spend most of their time (i.e., their office). 
Placing beacons in multiple locations and using a rule-based decision model improved classification accuracy; however, for workplace locations visited infrequently or with considerable movement, accuracy was below desirable levels. Further development of algorithms is warranted.
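The per-location scoring the study describes, comparing Bluetooth-predicted presence against camera ground truth epoch by epoch and summarizing with an F-score, can be sketched directly; the 10-epoch sequences below are invented for the example.

```python
# F-score (harmonic mean of precision and recall) for one location,
# computed from aligned predicted/actual presence sequences.
def f_score(predicted, actual):
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Ten 10-second epochs: beacon heard vs. camera-confirmed presence in the office.
pred  = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
truth = [1, 1, 1, 1, 1, 1, 0, 0, 1, 0]
office_f = f_score(pred, truth)
```

The high false positive rate reported for non-office locations shows up in this framework as depressed precision, and hence a low F-score, even when recall is good.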
NASA Astrophysics Data System (ADS)
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, like ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
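The first-order dual-frequency principle can be sketched as follows. The 40.3 factor is the standard first-order ionospheric constant (range error in metres equals 40.3 times the slant TEC divided by frequency squared); the frequencies, range, and TEC value are illustrative, not the paper's experiment.

```python
# Dual-frequency ionospheric correction: ranging the same target at two
# adjacent frequencies lets you solve for the slant total electron content
# (TEC) along the path and remove the first-order range error.
K_ION = 40.3  # first-order ionospheric constant (SI units)

def ionosphere_free_range(r1, r2, f1, f2):
    """Return (corrected range, slant TEC) from ranges r1 @ f1 and r2 @ f2."""
    tec = (r1 - r2) / (K_ION * (1.0 / f1**2 - 1.0 / f2**2))
    return r1 - K_ION * tec / f1**2, tec

# Synthetic check: true range 1000 km, TEC 5e17 el/m^2, a P-band pair.
r_true, tec_true = 1.0e6, 5.0e17
f1, f2 = 430e6, 445e6
r1 = r_true + K_ION * tec_true / f1**2   # measured (delayed) ranges
r2 = r_true + K_ION * tec_true / f2**2
r_corr, tec_est = ionosphere_free_range(r1, r2, f1, f2)
```

At these P-band frequencies the uncorrected ionospheric range error is on the order of 100 m, which is why single-frequency model-based corrections fall short for precision tracking.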
Perrin, Maxine; Robillard, Manon; Roy-Charland, Annie
2017-12-01
This study examined eye movements during a visual search task as well as cognitive abilities within three age groups. The aim was to explore scanning patterns across symbol grids and to better understand the impact of symbol location in AAC displays on the speed and accuracy of symbol selection. For the study, 60 students were asked to locate a series of symbols on 16-cell grids. The EyeLink 1000 was used to measure eye movements, accuracy, and response time. Accuracy was high across all cells. Participants had faster response times, longer fixations, and more frequent fixations on symbols located in the middle of the grid. Group comparisons revealed significant differences for accuracy and reaction times. The Leiter-R was used to evaluate cognitive abilities. Sustained attention and cognitive flexibility scores predicted the participants' reaction time and accuracy in symbol selection. Findings suggest that symbol location within AAC devices and individuals' cognitive abilities influence the speed and accuracy of retrieving symbols.
Acoustic Network Localization and Interpretation of Infrasonic Pulses from Lightning
NASA Astrophysics Data System (ADS)
Arechiga, R. O.; Johnson, J. B.; Badillo, E.; Michnovicz, J. C.; Thomas, R. J.; Edens, H. E.; Rison, W.
2011-12-01
We improve on the localization accuracy of thunder sources and identify infrasonic pulses that are correlated across a network of acoustic arrays. We attribute these pulses to electrostatic charge relaxation (collapse of the electric field) and attempt to model their spatial extent and acoustic source strength. Toward this objective we have developed a single audio range (20-15,000 Hz) acoustic array and a 4-station network of broadband (0.01-500 Hz) microphone arrays with aperture of ~45 m. The network has an aperture of 1700 m and was installed during the summers of 2009-2011 in the Magdalena mountains of New Mexico, an area that is subject to frequent lightning activity. We are exploring a new technique based on inverse theory that integrates information from the audio range and the network of broadband acoustic arrays to locate thunder sources more accurately than can be achieved with a single array. We evaluate the performance of the technique by comparing the location of thunder sources with RF sources located by the lightning mapping array (LMA) of Langmuir Laboratory at New Mexico Tech. We will show results of this technique for lightning flashes that occurred in the vicinity of our network of acoustic arrays and over the LMA. We will use acoustic network detection of infrasonic pulses together with LMA data and electric field measurements to estimate the spatial distribution of the charge (within the cloud) that is used to produce a lightning flash, and will try to quantify volumetric charges (charge magnitude) within clouds.
Lightning Radio Source Retrieval Using Advanced Lightning Direction Finder (ALDF) Networks
NASA Technical Reports Server (NTRS)
Koshak, William J.; Blakeslee, Richard J.; Bailey, J. C.
1998-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing and arrival time of lightning radio emissions. Solutions for the plane (i.e., no Earth curvature) are provided that implement all of the measurements mentioned above. Tests of the retrieval method are provided using computer-simulated data sets. We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. In the absence of measurement errors, quadratic root degeneracy (no source location ambiguity) is shown to exist exactly on the outer sensor baselines for arbitrary non-collinear network geometries. The accuracy of the quadratic planar method is tested with computer-generated data sets. The results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg. We also note some of the advantages and disadvantages of these methods over the nonlinear method of χ² minimization employed by the National Lightning Detection Network (NLDN) and discussed in Cummins et al. (1993, 1995, 1998).
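With four or more arrival times, a planar retrieval of this kind reduces to a linear system: differencing the squared-range equations against a reference station eliminates the quadratic terms in the unknown source position and time. The sketch below illustrates that generic linearization under stated assumptions (station layout, function names, and the least-squares solve are illustrative, not the paper's exact formulation):

```python
import numpy as np

def locate_planar(stations, t_arr, c=3.0e8):
    """Linear planar arrival-time solution for (x, y, t0).

    Differencing c^2 (t_i - t0)^2 = (x - x_i)^2 + (y - y_i)^2 against the
    first station removes the x^2, y^2 and t0^2 terms, leaving a linear
    system solvable with >= 4 stations.
    """
    s = np.asarray(stations, dtype=float)
    t = np.asarray(t_arr, dtype=float)
    x1, y1, t1 = s[0, 0], s[0, 1], t[0]
    A = np.column_stack([
        2.0 * (s[1:, 0] - x1),
        2.0 * (s[1:, 1] - y1),
        -2.0 * c**2 * (t[1:] - t1),
    ])
    b = (s[1:, 0]**2 - x1**2 + s[1:, 1]**2 - y1**2
         - c**2 * (t[1:]**2 - t1**2))
    xyt, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xyt  # estimated x (m), y (m), t0 (s)
```

With exactly three arrival times the system loses a degree of freedom and one is pushed back to the quadratic solution the abstract describes, including its root ambiguities near the outer baselines.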
NASA Astrophysics Data System (ADS)
Ding, Lei; Lai, Yuan; He, Bin
2005-01-01
It is important to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models to represent the head volume conductor. Investigating the performance of LORETA in a realistic geometry head model, as compared with the spherical model, provides useful information for interpreting data obtained with the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model; similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.
NASA Astrophysics Data System (ADS)
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi
2018-06-01
Seismic wave propagation from shallow subduction-zone earthquakes can be strongly affected by 3D heterogeneous structures, such as oceanic water and sedimentary layers with irregular thicknesses. Synthetic waveforms must incorporate these effects so that they reproduce the characteristics of the observed waveforms properly. In this paper, we evaluate the accuracy of synthetic waveforms for small earthquakes in the source area of the 2011 Tohoku-Oki earthquake (MJMA 9.0) at the Japan Trench. We compute the synthetic waveforms on the basis of a land-ocean unified 3D structure model using our heterogeneity, oceanic layer, and topography finite-difference method. In estimating the source parameters, we apply the first-motion augmented moment tensor (FAMT) method that we have recently proposed to minimize biases due to inappropriate source parameters. We find that, among several estimates, only the FAMT solutions are located very near the plate interface, which demonstrates the importance of using a 3D model for ensuring the self-consistency of the structure model, source position, and source mechanisms. Using several different filter passbands, we find that the full waveforms with periods longer than about 10 s can be reproduced well, while the degree of waveform fitting becomes worse for periods shorter than about 10 s. At periods around 4 s, the initial body waveforms can be modeled, but the later large-amplitude surface waves are difficult to reproduce correctly. The degree of waveform fitting depends on the source location, with better fittings for deep sources near land. We further examine the 3D sensitivity kernels: for the period of 12.8 s, the kernel shows a symmetric pattern with respect to the straight path between the source and the station, while for the period of 6.1 s, a curved pattern is obtained. Also, the range of the sensitive area becomes shallower for the latter case. Such a 3D spatial pattern cannot be predicted by 1D Earth models and indicates the strong effects of 3D heterogeneity on short-period (≲10 s) waveforms. Thus, it would be necessary to consider such 3D effects when improving the structure and source models.
NASA Technical Reports Server (NTRS)
Vakhtin, Andrei; Krasnoperov, Lev
2011-01-01
An affordable technology designed to facilitate extensive global atmospheric aerosol measurements has been developed. This lightweight instrument is compatible with newly developed platforms such as tethered balloons, blimps, kites, and even disposable instruments such as dropsondes. The technology is based on detection of light scattered by aerosol particles, with an optical layout that enhances the performance of the laboratory prototype instrument, allowing detection of smaller aerosol particles and improving the accuracy of aerosol particle size measurement. It has been determined that using a focused illumination geometry without any apertures is advantageous over the originally proposed collimated beam/slit geometry (which is supposed to produce uniform illumination over the beam cross-section). First, the illumination source is used more efficiently, which allows detection of smaller aerosol particles. Second, the obtained integral scattered light intensity measured for the particle can be corrected for the beam intensity profile inhomogeneity based on the measured beam intensity profile and the measured particle location. The particle location (coordinates) in the illuminated sample volume is determined from the information contained in the image frame. This procedure considerably improves the accuracy of determination of the aerosol particle size.
Cairney, Scott A; Lindsay, Shane; Sobczak, Justyna M; Paller, Ken A; Gaskell, M Gareth
2016-05-01
To investigate how the effects of targeted memory reactivation (TMR) are influenced by memory accuracy prior to sleep and the presence or absence of direct cue-memory associations. 30 participants associated each of 50 pictures with an unrelated word and then with a screen location in two separate tasks. During picture-location training, each picture was also presented with a semantically related sound. The sounds were therefore directly associated with the picture locations but indirectly associated with the words. During a subsequent nap, half of the sounds were replayed in slow wave sleep (SWS). The effect of TMR on memory for the picture locations (direct cue-memory associations) and picture-word pairs (indirect cue-memory associations) was then examined. TMR reduced overall memory decay for recall of picture locations. Further analyses revealed a benefit of TMR for picture locations recalled with a low degree of accuracy prior to sleep, but not those recalled with a high degree of accuracy. The benefit of TMR for low accuracy memories was predicted by time spent in SWS. There was no benefit of TMR for memory of the picture-word pairs, irrespective of memory accuracy prior to sleep. TMR provides the greatest benefit to memories recalled with a low degree of accuracy prior to sleep. The memory benefits of TMR may also be contingent on direct cue-memory associations. © 2016 Associated Professional Sleep Societies, LLC.
Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George
2017-08-15
A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
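Each TDOA measurement constrains the source to a hyperbola, so the position must be recovered by nonlinear iteration. The sketch below uses a plain Gauss-Newton loop as a simple conventional stand-in for the paper's LPNN scheme (the 2-D setup, reference-sensor convention, and function names are assumptions for illustration):

```python
import numpy as np

def tdoa_locate(sensors, tdoa, x0, c=343.0, iters=50):
    """Gauss-Newton solver for 2-D TDOA source localization.

    sensors : (N, 2) sensor positions; sensor 0 is the TDOA reference.
    tdoa    : N-1 time differences of arrival relative to sensor 0 (s).
    x0      : initial position guess.
    """
    s = np.asarray(sensors, dtype=float)
    x = np.asarray(x0, dtype=float)
    meas = c * np.asarray(tdoa, dtype=float)  # range differences (m)
    for _ in range(iters):
        d = np.linalg.norm(s - x, axis=1)
        r = (d[1:] - d[0]) - meas             # range-difference residuals
        u = (x - s) / d[:, None]              # unit vectors sensor -> source
        J = u[1:] - u[0]                      # Jacobian of (d_i - d_0)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

Like the LPNN formulation, this treats localization as an optimization over the nonlinear range-difference constraints; the neural-network framing in the paper additionally provides the stability analysis the abstract mentions.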
Harold S.J. Zald; Janet L. Ohmann; Heather M. Roberts; Matthew J. Gregory; Emilie B. Henderson; Robert J. McGaughey; Justin Braaten
2014-01-01
This study investigated how lidar-derived vegetation indices, disturbance history from Landsat time series (LTS) imagery, plot location accuracy, and plot size influenced accuracy of statistical spatial models (nearest-neighbor imputation maps) of forest vegetation composition and structure. Nearest-neighbor (NN) imputation maps were developed for 539,000 ha in the...
NASA Astrophysics Data System (ADS)
Nooshiri, N.; Saul, J.; Heimann, S.; Tilmann, F. J.; Dahm, T.
2015-12-01
The use of a 1D velocity model for seismic event location is often associated with significant travel-time residuals. Particularly for regional stations in subduction zones, where the velocity structure strongly deviates from the assumed 1D model, residuals of up to ±10 seconds are observed even for clear arrivals, which leads to strongly biased locations. In fact, mostly due to regional travel-time anomalies, arrival times at regional stations do not match the location obtained with teleseismic picks, and vice versa. If the earthquake is weak and only recorded regionally, or if fast locations based on regional stations are needed, the location may be far off the corresponding teleseismic location. In this case, implementation of travel-time corrections may lead to a reduction of the travel-time residuals at regional stations and, in consequence, significantly improve the relative location accuracy. Here, we have extended the source-specific station terms (SSST) technique to regional and teleseismic distances and adapted the algorithm for probabilistic, non-linear, global-search earthquake location. The method has been applied to specific test regions using P and pP phases from the GEOFON bulletin data for all available station networks. Using this method, a set of timing corrections has been calculated for each station, varying as a function of source position. In this way, an attempt is made to correct for the systematic errors introduced by limitations and inaccuracies in the assumed velocity structure, without solving for a new earth model itself. In this presentation, we draw on examples of the application of this global SSST technique to relocate earthquakes from the Tonga-Fiji subduction zone and from the Chilean margin.
Our results show a considerable decrease of the root-mean-square (RMS) residual in the final earthquake location catalogs, a major reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations, and sharper images of the seismicity compared with the initial locations.
Fiber optic distributed temperature sensing for fire source localization
NASA Astrophysics Data System (ADS)
Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong
2017-08-01
A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fibers were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding the temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
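The two-fibre layout reduces localization to finding one temperature peak per fibre: the peak position along the x-fibre gives one room coordinate and the peak along the y-fibre gives the other. The smoothing-plus-peak-finding step might look like the following sketch (window size, sample spacing, and function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def locate_fire(temp_x, temp_y, spacing=0.5, window=5):
    """Estimate fire coordinates from two orthogonal distributed-temperature
    traces.

    temp_x, temp_y : temperature samples along the two fibres (deg C).
    spacing        : distance between samples along each fibre (m).
    window         : moving-average window used to suppress fluctuations.
    """
    def peak(trace):
        # Smooth the trace with a simple moving average, then take the
        # position of the temperature maximum as the fire coordinate.
        kernel = np.ones(window) / window
        smooth = np.convolve(np.asarray(trace, dtype=float), kernel,
                             mode="same")
        return float(np.argmax(smooth)) * spacing

    return peak(temp_x), peak(temp_y)
```

The smoothing step corresponds to the first of the two mathematical methods in the abstract; it keeps local hot-spot noise from an uneven ceiling from displacing the detected peak.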
Geologic map of Detrital, Hualapai, and Sacramento Valleys and surrounding areas, northwest Arizona
Beard, L. Sue; Kennedy, Jeffrey; Truini, Margot; Felger, Tracey
2011-01-01
A 1:250,000-scale geologic map and report covering the Detrital, Hualapai, and Sacramento valleys in northwest Arizona is presented for the purpose of improving understanding of the geology and geohydrology of the basins beneath those valleys. The map was compiled from existing geologic mapping, augmented by digital photogeologic reconnaissance mapping. The most recent geologic map for the area, and the only digital one, is the 1:1,000,000-scale Geologic Map of Arizona. The larger scale map presented here includes significantly more detailed geology than the Geologic Map of Arizona in terms of accuracy of geologic unit contacts, number of faults, fault type, fault location, and details of Neogene and Quaternary deposits. Many sources were used to compile the geology; the accompanying geodatabase includes a source field in the polygon feature class that lists source references for polygon features. The citations for the source field are included in the reference section.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. 
We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal-oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
Application of terrestrial laser scanning to the development and updating of the base map
NASA Astrophysics Data System (ADS)
Klapa, Przemysław; Mitka, Bartosz
2017-06-01
The base map provides basic information about land to individuals, companies, developers, design engineers, organizations, and government agencies. Its contents include spatial location data for control network points, buildings, land lots, infrastructure facilities, and topographic features. As the primary map of the country, it must be developed in accordance with specific laws and regulations and be continuously updated. The base map is a data source used for the development and updating of derivative maps and other large scale cartographic materials such as thematic or topographic maps. Thanks to the advancement of science and technology, the quality of land surveys carried out by means of terrestrial laser scanning (TLS) matches that of traditional surveying methods in many respects. This paper discusses the potential application of output data from laser scanners (point clouds) to the development and updating of cartographic materials, taking Poland's base map as an example. A few research sites were chosen to present the method and the process of conducting a TLS land survey: a fragment of a residential area, a street, the surroundings of buildings, and an undeveloped area. The entire map that was drawn as a result of the survey was checked by comparing it to a map obtained from PODGiK (pol. Powiatowy Ośrodek Dokumentacji Geodezyjnej i Kartograficznej - Regional Centre for Geodetic and Cartographic Records) and by conducting a field inspection. An accuracy and quality analysis of the conducted fieldwork and deskwork yielded very good results, which provide solid grounds for concluding that cartographic materials based on a TLS point cloud are a reliable source of information about land. The contents of the map created with the obtained point cloud were located in space (x, y, z) with high accuracy. The conducted accuracy analysis and the inspection of the performed works confirmed the high quality of TLS surveys.
The accuracy of determining the location of the various map contents has been estimated at 0.02-0.03 m. The map was developed in conformity with the applicable laws and regulations as well as with best practice requirements.
Acoustic emission testing on an F/A-18 E/F titanium bulkhead
NASA Astrophysics Data System (ADS)
Martin, Christopher A.; Van Way, Craig B.; Lockyer, Allen J.; Kudva, Jayanth N.; Ziola, Steve M.
1995-04-01
Northrop Grumman Corporation recently had an important opportunity to instrument an F/A-18 E/F titanium bulkhead with broadband acoustic emission sensors during a scheduled structural fatigue test. The overall intention of this effort was to investigate the potential for detecting crack propagation using acoustic transmission signals for a large structural component. Key areas of experimentation and experience included (1) acoustic noise characterization, (2) separation of crack signals from extraneous noise, (3) source location accuracy, and (4) methods of acoustic transducer attachment. Fatigue cracking was observed and monitored by strategically placed acoustic emission sensors. The outcome of the testing indicated that accurate source location still remains enigmatic for non-specialist engineering personnel, especially at this level of structural complexity. However, contrary to preconceived expectations, crack events could be readily separated from extraneous noise. A further dividend from the investigation was the close correspondence between frequency-domain waveforms from the bulkhead test specimen and earlier work with thick plates.
Okoniewska, Barbara; Graham, Alecia; Gavrilova, Marina; Wah, Dannel; Gilgen, Jonathan; Coke, Jason; Burden, Jack; Nayyar, Shikha; Kaunda, Joseph; Yergens, Dean; Baylis, Barry; Ghali, William A
2012-01-01
Real-time locating systems (RTLS) have the potential to enhance healthcare systems through the live tracking of assets, patients and staff. This study evaluated a commercially available RTLS system deployed in a clinical setting, with three objectives: (1) assessment of the location accuracy of the technology in a clinical setting; (2) assessment of the value of asset tracking to staff; and (3) assessment of threshold monitoring applications developed for patient tracking and inventory control. Simulated daily activities were monitored by RTLS and compared with direct research team observations. Staff surveys and interviews concerning the system's effectiveness and accuracy were also conducted and analyzed. The study showed only modest location accuracy, and mixed reactions in staff interviews. These findings reveal that the technology needs to be refined further for better specific location accuracy before full-scale implementation can be recommended. PMID:22298566
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L; Ding, G
Purpose: Dose calculation accuracy for the out-of-field dose is important for predicting the dose to organs-at-risk located outside the primary beams. Published investigations evaluating the out-of-field dose calculation accuracy of treatment planning systems (TPS) have focused on low-energy (6 MV) photons. This study evaluates the out-of-field dose calculation accuracy of the AAA algorithm for 15 MV high-energy photon beams. Methods: We used the EGSnrc Monte Carlo (MC) codes to evaluate the AAA algorithm in the Varian Eclipse TPS (v.11). The incident beams start with validated Varian phase-space sources for a TrueBeam linac equipped with a Millennium 120 MLC. Dose comparisons between AAA and MC for CT-based realistic patient treatment plans using VMAT techniques for prostate and lung were performed, and the uncertainties of organ doses predicted by AAA at out-of-field locations were evaluated. Results: The results show that AAA calculations under-estimate doses at the dose level of 1% (or less) of the prescribed dose for CT-based patient treatment plans using VMAT techniques. In regions where the dose is only 1% of the prescribed dose, although AAA under-estimates the out-of-field dose by 30% relative to the local dose, this is only about 0.3% of the prescribed dose. For example, the uncertainty of the calculated organ dose to a liver or kidney located out-of-field is <0.3% of the prescribed dose. Conclusion: For 15 MV high-energy photon beams, very good agreement (<1%) in calculated dose distributions was obtained between AAA and MC. The uncertainty of out-of-field dose calculations predicted by the AAA algorithm for realistic patient VMAT plans is <0.3% of the prescribed dose in regions where the dose relative to the prescribed dose is <1%, although the uncertainties can be much larger relative to local doses. For organs-at-risk located out-of-field, the error of the dose predicted by Eclipse using AAA is negligible.
This work was conducted in part using the resources of Varian research grant VUMC40590-R.
Almendros, J.; Chouet, B.; Dawson, P.
2001-01-01
We present a probabilistic method to locate the source of seismic events using seismic antennas. The method is based on a comparison of the event azimuths and slownesses derived from frequency-slowness analyses of array data, with a slowness vector model. Several slowness vector models are considered including both homogeneous and horizontally layered half-spaces and also a more complex medium representing the actual topography and three-dimensional velocity structure of the region under study. In this latter model the slowness vector is obtained from frequency-slowness analyses of synthetic signals. These signals are generated using the finite difference method and include the effects of topography and velocity structure to reproduce as closely as possible the behavior of the observed wave fields. A comparison of these results with those obtained with a homogeneous half-space demonstrates the importance of structural and topographic effects, which, if ignored, lead to a bias in the source location. We use synthetic seismograms to test the accuracy and stability of the method and to investigate the effect of our choice of probability distributions. We conclude that this location method can provide the source position of shallow events within a complex volcanic structure such as Kilauea Volcano with an error of ±200 m. Copyright 2001 by the American Geophysical Union.
Rewinding the waves: tracking underwater signals to their source.
Kadri, Usama; Crivelli, Davide; Parsons, Wade; Colbourne, Bruce; Ryan, Amanda
2017-10-24
Analysis of data, recorded on March 8th 2014 at the Comprehensive Nuclear-Test-Ban Treaty Organisation's hydroacoustic stations off Cape Leeuwin, Western Australia, and at Diego Garcia, reveals unique pressure signatures that could be associated with objects impacting at the sea surface, such as falling meteorites, or the missing Malaysian aeroplane MH370. To examine the recorded signatures, we carried out experiments with spheres impacting at the surface of a water tank, where we observed almost identical pressure signature structures. While the pressure structure is unique to impacting objects, the evolution of the radiated acoustic waves carries information on the source. Employing acoustic-gravity wave theory, we present an analytical inverse method to retrieve the impact time and location. The solution was validated using field observations of recent earthquakes, where we were able to calculate the event time and location to a satisfactory degree of accuracy. Moreover, numerical validations confirm an error below 0.02% for events at relatively large distances of over 1000 km. The method can be developed to calculate other essential properties such as impact duration and geometry. Besides impacting objects and earthquakes, the method could help in identifying the location of underwater explosions and landslides.
Uncertainty quantification in volumetric Particle Image Velocimetry
NASA Astrophysics Data System (ADS)
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of the reconstructed three-dimensional particle location, which in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position, which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty, which is estimated as an extension of the 2D PIV uncertainty framework. Finally, the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources is also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
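The final propagation step, combining independent uncertainty contributions, is conventionally done in quadrature; a minimal sketch (assuming independent error sources; values hypothetical):

```python
import math

# Combine particle-position uncertainty and 3-D cross-correlation
# uncertainty into an overall measurement uncertainty in quadrature.
def combined_uncertainty(u_position, u_correlation):
    return math.sqrt(u_position ** 2 + u_correlation ** 2)

u = combined_uncertainty(0.3, 0.4)  # hypothetical uncertainties, voxels
print(u)
```

The actual propagation equation in the paper may carry sensitivity coefficients; quadrature is the simplest independent-source case.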
An accuracy assessment of Magellan Very Long Baseline Interferometry (VLBI)
NASA Technical Reports Server (NTRS)
Engelhardt, D. B.; Kronschnabl, G. R.; Border, J. S.
1990-01-01
Very Long Baseline Interferometry (VLBI) measurements of the Magellan spacecraft's angular position and velocity were made from July through September 1989, during the spacecraft's heliocentric flight to Venus. The purpose of this data acquisition and reduction was to verify this data type for operational use before Magellan is inserted into Venus orbit in August 1990. The accuracy of these measurements is shown to be within 20 nanoradians in angular position, and within 5 picoradians/sec in angular velocity. The media effects and their calibrations are quantified; the wet fluctuating troposphere is the dominant source of measurement error for angular velocity. The charged particle effect is completely calibrated with S- and X-band dual-frequency calibrations. Increasing the accuracy of the Earth platform model parameters, by using VLBI-derived tracking station locations consistent with the planetary ephemeris frame, and by including high-frequency Earth tidal terms in the Earth rotation model, added a few nanoradians of improvement to the angular position measurements. Angular velocity measurements were insensitive to these Earth platform modelling improvements.
Improvements on the accuracy of beam bugs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.J.; Fessenden, T.
1998-08-17
At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. Beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker, and concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.
Bauer, Timothy J
2013-06-15
The Jack Rabbit Test Program was sponsored in April and May 2010 by the Department of Homeland Security Transportation Security Administration to generate source data for large releases of chlorine and ammonia from transport tanks. In addition to a variety of data types measured at the release location, concentration-versus-time data were measured using sensors at distances up to 500 m from the tank. Release data were used to create accurate representations of the vapor flux versus time for the ten releases. This study was conducted to determine the importance of source terms and meteorological conditions in predicting downwind concentrations, and the accuracy that can be obtained in those predictions. Each source representation was entered into an atmospheric transport and dispersion model using simplifying assumptions regarding the source characterization and meteorological conditions, and statistics for cloud duration and concentration at the sensor locations were calculated. A detailed characterization for one of the chlorine releases predicted 37% of concentration values within a factor of two, but cannot be considered representative of all the trials. Predictions of toxic effects at 200 m are relevant to incidents involving 1-ton chlorine tanks commonly used in parts of the United States and internationally. Published by Elsevier B.V.
Pinder, John E; Rowan, David J; Rasmussen, Joseph B; Smith, Jim T; Hinton, Thomas G; Whicker, F W
2014-08-01
Data from published studies and World Wide Web sources were combined to produce and test a regression model to predict Cs concentration ratios for freshwater fish species. The accuracies of predicted concentration ratios, which were computed using 1) species trophic levels obtained from random resampling of known food items and 2) K concentrations in the water for 207 fish from 44 species and 43 locations, were tested against independent observations of ratios for 57 fish from 17 species from 25 locations. Accuracy was assessed as the percent of observed-to-predicted ratios within factors of 2 or 3. Conservatism, expressed as the lack of underprediction, was assessed as the percent of observed-to-predicted ratios that were less than 2 or less than 3. The model's median observed-to-predicted ratio was 1.26, which was not significantly different from 1, and 50% of the ratios were between 0.73 and 1.85. The percentages of ratios within factors of 2 or 3 were 67 and 82%, respectively. The percentages of ratios that were <2 or <3 were 79 and 88%, respectively. An example for Perca fluviatilis demonstrated that increased prediction accuracy could be obtained when more detailed knowledge of diet was available to estimate trophic level. Copyright © 2014 Elsevier Ltd. All rights reserved.
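The factor-of-2/factor-of-3 accuracy and conservatism metrics described above can be computed as in this sketch (the ratios are hypothetical, not the study's data):

```python
# Observed/predicted concentration ratios for a set of fish (hypothetical).
ratios = [0.6, 0.9, 1.26, 1.8, 2.5, 3.5]

# Accuracy: fraction of ratios within a factor of 2 (or 3) of unity.
within_2 = sum(0.5 <= r <= 2.0 for r in ratios) / len(ratios)
within_3 = sum(1 / 3 <= r <= 3.0 for r in ratios) / len(ratios)

# Conservatism: fraction not underpredicted by more than a factor of 2.
conservative_2 = sum(r < 2.0 for r in ratios) / len(ratios)

print(within_2, within_3, conservative_2)
```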
Semantic Location Extraction from Crowdsourced Data
NASA Astrophysics Data System (ADS)
Koswatte, S.; Mcdougall, K.; Liu, X.
2016-06-01
Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency, and abundance are some of the key reasons for this high interest. Conversely, quality issues like incompleteness, credibility and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. Also, this recorded location is mostly related to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments, in addition to the recorded location (which is generated by means of the mobile device's GPS or the mobile communication network). This study attempts to semantically extract the CSD location information with the help of an ontological gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract the location information with the support of the Queensland Gazetteer, which was converted to an ontological gazetteer, and a global gazetteer. Some preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification of CSD and the process of location information extraction.
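A minimal sketch of gazetteer-based place-name extraction of the kind described (the gazetteer entries and coordinates are toy assumptions, not the Queensland Gazetteer):

```python
# Toy gazetteer mapping lowercase place names to (lat, lon).
gazetteer = {"brisbane": (-27.47, 153.03), "ipswich": (-27.61, 152.76)}

def extract_locations(text):
    """Return gazetteer matches found in a free-text message."""
    tokens = text.lower().replace(",", " ").split()
    return {t: gazetteer[t] for t in tokens if t in gazetteer}

tweet = "Flooding reported near Ipswich, water rising fast"
print(extract_locations(tweet))
```

An ontological gazetteer extends this lookup with semantic relations (e.g. suburb-of, synonym-of) so that indirect or descriptive place references can also be resolved.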
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for the accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. The BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as Python/Fortran code, demonstrates high performance on a set of model and real data.
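The BISL idea of evaluating a product of per-station likelihoods over candidate source locations can be illustrated with a toy 1-D grid search (a Gaussian picking-time likelihood is assumed, the origin time is taken as known, and all numbers are hypothetical; the real method also integrates over celerity and uses 2-D grids):

```python
import math

stations = [0.0, 100.0]      # station positions along a line, km
arrivals = [100.0, 233.3]    # observed arrival times, s (hypothetical picks)
celerity = 0.3               # assumed propagation celerity, km/s
sigma_t = 2.0                # picking-time standard deviation, s

def posterior(x):
    """Unnormalized posterior for a candidate source position x (km)."""
    p = 1.0
    for s, t in zip(stations, arrivals):
        predicted = abs(x - s) / celerity
        p *= math.exp(-0.5 * ((t - predicted) / sigma_t) ** 2)
    return p

grid = [i * 1.0 for i in range(101)]   # candidate positions 0..100 km
best = max(grid, key=posterior)
print(best)
```

The closed-form speedup described in the abstract replaces this numerical evaluation with exact integrals expressed via standard functions.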
Ferreira, Ana Paula A; Póvoa, Luciana C; Zanier, José F C; Ferreira, Arthur S
2017-02-01
The aim of this study was to assess the thorax-rib static method (TRSM), a palpation method for locating the seventh cervical spinous process (C7SP), and to report clinical data on the accuracy of this method and that of the neck flexion-extension method (FEM), using radiography as the gold standard. A single-blinded, cross-sectional diagnostic accuracy study was conducted. One hundred and one participants from a primary-to-tertiary health care center (63 men, 56 ± 17 years of age) had their neck palpated using the FEM and the TRSM. A single examiner performed both the FEM and TRSM in a random sequence. Radiopaque markers were placed at each location with the aid of an ultraviolet lamp. Participants underwent chest radiography for assessment of the superimposed inner body structure, which was located by using either the FEM or the TRSM. Accuracy in identifying the C7SP was 18% and 33% (P = .013) with use of the FEM and the TRSM, respectively. The cumulative accuracy considering both caudal and cephalic directions (C7SP ± 1SP) increased to 58% and 81% (P = .001) with use of the FEM and the TRSM, respectively. Age had a significant effect on the accuracy of the FEM (P = .027) but not on the accuracy of the TRSM (P = .939). Sex, body mass, body height, and body mass index had no significant effects on the accuracy of either the FEM (P = .209 or higher) or the TRSM (P = .265 or higher). The TRSM located the C7SP more accurately than the FEM at any given level of anatomic detail, although both still underperformed in terms of acceptable accuracy for a clinical setting. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Sugioka, H.; Suyehiro, K.; Shinohara, M.
2009-12-01
The hydroacoustic monitoring by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) verification system utilizes hydrophone stations and seismic stations called T-phase stations for worldwide detection. Signals of natural origin include those from earthquakes, submarine volcanic eruptions, or whale calls; among artificial sources there are non-nuclear explosions and air-gun shots. It is important for the IMS system to detect and locate hydroacoustic events with sufficient accuracy and to correctly characterize the signals and identify the source. As there are a number of seafloor cable networks operated offshore of the Japanese islands, basically facing the Pacific Ocean, for monitoring regional seismicity, the data from these stations (pressure gauges, hydrophones and seismic sensors) may be utilized to verify and increase the capability of the IMS. We use these data to compare selected event parameters with those obtained by the IMS for the Pacific in the period from 2004 to the present. These anomalous examples, and also dynamite shots used for seismic crustal structure studies and other natural sources, will be presented in order to help improve the IMS verification capabilities for detection, location and characterization of anomalous signals. The seafloor cable networks, composed of three hydrophones and six seismometers, together with a temporary dense seismic array, detected and located hydroacoustic events offshore of the Japanese islands on 12 March 2008, which had been reported by the IMS. We detected not only the hydroacoustic waves reverberating between the sea surface and the sea bottom but also the seismic waves going through the crust associated with the events. The determined source of the seismic waves is almost coincident with that of the hydroacoustic waves, suggesting that the seismic waves are converted very close to the origin of the hydroacoustic source.
We also detected, on 16 March 2009, signals very similar to those associated with the 12 March 2008 event.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of the L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. the weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and shows promise for providing high-accuracy estimation of cortical source extents.
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
NASA Astrophysics Data System (ADS)
Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.
2017-03-01
Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal, and then a numerical reconstruction algorithm is used to locate the sources and estimate their sizes. However, due to the strong light scattering properties of biological tissues, the resolution is very limited (around a few millimetres). Therefore, obtaining accurate information about the pathology is complicated. We propose a combined ultrasound/optics approach to improve the accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives: first, to obtain a pure acoustic image, which provides structural information about the sample; and second, to alter the light emission from the bioluminescent sources embedded inside the sample, which is monitored using a high-speed optical detector (e.g. a photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction as compared to the image generated using only BLI data.
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
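Tabulating an empirical confidence envelope from Monte Carlo localization errors reduces to taking a percentile of the error distribution, roughly as follows (the error samples here are synthetic stand-ins for the realization results):

```python
import random

# Synthetic localization errors from many Monte Carlo realizations
# (hypothetical: |N(0, 10 m)| draws, not the study's simulations).
random.seed(0)
errors = [abs(random.gauss(0.0, 10.0)) for _ in range(1000)]

# The 90% confidence envelope is the radius containing 90% of the errors.
errors.sort()
envelope_90 = errors[int(0.9 * len(errors)) - 1]
print(envelope_90)
```

Repeating this tabulation for each (number of wells, model fidelity) combination yields the envelope-vs-data-quantity curves the abstract describes.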
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-02
...'s CMRS E911 location requirements without ensuring that time is taken to study location technologies... accuracy requirements on interconnected VoIP service without further study.'' A number of commenters... study the technical, operational and economic issues related to the provision of ALI for interconnected...
Virtual targeting in three-dimensional space with sound and light interference
NASA Astrophysics Data System (ADS)
Chua, Florence B.; DeMarco, Robert M.; Bergen, Michael T.; Short, Kenneth R.; Servatius, Richard J.
2006-05-01
Law enforcement and the military are critically concerned with the targeting and firing accuracy of opponents. Stimuli which impede opponent targeting and firing accuracy can be incorporated into defense systems. An automated virtual firing range was developed to assess human targeting accuracy under conditions of sound and light interference, while avoiding dangers associated with live fire. This system has the ability to quantify sound and light interference effects on targeting and firing accuracy in three dimensions. This was achieved by development of a hardware and software system that presents the subject with a sound or light target, preceded by sound or light interference. Sony Xplod™ 4-way speakers present sound interference and sound targeting. The Martin® MiniMAC™ Profile operates as a source of light interference, while a red laser light serves as a target. A tracking system was created to monitor toy gun movement and firing in three-dimensional space. Data are collected via the Ascension® Flock of Birds™ tracking system and a custom National Instruments® LabVIEW™ 7.0 program to monitor gun movement and firing. A test protocol examined system parameters. Results confirm that the system enables tracking of virtual shots from a fired simulation gun to determine shot accuracy and location in three dimensions.
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M
2008-12-09
The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to those of 67Ga and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a liquid 67Ga source with collimation. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
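A Bland-Altman comparison of two measurement methods reduces to the bias and limits of agreement of their paired differences, as in this sketch (the paired values are hypothetical, not the study's measurements):

```python
import statistics

# Hypothetical paired registration errors (mm) from the two methods.
ba133 = [0.50, 0.62, 0.41, 0.55, 0.47, 0.58]
ga67  = [0.48, 0.65, 0.44, 0.52, 0.50, 0.55]

diffs = [a - b for a, b in zip(ba133, ga67)]
bias = statistics.mean(diffs)              # systematic offset between methods
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd) # 95% limits of agreement

print(bias, loa)
```

Interchangeability is supported when the bias is negligible and the limits of agreement are narrower than the clinically acceptable difference.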
Dual-head gamma camera system for intraoperative localization of radioactive seeds
NASA Astrophysics Data System (ADS)
Arsenali, B.; de Jong, H. W. A. M.; Viergever, M. A.; Dickerscheid, D. B. M.; Beijst, C.; Gilhuijs, K. G. A.
2015-10-01
Breast-conserving surgery is a standard option for the treatment of patients with early-stage breast cancer. This form of surgery may result in incomplete excision of the tumor. Iodine-125 labeled titanium seeds are currently used in clinical practice to reduce the number of incomplete excisions. It seems likely that the number of incomplete excisions can be reduced even further if intraoperative information about the location of the radioactive seed is combined with preoperative information about the extent of the tumor. This combination is possible if the location of the radioactive seed is established in a world coordinate system that can be linked to the (preoperative) image coordinate system. With this in mind, we propose a radioactive seed localization system composed of two static ceiling-suspended gamma camera heads and two parallel-hole collimators. Physical experiments and computer simulations mimicking realistic clinical situations were performed to estimate the localization accuracy (defined as trueness and precision) of the proposed system with respect to collimator-source distance (ranging between 50 cm and 100 cm) and imaging time (ranging between 1 s and 10 s). The goal of the study was to determine whether or not a trueness of 5 mm can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these specifications were defined by a group of dedicated breast cancer surgeons). The results from the experiments indicate that the location of the radioactive seed can be established with an accuracy of 1.6 mm ± 0.6 mm if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these experiments were performed with a 4.5 cm thick block phantom). Furthermore, the results from the simulations indicate that a trueness of 3.2 mm or less can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (this trueness was achieved for all 14 breast phantoms used in this study).
Based on these results we conclude that the proposed system can be a valuable tool for (real-time) intraoperative breast cancer localization.
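The trueness/precision split used above follows ISO 5725-style accuracy terminology: trueness is the systematic closeness of the estimates to the true seed position, precision is the spread over repeated measurements. A minimal sketch, with simulated localization noise standing in for real gamma-camera data (the 1 mm per-axis noise level is an assumption, not a system specification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repeated localizations of a seed at a known true position (mm).
true_pos = np.array([10.0, -5.0, 30.0])
estimates = true_pos + rng.normal(0.0, 1.0, size=(200, 3))  # simulated system noise

errors = np.linalg.norm(estimates - true_pos, axis=1)  # per-trial Euclidean error
trueness = errors.mean()          # systematic closeness to the true position
precision = errors.std(ddof=1)    # spread of repeated measurements

print(f"trueness = {trueness:.2f} mm, precision = {precision:.2f} mm")
```

With 1 mm per-axis Gaussian noise the mean 3-D error lands near the 1.6 mm figure reported above, which is why that noise level was chosen for the sketch.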
Google Earth elevation data extraction and accuracy assessment for transportation applications
Wang, Yinsong; Zou, Yajie; Henrickson, Kristian; Wang, Yinhai; Tang, Jinjun; Park, Byung-Jung
2017-01-01
Roadway elevation data are critical for a variety of transportation analyses. However, it has been challenging to obtain such data and most roadway GIS databases do not include them. This paper intends to address this need by proposing a method to extract roadway elevation data from Google Earth (GE) for transportation applications. A comprehensive accuracy assessment of the GE-extracted elevation data is conducted for the area of the conterminous USA. The GE elevation data were compared with ground truth data from nationwide GPS benchmarks and roadway monuments from six states in the conterminous USA. This study also compares the GE elevation data with the elevation raster data from the U.S. Geological Survey National Elevation Dataset (USGS NED), which is a widely used data source for extracting roadway elevation. Mean absolute error (MAE) and root mean squared error (RMSE) are used to assess the accuracy, and the test results show that the MAE, RMSE, and standard deviation of GE roadway elevation error are 1.32 meters, 2.27 meters, and 2.27 meters, respectively. Finally, the proposed extraction method was implemented and validated for the following three scenarios: (1) extracting roadway elevation differentiated by direction, (2) multi-layered roadway recognition in freeway segments, and (3) slope segmentation and grade calculation in freeway segments. The methodology validation results indicate that the proposed extraction method can locate the extraction route accurately, recognize multi-layered roadway sections, and segment the extracted route by grade automatically. Overall, it is found that the high-accuracy elevation data available from GE provide a reliable data source for various transportation applications. PMID:28445480
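The MAE and RMSE figures reported above are straightforward to compute once GE-extracted and benchmark elevations have been paired; a minimal sketch with hypothetical elevation values (the numbers are illustrative, not from the study):

```python
import numpy as np

def mae(est, truth):
    """Mean absolute error between estimated and reference values."""
    return np.mean(np.abs(est - truth))

def rmse(est, truth):
    """Root mean squared error between estimated and reference values."""
    return np.sqrt(np.mean((est - truth) ** 2))

# Hypothetical GE-extracted vs. benchmark elevations (meters).
truth = np.array([120.0, 88.5, 251.2, 33.7, 190.4])
ge    = np.array([121.1, 87.0, 253.9, 32.9, 188.6])

print(f"MAE = {mae(ge, truth):.2f} m, RMSE = {rmse(ge, truth):.2f} m")
```

RMSE weights large errors more heavily than MAE, which is why the study reports both: a large RMSE/MAE ratio flags occasional big elevation blunders rather than uniform noise.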
Gao, Lin; Li, Chang-chun; Wang, Bao-shan; Yang Gui-jun; Wang, Lei; Fu, Kui
2016-01-01
With the innovation of remote sensing technology, remote sensing data sources are more and more abundant. The main aim of this study was to analyze the retrieval accuracy of soybean leaf area index (LAI) based on multi-source remote sensing data including ground hyperspectral, unmanned aerial vehicle (UAV) multispectral, and Gaofen-1 (GF-1) WFV data. Ratio vegetation index (RVI), normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), difference vegetation index (DVI), and triangle vegetation index (TVI) were used to establish LAI retrieval models, respectively. The models with the highest calibration accuracy were used in the validation. The capability of these three kinds of remote sensing data for LAI retrieval was assessed according to the estimation accuracy of the models. The experimental results showed that the models based on the ground hyperspectral and UAV multispectral data achieved better estimation accuracy (R² was more than 0.69 and RMSE was less than 0.4 at the 0.01 significance level), compared with the model based on WFV data. The RVI logarithmic model based on ground hyperspectral data was slightly superior to the NDVI linear model based on UAV multispectral data (the differences in E(A), R², and RMSE were 0.3%, 0.04, and 0.006, respectively). The models based on WFV data had the lowest estimation accuracy, with R² less than 0.30 and RMSE more than 0.70. The effects of sensor spectral response characteristics, sensor geometric location, and spatial resolution on soybean LAI retrieval were discussed. The results demonstrated that ground hyperspectral data were advantageous but not prominently so over traditional multispectral data in soybean LAI retrieval. WFV imagery with 16 m spatial resolution could not meet the requirements of crop growth monitoring at field scale.
Given the need to ensure high precision in retrieving soybean LAI while working efficiently, acquiring agricultural information by UAV remote sensing can be regarded as an optimal approach. Therefore, as more and more remote sensing information sources become available, agricultural UAV remote sensing could become an important information resource for guiding field-scale crop management and provide more scientific and accurate information for precision agriculture research.
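The two-band indices behind these retrieval models follow standard definitions; a small sketch computing RVI, NDVI, DVI, and SAVI from hypothetical red/NIR reflectances (TVI is omitted because several variants exist in the literature; L = 0.5 is the usual SAVI soil-adjustment default):

```python
import numpy as np

def vegetation_indices(nir, red, L=0.5):
    """Standard two-band vegetation indices used in LAI retrieval models."""
    rvi  = nir / red                               # ratio vegetation index
    ndvi = (nir - red) / (nir + red)               # normalized difference
    dvi  = nir - red                               # simple difference
    savi = (1 + L) * (nir - red) / (nir + red + L) # soil-adjusted
    return rvi, ndvi, dvi, savi

# Hypothetical canopy reflectances for a healthy soybean canopy.
nir, red = 0.45, 0.05
rvi, ndvi, dvi, savi = vegetation_indices(nir, red)
print(rvi, ndvi, dvi, savi)
```

In practice each index would be computed per pixel or per plot and regressed (linearly or logarithmically, as in the study) against ground-measured LAI.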
NASA Technical Reports Server (NTRS)
Aldcroft, T.; Karovska, M.; Cresitello-Dittmar, M.; Cameron, R.
2000-01-01
The aspect system of the Chandra Observatory plays a key role in realizing the full potential of Chandra's x-ray optics and detectors. To achieve the highest spatial and spectral resolution (for grating observations), an accurate post-facto time history of the spacecraft attitude and internal alignment is needed. The CXC has developed a suite of tools which process sensor data from the aspect camera assembly and gyroscopes, and produce the spacecraft aspect solution. In this poster, the design of the aspect pipeline software is briefly described, followed by details of aspect system performance during the first eight months of flight. The two key metrics of aspect performance are: image reconstruction accuracy, which measures the x-ray image blurring introduced by aspect; and celestial location, which is the accuracy of detected source positions in absolute sky coordinates.
The detection error of thermal test low-frequency cable based on M sequence correlation algorithm
NASA Astrophysics Data System (ADS)
Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin
2018-04-01
The problem of low accuracy and low efficiency in off-line detection of thermal test low-frequency cable faults can be solved by designing a cable fault detection system based on an FPGA that exports an M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware on-line monitoring setup are discussed in this paper. Test data show that the detection error increases with the fault distance along the thermal test low-frequency cable.
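The core of an SSTDR-style detector is cross-correlating the received signal with the transmitted M-sequence: the sharp circular autocorrelation peak of a maximal-length sequence marks the round-trip delay to the fault. A minimal sketch, with a 7-chip LFSR sequence and a noise-free delayed echo standing in for a real cable reflection:

```python
import numpy as np

def m_sequence(taps=(3, 1), nbits=3, seed=0b001):
    """Maximal-length sequence from a Fibonacci LFSR (period 2**nbits - 1).
    taps (3, 1) correspond to the primitive polynomial x^3 + x + 1."""
    state, out = seed, []
    for _ in range(2 ** nbits - 1):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return np.array(out)

seq = 2 * m_sequence() - 1           # map {0,1} -> {-1,+1} chips
delay = 3                            # hypothetical round-trip delay (samples)
received = np.roll(seq, delay)       # idealized echo from the fault
corr = np.array([np.dot(received, np.roll(seq, k)) for k in range(len(seq))])
print(int(np.argmax(corr)))          # -> 3, the estimated delay
```

For a real cable the peak lag would be converted to distance via the propagation velocity; longer sequences raise the peak-to-sidelobe ratio (off-peak circular autocorrelation of an M-sequence is exactly -1), which is what makes the method robust on a live, noisy line.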
Shokraneh, Farhad; Adams, Clive E
2017-08-04
Data extraction is one of the most time-consuming tasks in performing a systematic review. Extraction is often onto some sort of form. Sharing completed forms can be used to check the quality and accuracy of extraction, or to recycle data to other researchers for updating. However, validating each piece of extracted data is time-consuming, and linking it to its source is problematic. In this methodology paper, we summarize three methods for reporting the location of data in original full-text reports, comparing their advantages and disadvantages.
Approach to identifying pollutant source and matching flow field
NASA Astrophysics Data System (ADS)
Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang
2013-07-01
Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification, however, is one of the difficult inverse problems. This paper carries out some studies on this issue. An approach using noisy single-sensor information was developed to identify a sudden continuous emission of a trace pollutant source in a steady velocity field. This approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical measured concentration sequences at the sensor position, which are obtained from multiple hypotheses over the three source parameters. Source identification is then realized by globally searching for the optimal values with the objective function of maximum location probability. Considering the large computational load resulting from this global search, a local fine-mesh source search method based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. Studies have shown that the flow field has a very important influence on source identification. Therefore, we also discuss the impact of non-matching flow fields with estimation deviations on identification. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. To verify the practical application of the above method, an experimental system simulating a sudden pollution process in a steady flow field was set up, and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters (position, emission strength, and initial emission time) of the pollutant source in the experiment can be estimated by using the method for matching the flow field and identifying the source.
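The coarse-then-fine global search described above can be sketched with a toy 1-D advection-diffusion forward model: hypothesize (position, strength, release time), score each hypothesis against the single-sensor concentration sequence, then refine the grid around the coarse optimum. All model parameters below (diffusivity, flow speed, sensor position) are illustrative assumptions, and a least-squares misfit stands in for the paper's location-probability objective:

```python
import numpy as np

D, u, x0 = 0.5, 1.0, 10.0     # diffusivity, flow speed, sensor position (assumed)

def conc(t, xs, q, t0):
    """1-D advection-diffusion response at the sensor to an instantaneous release."""
    tau = np.maximum(t - t0, 1e-9)
    return q / np.sqrt(4 * np.pi * D * tau) * np.exp(-(x0 - xs - u * tau) ** 2 / (4 * D * tau))

t = np.linspace(0.1, 20, 200)
true = (2.0, 5.0, 1.0)                                     # (xs, q, t0) to recover
meas = conc(t, *true) + np.random.default_rng(1).normal(0, 1e-3, t.size)

def search(xs_grid, q_grid, t0_grid):
    """Exhaustive grid search minimizing the squared misfit to the measurements."""
    best, best_err = None, np.inf
    for xs in xs_grid:
        for q in q_grid:
            for t0 in t0_grid:
                err = np.sum((conc(t, xs, q, t0) - meas) ** 2)
                if err < best_err:
                    best, best_err = (xs, q, t0), err
    return best

# Coarse pass over the whole domain, then a fine pass around the coarse optimum.
xs, q, t0 = search(np.arange(0, 8, 1.0), np.arange(1, 10, 1.0), np.arange(0, 4, 1.0))
xs, q, t0 = search(np.linspace(xs - 1, xs + 1, 21),
                   np.linspace(q - 1, q + 1, 21),
                   np.linspace(t0 - 1, t0 + 1, 21))
print(xs, q, t0)
```

The two-pass structure is the point: the fine grid is only evaluated in the neighborhood the coarse pass selects, which is what keeps the paper's global search tractable.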
Teleseismic P wave coda from oceanic trench and other bathymetric features
NASA Astrophysics Data System (ADS)
Wu, W.; Ni, S.
2012-12-01
Teleseismic P waves are essential for studying the rupture processes of great earthquakes, whether in the back-projection method or in finite fault inversion involving quantitative waveform modeling. In these studies, P waves are assumed to be direct P waves generated by localized patches of the ruptured fault. However, for some oceanic earthquakes occurring near subduction trenches or mid-ocean ridges, strong signals between P and PP are often observed at teleseismic distances. These P wave coda signals show strong coherence, and their amplitudes are sometimes comparable with those of the direct P wave, or even higher in some frequency bands. With array analysis, we find that the coda's slowness is very close to that of the direct P wave, suggesting that the coda is generated near the source region. As these earthquakes occur near trenches or mid-ocean ridges, both of which feature rapid variations in bathymetry, the coda waves are most probably generated by surface waves or S waves scattered at the irregular bathymetry. We then use realistic bathymetry data to calculate 3D synthetics, and the coda is well predicted by the synthetics, confirming that topography/bathymetry is the main source of the coda. The coda waves are strong enough to affect imaging of the rupture processes of oceanic earthquakes, so the topography/bathymetry effect should be taken into account. However, these strong coda waves can also be utilized to locate oceanic earthquakes. The 3D synthetics demonstrate that the coda waves depend on both the specific bathymetry and the location of the earthquake. Given known bathymetry, the earthquake location can be constrained by the coda; for example, the distance between the trench and the earthquake can be determined from the relative arrival times of the P wave and its trench-generated coda.
To locate earthquakes using the bathymetry, it is indispensable to compute 3D synthetics for all possible horizontal locations and depths of the earthquakes. However, the computation would be very expensive if numerical simulation were used for the whole medium. Considering that the complicated structure is confined to the source region, we apply ray theory to propagate the full wavefield from a spectral-element simulation of the source region out to teleseismic distances, obtaining the teleseismic P waves. With this approach, computational efficiency is greatly improved and relocation of the earthquake can be completed more efficiently. The relocation accuracy can be as fine as 10 km for earthquakes near the trench. This provides another, sometimes the most favorable, method to locate oceanic earthquakes with ground-truth accuracy.
NASA Astrophysics Data System (ADS)
Lanyau, T.; Hamzah, N. S.; Jalal Bayar, A. M.; Karim, J. Abdul; Phongsakorn, P. K.; Suhaimi, K. Mohammad; Hashim, Z.; Razi, H. Md; Fazli, Z. Mohd; Ligam, A. S.; Mustafa, M. K. A.
2018-01-01
Power calibration is one of the important aspects of safe reactor operation. In RTP, the calorimetric method has been applied to reactor power calibration. This method involves measurement of the water temperature in the RTP tank. The water volume and the location of the temperature measurement may play an important role in the accuracy of the measurement. In this study, the effects of water volume changes and thermocouple location on the power calibration accuracy have been analyzed. The changes in water volume are controlled by varying the water level in the reactor tank, which is measured by an ultrasonic device. Temperature measurements were made with thermocouples placed at three different locations. The accuracy of the temperature trends from the various measurement conditions has been determined and is discussed in this paper.
Alant, Erna; Kolatsis, Anna; Lilienfeld, Margi
2010-03-01
An important aspect of AAC concerns the user's ability to locate an aided visual symbol on a communication display in order to facilitate meaningful interaction with partners. Recent studies have suggested that the use of differently colored symbols may influence the visual search process, and that this, in turn, will influence the speed and accuracy of symbol location. This study examined the role of color in the rate and accuracy of identifying symbols on an 8-location overlay through the use of 3 color conditions (same, mixed, and unique). Sixty typically developing preschool children were exposed to two different sequential exposures (Set 1 and Set 2). Participants searched for a target stimulus (either meaningful symbols or arbitrary forms) in a stimulus array. Findings indicated that the sequential exposures (orderings) impacted both time and accuracy for both types of symbols within specific instances.
Kim, Minyoung; Choi, Christopher Y; Gerba, Charles P
2013-09-01
Assuming a scenario of a hypothetical pathogenic outbreak, this study aimed to develop a decision-support model for identifying the location of the pathogenic intrusion as a means of facilitating rapid detection and efficient containment. The developed model was applied to a real sewer system (the Campbell wash basin in Tucson, AZ) in order to validate its feasibility. The basin under investigation was divided into 14 sub-basins. The geometric information associated with the sewer network was digitized using GIS (Geographic Information System) and imported into an urban sewer network simulation model to generate microbial breakthrough curves at the outlet. A pre-defined amount of Escherichia coli (E. coli), an indicator of fecal coliform bacteria, was hypothetically introduced into 56 manholes (four in each sub-basin, chosen at random), and a total of 56 breakthrough curves of E. coli were generated using the simulation model at the outlet. Transport patterns were classified depending upon the location of the injection site (manhole), various known characteristics (peak concentration and time, pipe length, travel time, etc.) extracted from each E. coli breakthrough curve, and the layout of the sewer network. Using this information, we back-predicted the injection location once an E. coli intrusion was detected at a monitoring site, using Artificial Neural Networks (ANNs). The results showed that the ANNs identified the location of the injection sites with 57% accuracy, correctly recognizing eight out of fourteen sub-basins while relying on data from a single detection sensor. Increasing the available sensors within the basin significantly improved the accuracy of the simulation results (from 57% to 100%). Copyright © 2013 Elsevier Ltd. All rights reserved.
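The feature-based back-prediction idea above (breakthrough-curve characteristics mapped to an injection sub-basin) can be illustrated with a deliberately simple stand-in for the ANN: a nearest-centroid lookup over per-sub-basin feature vectors. The features and values below are invented for illustration and do not come from the study:

```python
import numpy as np

# Hypothetical feature vectors per injection sub-basin:
# (peak concentration, peak time, travel time) at the monitoring outlet.
train_feats = np.array([[8.0, 12.0, 10.0],   # sub-basin A
                        [3.0, 30.0, 28.0],   # sub-basin B
                        [5.5, 20.0, 18.0]])  # sub-basin C
labels = ["A", "B", "C"]

def predict(feat):
    """Assign a detected breakthrough curve to the nearest sub-basin centroid."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    return labels[int(np.argmin(d))]

print(predict(np.array([3.2, 29.0, 27.5])))   # -> B
```

An ANN, as used in the paper, learns a nonlinear version of this same mapping and can weigh features unevenly; the sketch only shows why distinctive breakthrough-curve shapes make the injection site recoverable at all.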
Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.
Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd
2018-05-06
The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution. Existing knowledge: Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information.
New knowledge added by this article Preschool children track sources' accuracy across communication mediums - from verbal to text-based modalities and vice versa. Children link the reliability of text-based sources to the reliability of the author. © 2018 The British Psychological Society.
NASA Technical Reports Server (NTRS)
Aprile, Elena
1994-01-01
An instrument is described which will provide a direct image of gamma-ray line or continuum sources in the energy range 300 keV to 10 MeV. The use of this instrument to study the celestial distribution of the ²⁶Al isotope by observing the 1.809 MeV deexcitation gamma-ray line is illustrated. The source location accuracy is 2' or better. The imaging telescope is a liquid xenon time projection chamber coupled with a coded aperture mask (LXe-CAT). This instrument will confirm and extend the COMPTEL observations from the Compton Gamma-Ray Observatory (CGRO) with an improved capability for identifying the actual Galactic source or sources of ²⁶Al, which are currently not known with certainty. Sources currently under consideration include red giants on the asymptotic giant branch (AGB), novae, Type Ib or Type II supernovae, Wolf-Rayet stars, and cosmic rays interacting in molecular clouds. The instrument could also identify a local source of the celestial 1.809 MeV gamma-ray line, such as a recent nearby supernova.
Advances in Inner Magnetosphere Passive and Active Wave Research
NASA Technical Reports Server (NTRS)
Green, James L.; Fung, Shing F.
2004-01-01
This review identifies a number of the principal research advancements that have occurred over the last five years in the study of electromagnetic (EM) waves in the Earth's inner magnetosphere. The observations used in this study are from the plasma wave instruments and radio sounders on Cluster, IMAGE, Geotail, Wind, Polar, Interball, and others. The data from passive plasma wave instruments have led to a number of advances such as: determining the origin and importance of whistler mode waves in the plasmasphere, discovery of the source of kilometric continuum radiation, mapping AKR source regions with "pinpoint" accuracy, and correlating the AKR source location with dipole tilt angle. Active magnetospheric wave experiments have shown that long range ducted and direct echoes can be used to obtain the density distribution of electrons in the polar cap and along plasmaspheric field lines, providing key information on plasmaspheric filling rates and polar cap outflows.
Equivalent radiation source of 3D package for electromagnetic characteristics analysis
NASA Astrophysics Data System (ADS)
Li, Jun; Wei, Xingchang; Shu, Yufei
2017-10-01
An equivalent radiation source method is proposed in this paper to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs). The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and a differential evolution optimization algorithm is used to extract the locations, orientations, and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with measured data. Good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Natural Science Foundation of China (No. 61274110).
Christin, Sylvain; St-Laurent, Martin-Hugues; Berteaux, Dominique
2015-01-01
Animal tracking through Argos satellite telemetry has enormous potential to test hypotheses in animal behavior, evolutionary ecology, or conservation biology. Yet the applicability of this technique cannot be fully assessed because no clear picture exists as to the conditions influencing the accuracy of Argos locations. Latitude, type of environment, and transmitter movement are among the main candidate factors affecting accuracy. A posteriori data filtering can remove “bad” locations, but here again testing is needed to refine filters. First, we evaluated experimentally the accuracy of Argos locations in a polar terrestrial environment (Nunavut, Canada), with both static and mobile transmitters transported by humans and coupled to GPS transmitters. We report static errors among the lowest published. However, the 68th error percentiles of mobile transmitters were 1.7 to 3.8 times greater than those of static transmitters. Second, we tested how different filtering methods influence the quality of Argos location datasets. Accuracy of location datasets was best improved by retaining only locations of the best classes (LC3 and LC2), while the Douglas Argos filter and a homemade speed filter yielded similar performance and retained more locations. All filters effectively reduced the 68th error percentiles. Finally, we assessed how location error impacted, at six spatial scales, two common estimators of home-range size (a proxy of animal space-use behavior synthesizing movements): the minimum convex polygon and the fixed kernel estimator. Location error led to a sometimes dramatic overestimation of home-range size, especially at very local scales. We conclude that Argos telemetry is appropriate for studying medium-size terrestrial animals in polar environments, but recommend that location errors always be measured and evaluated against research hypotheses, and that data always be filtered before analysis.
How movement speed of transmitters affects location error needs additional research. PMID:26545245
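A speed filter of the kind tested above discards any fix that would imply an implausible travel speed from the last retained fix; a minimal greedy version (the 10 m/s threshold and the coordinates are illustrative, not the authors' settings):

```python
import numpy as np

def speed_filter(times_s, x_m, y_m, vmax_ms=10.0):
    """Greedy speed filter: drop any fix implying a speed above vmax
    relative to the last *kept* fix. Returns kept indices."""
    keep = [0]
    for i in range(1, len(times_s)):
        j = keep[-1]
        d = np.hypot(x_m[i] - x_m[j], y_m[i] - y_m[j])
        dt = times_s[i] - times_s[j]
        if dt > 0 and d / dt <= vmax_ms:
            keep.append(i)
    return keep

t = np.array([0, 60, 120, 180, 240.0])
x = np.array([0, 100, 5000, 300, 400.0])   # third fix is an outlier jump
y = np.zeros(5)
print(speed_filter(t, x, y))               # -> [0, 1, 3, 4]
```

Checking against the previously *kept* fix rather than the immediately preceding one is what lets the filter reject a single outlier without also rejecting the legitimate fix that follows it; the 68th error percentile reported in the study is then simply `np.percentile(errors, 68)` over the retained fixes.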
NASA Astrophysics Data System (ADS)
Smith, David R.; Gowda, Vinay R.; Yurduseven, Okan; Larouche, Stéphane; Lipworth, Guy; Urzhumov, Yaroslav; Reynolds, Matthew S.
2017-01-01
Wireless power transfer (WPT) has been an active topic of research, with a number of WPT schemes implemented in the near-field (coupling) and far-field (radiation) regimes. Here, we consider a beamed WPT scheme based on a dynamically reconfigurable source aperture transferring power to receiving devices within the Fresnel region. In this context, the dynamic aperture resembles a reconfigurable lens capable of focusing power to a well-defined spot, whose dimension can be related to a point spread function. The necessary amplitude and phase distribution of the field imposed over the aperture can be determined in a holographic sense, by interfering a hypothetical point source located at the receiver location with a plane wave at the aperture location. While conventional technologies, such as phased arrays, can achieve the required control over phase and amplitude, they typically do so at a high cost; alternatively, metasurface apertures can achieve dynamic focusing with potentially lower cost. We present an initial tradeoff analysis of the Fresnel region WPT concept assuming a metasurface aperture, relating the key parameters such as spot size, aperture size, wavelength, and focal distance, as well as reviewing system considerations such as the availability of sources and power transfer efficiency. We find that approximate design formulas derived from the Gaussian optics approximation provide useful estimates of system performance, including transfer efficiency and coverage volume. The accuracy of these formulas is confirmed through numerical studies.
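The Gaussian-optics estimates mentioned above tie the focal spot size to wavelength, focal distance, and aperture size; to first order the diffraction-limited spot width scales as λF/D, where the exact prefactor depends on the aperture illumination. A quick sketch with assumed microwave-band numbers (not values from the paper):

```python
def fresnel_spot_size(wavelength_m, focal_m, aperture_m):
    """First-order diffraction-limited focal spot width ~ lambda * F / D.
    A Gaussian-optics estimate; the true prefactor depends on illumination."""
    return wavelength_m * focal_m / aperture_m

# Hypothetical system: 10 GHz (3 cm wavelength), 1 m aperture, 2 m focal distance.
spot = fresnel_spot_size(0.03, 2.0, 1.0)
print(f"{spot * 100:.1f} cm")   # -> 6.0 cm
```

The same relation drives the tradeoff analysis in the paper: halving the wavelength or doubling the aperture shrinks the spot, which tightens power delivery but also narrows the coverage volume at a given focal distance.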
Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang
2016-01-01
Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signaling combined with time difference of arrival (TDOA) ranging is an effective LBS technique with regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
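The range-smoothing stage can be illustrated with a scalar random-walk Kalman filter over noisy range measurements; the process/measurement variances below are tuning assumptions for the sketch, not the values used in the system:

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.25):
    """Scalar random-walk Kalman filter over noisy range measurements.
    q: process variance, r: measurement variance (tuning assumptions)."""
    x, p = z[0], 1.0
    out = [x]
    for zk in z[1:]:
        p = p + q                  # predict: state uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (zk - x)       # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
true_range = 5.0
z = true_range + rng.normal(0, 0.5, 100)   # noisy TDOA-derived ranges (meters)
sm = kalman_smooth(z)
print(f"raw std {z.std():.2f} m -> smoothed std {sm[20:].std():.2f} m")
```

Raising q makes the filter track a moving tag more quickly at the cost of less noise rejection; the same predict/update structure extends to the 2-D position smoothing performed by the system's location engine.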
Valderrama-Landeros, L; Flores-de-Santiago, F; Kovacs, J M; Flores-Verdugo, F
2017-12-14
Optimizing the classification accuracy of a mangrove forest is of utmost importance for conservation practitioners. Mangrove forest mapping using satellite-based remote sensing techniques is by far the most common method of classification currently used, given the logistical difficulties of field endeavors in these forested wetlands. However, there is now an abundance of satellite sensors from which to choose, which has led to substantially different estimations of mangrove forest location and extent, with particular concern for degraded systems. The objective of this study was to assess the accuracy of mangrove forest classification using different remotely sensed data sources (i.e., Landsat-8, SPOT-5, Sentinel-2, and WorldView-2) for a system located along the Pacific coast of Mexico. Specifically, we examined a stressed semiarid mangrove forest which offers a variety of conditions such as dead areas, degraded stands, healthy mangroves, and very dense mangrove island formations. The results indicated that Landsat-8 (30 m per pixel) had the lowest overall accuracy at 64% and that WorldView-2 (1.6 m per pixel) had the highest at 93%. Moreover, the SPOT-5 and Sentinel-2 classifications (10 m per pixel) were very similar, with accuracies of 75 and 78%, respectively. In comparison to WorldView-2, the other sensors overestimated the extent of Laguncularia racemosa and underestimated the extent of Rhizophora mangle. For sensors of this type, higher spatial resolution can be particularly important for mapping the small mangrove islands that often occur in degraded mangrove systems.
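Overall accuracy, as reported per sensor above, is the trace of the confusion matrix divided by the total sample count; a minimal sketch with an invented 3-class matrix (the class names and counts are illustrative only):

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy = trace / total of a confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Hypothetical 3-class matrix: dead mangrove, L. racemosa, R. mangle.
cm = [[40,  5,  5],
      [ 4, 80, 16],
      [ 2, 10, 38]]
print(f"{overall_accuracy(cm):.2f}")   # -> 0.79
```

Because overall accuracy pools all classes, per-class omission and commission errors (the off-diagonal rows and columns) are what reveal the over/underestimation of individual species noted in the study.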
Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes
Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.
2004-01-01
We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are ≤ 50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.
Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco
2017-10-27
Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
Secure communication via an energy-harvesting untrusted relay in the presence of an eavesdropper
NASA Astrophysics Data System (ADS)
Tuan, Van Phu; Kong, Hyung Yun
2018-02-01
This article studies a secure communication of a simultaneous wireless information and power transfer system in which an energy-constrained untrusted relay, which harvests energy from the wireless signals, helps the communication between the source and destination and is able to decode the source's confidential signal. Additionally, the source's confidential signal is also overheard by a passive eavesdropper. To create positive secrecy capacity, a destination-assisted jamming signal that is completely cancelled at the destination is adopted. Moreover, the jamming signal is also exploited as an additional energy source. To evaluate the secrecy performance, analytical expressions for the secrecy outage probability (SOP) and the average secrecy capacity are derived. Moreover, a high-power approximation for the SOP is presented. The accuracy of the analytical results is verified by Monte Carlo simulations. Numerical results provide valuable insights into the effect of various system parameters, such as the energy-harvesting efficiency, secrecy rate threshold, power-splitting ratio, transmit powers, and locations of the relay and eavesdropper, on the secrecy performance.
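The Monte Carlo verification step can be sketched for a much simpler channel than the paper's energy-harvesting relay model. The snippet below assumes plain Rayleigh fading on the legitimate and eavesdropper links with hypothetical average SNRs, and estimates the secrecy outage probability (SOP) as the fraction of trials in which the instantaneous secrecy capacity falls below the target secrecy rate.

```python
import numpy as np

def secrecy_outage_prob(snr_d_db, snr_e_db, rate_s, n_trials=200_000, seed=0):
    """Monte Carlo SOP estimate under Rayleigh fading (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Exponentially distributed instantaneous SNRs (Rayleigh fading power)
    snr_d = 10 ** (snr_d_db / 10) * rng.exponential(size=n_trials)
    snr_e = 10 ** (snr_e_db / 10) * rng.exponential(size=n_trials)
    # Instantaneous secrecy capacity: [log2(1+SNR_D) - log2(1+SNR_E)]^+
    c_s = np.maximum(np.log2(1 + snr_d) - np.log2(1 + snr_e), 0.0)
    return float(np.mean(c_s < rate_s))
```

A stronger eavesdropper link raises the outage probability, which is the qualitative trend the closed-form SOP expressions capture.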
Performance of a novel SQUID-based superconducting imaging-surface magnetoencephalography system
NASA Astrophysics Data System (ADS)
Kraus, R. H.; Volegov, P.; Maharajh, K.; Espy, M. A.; Matlashov, A. N.; Flynn, E. R.
2002-03-01
Performance of a recently completed whole-head magnetoencephalography system using a superconducting imaging surface (SIS) surrounding an array of 150 SQUID magnetometers is reported. The helmet-like SIS is hemispherical in shape with a brim. Conceptually, the SIS images nearby sources onto the SQUIDs while shielding the sensors from distant "noise" sources. A finite element method (FEM) description using the as-built geometry was developed to describe the SIS effect on source fields by imposing B⊥(surface) = 0. Sensors consist of 8×8 mm² SQUID magnetometers with 0.84 nT/Φ0 sensitivity and <3 fT/√Hz noise. A series of phantom experiments to verify system efficacy has been completed. Simple dry-wire phantoms were used to eliminate model dependence from our results. Phantom coils were distributed throughout the volume encompassed by the array with a variety of orientations. Each phantom coil was precisely machined and located to better than 25 μm and 10 mrad accuracy. Excellent agreement between model-calculated and measured magnetic field distributions was found for all phantom coil positions and orientations. Good agreement was found between modeled and measured shielding of the SQUIDs from sources external to the array, showing significant frequency-independent shielding. Phantom localization precision was better than 0.5 mm at all locations, with a mean of better than 0.3 mm.
Activity patterns of Californians: Use of and proximity to indoor pollutant sources
NASA Astrophysics Data System (ADS)
Jenkins, Peggy L.; Phillips, Thomas J.; Mulberg, Elliot J.; Hui, Steve P.
The California Air Resources Board funded a statewide survey of activity patterns of Californians over 11 years of age in order to improve the accuracy of exposure assessments for air pollutants. Telephone interviews were conducted with 1762 respondents over the four seasons from fall 1987 through summer 1988. In addition to completing a 24-h recall diary of activities and locations, participants also responded to questions about their use of and proximity to potential pollutant sources. Results are presented regarding time spent by Californians in different activities and locations relevant to pollutant exposure, and their frequency of use of or proximity to pollutant sources including cigarettes, consumer products such as paints and deodorizers, combustion appliances and motor vehicles. The results show that Californians spend, on average, 87% of their time indoors, 7% in enclosed transit and 6% outdoors. At least 62% of the population over 11 years of age and 46% of nonsmokers are near others' tobacco smoke at some time during the day. Potential exposure to different pollutant sources appears to vary among different gender and age groups. For example, women are more likely to use or be near personal care products and household cleaning agents, while men are more likely to be exposed to environmental tobacco smoke, solvents and paints. Data from this study can be used to reduce significantly the uncertainty associated with risk assessments for many pollutants.
NASA Astrophysics Data System (ADS)
Hynds, Paul D.; Misstear, Bruce D.; Gill, Laurence W.
2012-12-01
Groundwater quality analyses were carried out on samples from 262 private sources in the Republic of Ireland during the period from April 2008 to November 2010, with microbial quality assessed by thermotolerant coliform (TTC) presence. Assessment of potential microbial contamination risk factors was undertaken at all sources, and local meteorological data were also acquired. Overall, 28.9% of wells tested positive for TTC, with risk analysis indicating that source type (i.e., borehole or hand-dug well), local bedrock type, local subsoil type, groundwater vulnerability, septic tank setback distance, and 48 h antecedent precipitation were all significantly associated with TTC presence (p < 0.05). A number of source-specific design parameters were also significantly associated with bacterial presence. Hierarchical logistic regression with stepwise parameter entry was used to develop a private well susceptibility model, with the final model exhibiting a mean predictive accuracy of >80% (TTC present or absent) when compared to an independent validation data set. Model hierarchies of primary significance are source design (20%), septic tank location (11%), hydrogeological setting (10%), and antecedent 120 h precipitation (2%). Sensitivity analysis shows that the probability of contamination is highly sensitive to septic tank setback distance, with probability increasing linearly with decreases in setback distance. Likewise, contamination probability was shown to increase with increasing antecedent precipitation. Results show that while groundwater vulnerability category is a useful indicator of aquifer susceptibility to contamination, its suitability with regard to source contamination is less clear. The final model illustrates that both localized (well-specific) and generalized (aquifer-specific) contamination mechanisms are involved in contamination events, with localized bypass mechanisms dominant. 
The susceptibility model developed here could be employed in the appropriate location, design, construction, and operation of private groundwater wells, thereby decreasing the contamination risk, and hence health risk, associated with these sources.
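The reported susceptibility model is a hierarchical stepwise logistic regression; as a minimal, hypothetical stand-in, the sketch below fits a plain logistic regression by gradient descent to a single synthetic predictor (septic tank setback distance) and recovers the reported qualitative behaviour: contamination probability rising as setback distance decreases.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Gradient-descent logistic regression (minimal sketch, no stepwise entry)."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w += lr * X.T @ (y - p) / len(y)        # ascent on the log-likelihood
    return w

def predict_proba(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

With a negative fitted coefficient on (standardized) setback distance, predicted contamination probability increases as setback shrinks, mirroring the sensitivity-analysis result above.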
Time-of-flight mass measurements for nuclear processes in neutron star crusts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrade, Alfredo; Matos, M.; Schatz, Hendrik
2011-01-01
The location of electron capture heat sources in the crust of accreting neutron stars depends on the masses of extremely neutron-rich nuclei. We present first results from a new implementation of the time-of-flight technique to measure nuclear masses of rare isotopes at the National Superconducting Cyclotron Laboratory. The masses of 16 neutron-rich nuclei in the Sc-Ni element range were determined simultaneously, improving the accuracy compared to previous data in 12 cases. The masses of 61V, 63Cr, 66Mn, and 74Ni were measured for the first time, with mass excesses of 30.510(890) MeV, 35.280(650) MeV, 36.900(790) MeV, and 49.210(990) MeV, respectively. With the measurement of the 66Mn mass, the location of the two dominant heat sources in the outer crust of accreting neutron stars, which exhibit so-called superbursts, is now experimentally constrained. We find that the 66Fe → 66Mn electron capture transition occurs significantly closer to the surface than previously assumed, because our new experimental Q-value is 2.1 MeV smaller than predicted by the FRDM mass model. The results also provide new insights into the structure of neutron-rich nuclei around N = 40.
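The two quantities at the heart of this abstract can be sketched with the standard time-of-flight relations; these formulas are generic TOF-Bρ bookkeeping, not the experiment's actual calibration, and the numbers in the test are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def mass_from_tof(brho_tm, path_m, tof_s, charge):
    """Mass (MeV/c^2) from magnetic rigidity and time of flight.

    Standard TOF-Brho relation: p = q*Brho and m = p/(beta*gamma),
    with beta measured from the flight path and flight time.
    """
    beta = path_m / (C * tof_s)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    p_mev = charge * brho_tm * C * 1e-6   # momentum in MeV/c (charge in units of e)
    return p_mev / (beta * gamma)

def q_ec(mass_excess_parent_mev, mass_excess_daughter_mev):
    """Electron-capture Q-value from atomic mass excesses:
    Q_EC = Delta(parent) - Delta(daughter)."""
    return mass_excess_parent_mev - mass_excess_daughter_mev
```

A smaller measured Q-value than the mass-model prediction shifts the electron capture threshold to a lower density, i.e. closer to the neutron star surface, which is the physical conclusion drawn above.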
NASA Astrophysics Data System (ADS)
Hallez, Hans; Staelens, Steven; Lemahieu, Ignace
2009-10-01
EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.
NASA Astrophysics Data System (ADS)
Heim, N. A.; Kishor, P.; McClennen, M.; Peters, S. E.
2012-12-01
Free and open source software and data facilitate novel research by allowing geoscientists to quickly and easily bring together disparate data that have been independently collected for many different purposes. The Earth-Base project brings together several datasets using a common space-time framework that is managed and analyzed using open source software. Earth-Base currently draws on stratigraphic, paleontologic, tectonic, geodynamic, seismic, botanical, hydrologic and cartographic data. Furthermore, Earth-Base is powered by RESTful data services operating on top of PostgreSQL and MySQL databases and the R programming environment, making much of the functionality accessible to third parties even though the detailed data schemas are unknown to them. We demonstrate the scientific potential of Earth-Base and other FOSS by comparing the stated age of fossil collections to the age of the bedrock upon which they are geolocated. This analysis makes use of web services for the Paleobiology Database (PaleoDB), Macrostrat, the 2005 Geologic Map of North America (Garrity et al. 2009), and geologic maps of the conterminous United States. It provides a quick assessment of the temporal and spatial congruence of the paleontologic and geologic map datasets. We find that 56.1% of the 52,593 PaleoDB collections have ages temporally consistent with the bedrock upon which they are located, based on the Geologic Map of North America. Surprisingly, fossil collections within the conterminous United States are more consistently located on bedrock with congruent geological ages, even though the USA maps are spatially and temporally more precise: approximately 57% of the 37,344 PaleoDB collections in the USA are located on similarly aged geologic map units. Increased accuracy is attributed to the lumping of Pliocene and Quaternary geologic map units along the Atlantic and Gulf coastal plains in the Geologic Map of North America.
The abundant Pliocene fossil collections are thus located on geologic map units that have an erroneous age designation of Quaternary. We also demonstrate the power of the R programming environment for performing analyses and making publication-quality maps for visualizing results.
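The temporal-congruence test described above reduces to an interval-overlap check between a collection's stated age range and its map unit's age range. A minimal sketch (ages in Ma, each interval written older bound first; counting boundary-touching intervals as congruent is an assumption of this sketch):

```python
def ages_congruent(fossil_age, map_age):
    """True if the fossil age interval overlaps the map-unit age interval.

    Intervals are (older_Ma, younger_Ma); overlap, including a shared
    boundary, counts as temporally congruent.
    """
    f_old, f_young = fossil_age
    m_old, m_young = map_age
    return f_young <= m_old and m_young <= f_old

def fraction_congruent(pairs):
    """Share of (fossil_age, map_age) pairs that are congruent."""
    hits = sum(ages_congruent(f, m) for f, m in pairs)
    return hits / len(pairs)
```

Applied over all geolocated collections, `fraction_congruent` yields exactly the kind of percentage (56.1%, ~57%) reported above.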
NASA Technical Reports Server (NTRS)
Ong, K. M.; Macdoran, P. F.; Thomas, J. B.; Fliegel, H. F.; Skjerve, L. J.; Spitzmesser, D. J.; Batelaan, P. D.; Paine, S. R.; Newsted, M. G.
1976-01-01
A precision geodetic measurement system (Aries, for Astronomical Radio Interferometric Earth Surveying) based on the technique of very long base line interferometry has been designed and implemented through the use of a 9-m transportable antenna and the NASA 64-m antenna of the Deep Space Communications Complex at Goldstone, California. A series of experiments designed to demonstrate the inherent accuracy of a transportable interferometer was performed on a 307-m base line during the period from December 1973 to June 1974. This short base line was chosen in order to obtain a comparison with a conventional survey with a few-centimeter accuracy and to minimize Aries errors due to transmission media effects, source locations, and earth orientation parameters. The base-line vector derived from a weighted average of the measurements, representing approximately 24 h of data, possessed a formal uncertainty of about 3 cm in all components. This average interferometry base-line vector was in good agreement with the conventional survey vector within the statistical range allowed by the combined uncertainties (3-4 cm) of the two techniques.
Calorimetric method for determination of 51Cr neutrino source activity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veretenkin, E. P., E-mail: veretenk@inr.ru; Gavrin, V. N.; Danshin, S. N.
Experimental study of nonstandard neutrino properties using high-intensity artificial neutrino sources requires the activity of the sources to be determined with high accuracy. In the BEST project, a calorimetric system has been created for measuring the activity of high-intensity (a few MCi) 51Cr-based neutrino sources with an accuracy of 0.5-1%. In the paper, the main factors affecting the accuracy of determining the neutrino source activity are discussed. The calorimetric system design and the calibration results using a thermal simulator of the source are presented.
Hansen, Scott K.; Vesselinov, Velimir Valentinov
2016-10-01
We develop empirically grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and that velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation (ADE). We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantity can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
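The empirical error envelopes amount to percentile radii of the localization-error distribution across Monte Carlo realizations. A minimal sketch over hypothetical 2-D location estimates:

```python
import numpy as np

def error_envelope(est_xy, true_xy, levels=(90, 95)):
    """Radius containing the given percentage of localization errors
    across Monte Carlo realizations (empirical confidence envelope)."""
    err = np.linalg.norm(np.asarray(est_xy) - np.asarray(true_xy), axis=1)
    return {lv: float(np.percentile(err, lv)) for lv in levels}
```

Tabulating these envelope radii against the number of wells and the interpretation-model fidelity reproduces the kind of data-quantity versus model-quality trade-off described above.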
Paans, Wolter; Sermeus, Walter; Nieweg, Roos; van der Schans, Cees
2010-01-01
The purpose of this study was to determine how knowledge sources, ready knowledge, and disposition toward critical thinking and reasoning skills influence the accuracy of student nurses' diagnoses. A randomized controlled trial was conducted to determine the influence of knowledge sources. We used the following questionnaires: (a) knowledge inventory, (b) California Critical Thinking Disposition Inventory, and (c) Health Science Reasoning Test (HSRT). The use of knowledge sources had very little influence on the accuracy of nursing diagnoses. Accuracy was significantly related to the analysis domain of the HSRT. Students were unable to operationalize knowledge sources to derive accurate diagnoses and did not effectively use reasoning skills. Copyright 2010 Elsevier Inc. All rights reserved.
Evaluating the Effectiveness of DART® Buoy Networks Based on Forecast Accuracy
NASA Astrophysics Data System (ADS)
Percival, Donald B.; Denbo, Donald W.; Gica, Edison; Huang, Paul Y.; Mofjeld, Harold O.; Spillane, Michael C.; Titov, Vasily V.
2018-04-01
A performance measure for a DART® tsunami buoy network has been developed. DART® buoys are used to detect tsunamis, but the full potential of the data they collect is realized through accurate forecasts of inundations caused by the tsunamis. The performance measure assesses how well the network achieves its full potential through a statistical analysis of simulated forecasts of wave amplitudes outside an impact site and a consideration of how much the forecasts are degraded in accuracy when one or more buoys are inoperative. The analysis uses simulated tsunami amplitude time series collected at each buoy from selected source segments in the Short-term Inundation Forecast for Tsunamis database and involves a set of 1000 forecasts for each buoy/segment pair at sites just offshore of selected impact communities. Random error-producing scatter in the time series is induced by uncertainties in the source location, addition of real oceanic noise, and imperfect tidal removal. Comparison with an error-free standard leads to root-mean-square errors (RMSEs) for DART® buoys located near a subduction zone. The RMSEs indicate which buoy provides the best forecast (lowest RMSE) for sections of the zone, under a warning-time constraint for the forecasts of 3 h. The analysis also shows how the forecasts are degraded (larger minimum RMSE among the remaining buoys) when one or more buoys become inoperative. The RMSEs provide a way to assess array augmentation or redesign, such as moving buoys to more optimal locations. Examples are shown for buoys off the Aleutian Islands and off the West Coast of South America, for impact sites at Hilo, HI, and along the US West Coast (Crescent City, CA, and Port San Luis, CA, USA). A simple measure (coded green, yellow, or red) of the current status of the network's ability to deliver accurate forecasts is proposed to flag the urgency of buoy repair.
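The ranking step, choosing the buoy whose simulated forecasts have the lowest root-mean-square error against the error-free standard, can be sketched as follows (buoy labels and amplitude series are hypothetical):

```python
import numpy as np

def rank_buoys(forecasts, reference):
    """Rank buoys by forecast RMSE against an error-free reference series;
    the first buoy in the returned order gives the best forecast."""
    reference = np.asarray(reference, dtype=float)
    rmse = {b: float(np.sqrt(np.mean((np.asarray(f) - reference) ** 2)))
            for b, f in forecasts.items()}
    return sorted(rmse, key=rmse.get), rmse
```

Re-running the ranking with one buoy deleted from `forecasts` shows how much the minimum RMSE degrades when that buoy is inoperative, which is the basis of the green/yellow/red status measure.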
NASA Astrophysics Data System (ADS)
Crittenden, P. E.; Balachandar, S.
2018-07-01
The radial one-dimensional Euler equations are often rewritten in what is known as the geometric source form. The differential operator is identical to the Cartesian case, but source terms result. Since the theory and numerical methods for the Cartesian case are well-developed, they are often applied without modification to cylindrical and spherical geometries. However, numerical conservation is lost. In this article, AUSM^+-up is applied to a numerically conservative (discrete) form of the Euler equations labeled the geometric form, a nearly conservative variation termed the geometric flux form, and the geometric source form. The resulting numerical methods are compared analytically and numerically through three types of test problems: subsonic, smooth, steady-state solutions, Sedov's similarity solution for point or line-source explosions, and shock tube problems. Numerical conservation is analyzed for all three forms in both spherical and cylindrical coordinates. All three forms result in constant enthalpy for steady flows. The spatial truncation errors have essentially the same order of convergence, but the rate constants are superior for the geometric and geometric flux forms for the steady-state solutions. Only the geometric form produces the correct shock location for Sedov's solution, and a direct connection between the errors in the shock locations and energy conservation is found. The shock tube problems are evaluated with respect to feature location using an approximation with a very fine discretization as the benchmark. Extensions to second order appropriate for cylindrical and spherical coordinates are also presented and analyzed numerically. Conclusions are drawn, and recommendations are made. A derivation of the steady-state solution is given in the Appendix.
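The conservation contrast can be demonstrated on the continuity equation alone with a first-order upwind toy scheme (this is not AUSM^+-up). The "geometric" update below uses face areas r² and true spherical cell volumes and conserves total mass to round-off, while the Cartesian-operator-plus-source update of the same PDE does not.

```python
import numpy as np

def total_mass(rho, r_edges):
    """Total mass per steradian with true spherical shell volumes."""
    vol = (r_edges[1:] ** 3 - r_edges[:-1] ** 3) / 3.0
    return float(np.sum(rho * vol))

def step_geometric(rho, u, r_edges, dt):
    """Conservative 'geometric' form: area-weighted upwind fluxes (u > 0)."""
    rho_f = np.concatenate([[0.0], rho])     # upwind face density, zero inflow
    flux = r_edges ** 2 * u * rho_f          # mass flux through each face
    flux[-1] = 0.0                           # closed outer boundary
    vol = (r_edges[1:] ** 3 - r_edges[:-1] ** 3) / 3.0
    return rho - dt * (flux[1:] - flux[:-1]) / vol

def step_source(rho, u, r_edges, dt):
    """Cartesian operator plus geometric source -2*rho*u/r: same PDE,
    but numerical mass conservation is lost."""
    r_c = 0.5 * (r_edges[1:] + r_edges[:-1])
    dr = np.diff(r_edges)
    rho_f = np.concatenate([[0.0], rho])
    flux = u * rho_f
    flux[-1] = 0.0
    return rho - dt * (flux[1:] - flux[:-1]) / dr - dt * 2.0 * rho * u / r_c
```

In the geometric form the face fluxes telescope, so with closed boundaries the total mass is unchanged to machine precision; the source form accumulates a mass error each step, which is the mechanism behind the shock-location errors discussed above.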
Comparative study of shear wave-based elastography techniques in optical coherence tomography
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Rolland, Jannick P.; Yao, Jianing; Meemon, Panomsak; Parker, Kevin J.
2017-03-01
We compare five optical coherence elastography techniques able to estimate the shear speed of waves generated by one and two sources of excitation. The first two techniques make use of one piezoelectric actuator in order to produce a continuous shear wave propagation or a tone-burst propagation (TBP) of 400 Hz over a gelatin tissue-mimicking phantom. The remaining techniques utilize a second actuator located on the opposite side of the region of interest in order to create three types of interference patterns: crawling waves, swept crawling waves, and standing waves, depending on the selection of the frequency difference between the two actuators. We evaluated accuracy, contrast to noise ratio, resolution, and acquisition time for each technique during experiments. Numerical simulations were also performed in order to support the experimental findings. Results suggest that in the presence of strong internal reflections, single source methods are more accurate and less variable when compared to the two-actuator methods. In particular, TBP reports the best performance with an accuracy error <4.1%. Finally, the TBP was tested in a fresh chicken tibialis anterior muscle with a localized thermally ablated lesion in order to evaluate its performance in biological tissue.
Application of the aeroacoustic analogy to a shrouded, subsonic, radial fan
NASA Astrophysics Data System (ADS)
Buccieri, Bryan M.; Richards, Christopher M.
2016-12-01
A study was conducted to investigate the predictive capability of computational aeroacoustics with respect to a shrouded, subsonic, radial fan. A three dimensional unsteady fluid dynamics simulation was conducted to produce aerodynamic data used as the acoustic source for an aeroacoustics simulation. Two acoustic models were developed: one modeling the forces on the rotating fan blades as a set of rotating dipoles located at the center of mass of each fan blade and one modeling the forces on the stationary fan shroud as a field of distributed stationary dipoles. Predicted acoustic response was compared to experimental data measured at two operating speeds using three different outlet restrictions. The blade source model predicted overall far field sound power levels within 5 dB averaged over the six different operating conditions while the shroud model predicted overall far field sound power levels within 7 dB averaged over the same conditions. Doubling the density of the computational fluids mesh and using a scale adaptive simulation turbulence model increased broadband noise accuracy. However, computation time doubled and the accuracy of the overall sound power level prediction improved by only 1 dB.
Vehicle-based Methane Mapping Helps Find Natural Gas Leaks and Prioritize Leak Repairs
NASA Astrophysics Data System (ADS)
von Fischer, J. C.; Weller, Z.; Roscioli, J. R.; Lamb, B. K.; Ferrara, T.
2017-12-01
Recently, mobile methane sensing platforms have been developed to detect and locate natural gas (NG) leaks in urban distribution systems and to estimate their size. Although this technology has already been used in targeted deployment for prioritization of NG pipeline infrastructure repair and replacement, one open question regarding this technology is how effective the resulting data are for prioritizing infrastructure repair and replacement. To answer this question we explore the accuracy and precision of the natural gas leak location and emission estimates provided by methane sensors placed on Google Street View (GSV) vehicles. We find that the vast majority (75%) of methane emitting sources detected by these mobile platforms are NG leaks and that the location estimates are effective at identifying the general location of leaks. We also show that the emission rate estimates from mobile detection platforms are able to effectively rank NG leaks for prioritizing leak repair. Our findings establish that mobile sensing platforms are an efficient and effective tool for improving the safety and reducing the environmental impacts of low-pressure NG distribution systems by reducing atmospheric methane emissions.
The verification of lightning location accuracy in Finland deduced from lightning strikes to trees
NASA Astrophysics Data System (ADS)
Mäkelä, Antti; Mäkelä, Jakke; Haapalainen, Jussi; Porjo, Niko
2016-05-01
We present a new method to determine the ground truth and accuracy of lightning location systems (LLS), using natural lightning strikes to trees. Observations of strikes to trees are being collected with a Web-based survey tool at the Finnish Meteorological Institute. Since the Finnish thunderstorms tend to have on average a low flash rate, it is often possible to identify from the LLS data unambiguously the stroke that caused damage to a given tree. The coordinates of the tree are then the ground truth for that stroke. The technique has clear advantages over other methods used to determine the ground truth. Instrumented towers and rocket launches measure upward-propagating lightning. Video and audio records, even with triangulation, are rarely capable of high accuracy. We present data for 36 quality-controlled tree strikes in the years 2007-2008. We show that the average inaccuracy of the lightning location network for that period was 600 m. In addition, we show that the 50% confidence ellipse calculated by the lightning location network and used operationally for describing the location accuracy is physically meaningful: half of all the strikes were located within the uncertainty ellipse of the nearest recorded stroke. Using tree strike data thus allows not only the accuracy of the LLS to be estimated but also the reliability of the uncertainty ellipse. To our knowledge, this method has not been attempted before for natural lightning.
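The containment check behind the 50% confidence-ellipse statistic is a coordinate rotation plus the ellipse inequality. A minimal sketch (offsets in metres; the azimuth convention of the major axis is an assumption of this sketch):

```python
import math

def inside_ellipse(dx, dy, semi_major, semi_minor, azimuth_deg):
    """True if the offset (dx, dy) from the located stroke lies within the
    confidence ellipse (semi-axes in metres, major-axis azimuth in degrees)."""
    th = math.radians(azimuth_deg)
    # rotate the offset into the ellipse's principal-axis frame
    xa = dx * math.cos(th) + dy * math.sin(th)
    ya = -dx * math.sin(th) + dy * math.cos(th)
    return (xa / semi_major) ** 2 + (ya / semi_minor) ** 2 <= 1.0
```

Counting the fraction of tree-strike ground truths for which this returns True directly tests whether the stated 50% ellipse really contains half of the strikes, which is the reliability check reported above.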
Feasibility of imaging epileptic seizure onset with EIT and depth electrodes.
Witkowska-Wrobel, Anna; Aristovich, Kirill; Faulkner, Mayo; Avery, James; Holder, David
2018-06-01
Imaging ictal and interictal activity with Electrical Impedance Tomography (EIT) using intracranial electrode mats has been demonstrated in animal models of epilepsy. In human epilepsy subjects undergoing presurgical evaluation, depth electrodes are often preferred. The purpose of this work was to evaluate the feasibility of using EIT to localise epileptogenic areas with intracranial electrodes in humans. The accuracy of localisation of the ictal onset zone was evaluated in computer simulations using 9M-element FEM models derived from three subjects. Perturbations of 5 mm radius imitating a single seizure onset event were placed in several locations forming two groups: under depth electrode coverage and in the contralateral hemisphere. Simulations were made for impedance changes of 1%, expected for neuronal depolarisation over milliseconds, and 10%, for cell swelling over seconds. Reconstructions were compared with EEG source modelling for a radially orientated dipole with respect to the closest EEG recording contact. The best accuracy of EIT was obtained using all depth and 32 scalp electrodes, greater than the equivalent accuracy with EEG inverse source modelling. The localisation error was 5.2 ± 1.8, 4.3 ± 0 and 46.2 ± 25.8 mm for perturbations within the volume enclosed by depth electrodes and 29.6 ± 38.7, 26.1 ± 36.2, 54.0 ± 26.2 mm for those without (EIT 1%, 10% change, EEG source modelling, n = 15 in 3 subjects, p < 0.01). As EIT was insensitive to source dipole orientation, all 15 perturbations within the volume enclosed by depth electrodes were localised, whereas the standard clinical method of visual inspection of EEG voltages localised only 8 out of 15 cases. This suggests that adding EIT to SEEG measurements could be beneficial in localising the onset of seizures. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Accuracy of WAAS-Enabled GPS-RF Warning Signals When Crossing a Terrestrial Geofence
Grayson, Lindsay M.; Keefe, Robert F.; Tinkham, Wade T.; Eitel, Jan U. H.; Saralecos, Jarred D.; Smith, Alistair M. S.; Zimbelman, Eloise G.
2016-01-01
Geofences are virtual boundaries based on geographic coordinates. When combined with global positioning system (GPS), or more generally global navigation satellite system (GNSS), transmitters, geofences provide a powerful tool for monitoring the location and movements of objects of interest through proximity alarms. However, the accuracy of geofence alarms in GNSS-radio frequency (GNSS-RF) transmitter-receiver systems has not been tested. To address this gap, a cart with a GNSS-RF locator was run on a straight path in a balanced factorial experiment with three levels of cart speed, three angles of geofence intersection, three receiver distances from the track, and three replicates. Locator speed, receiver distance, and geofence intersection angle all affected geofence alarm accuracy in an analysis of variance (p = 0.013, p = 2.58 × 10−8, and p = 0.0006, respectively), as did all treatment interactions (p < 0.0001). Slower locator speed, acute geofence intersection angle, and closest receiver distance were associated with reduced accuracy of geofence alerts. PMID:27322287
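Geofence proximity alarms of the kind tested above ultimately rest on detecting when consecutive position fixes fall on opposite sides of a boundary. A minimal sketch, assuming a planar coordinate frame and a straight fence segment; this is not the actual GNSS-RF alarm logic from the study:

```python
def line_side(ax, ay, bx, by, px, py):
    """Signed cross product: which side of the fence line A->B point P is on
    (the fence is treated as an infinite line for brevity)."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def crossed_geofence(fence, prev_fix, curr_fix):
    """True if two consecutive position fixes fall on opposite sides of the
    fence, i.e. the track crossed the boundary between the fixes."""
    (ax, ay), (bx, by) = fence
    s1 = line_side(ax, ay, bx, by, *prev_fix)
    s2 = line_side(ax, ay, bx, by, *curr_fix)
    return s1 * s2 < 0  # sign change => crossing

# Fence along the y-axis; the cart moves from x = -1 to x = +1:
print(crossed_geofence(((0, -10), (0, 10)), (-1.0, 0.0), (1.0, 0.0)))  # True
```

In a real system the fix rate interacts with speed and intersection angle: a fast cart crossing at an acute angle may register few fixes near the boundary, which is consistent with the reduced alarm accuracy reported above.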
Accuracy and consistency of weights provided by home bathroom scales.
Yorkin, Meredith; Spaccarotella, Kim; Martin-Biggers, Jennifer; Quick, Virginia; Byrd-Bredbenner, Carol
2013-12-17
Self-reported body weight is often used for calculation of Body Mass Index because it is easy to collect. Little is known about sources of error introduced by using bathroom scales to measure weight at home. The objective of this study was to evaluate the accuracy and consistency of digital versus dial-type bathroom scales commonly used for self-reported weight. Participants brought functioning bathroom scales (n=18 dial-type, n=43 digital-type) to a central location. Trained researchers assessed accuracy and consistency using certified calibration weights at 10 kg, 25 kg, 50 kg, 75 kg, 100 kg, and 110 kg. Data also were collected on frequency of calibration, age and floor surface beneath the scale. All participants reported using their scale on hard surface flooring. Before calibration, all digital scales displayed 0, but dial scales displayed a mean absolute initial weight of 0.95 (1.9 SD) kg. Digital scales accurately weighed test loads whereas dial-type scale weights differed significantly (p<0.05). Imprecision of dial scales was significantly greater than that of digital scales at all weights (p<0.05). Accuracy and precision did not vary by scale age. Digital home bathroom scales provide sufficiently accurate and consistent weights for public health research. Reminders to zero scales before each use may further improve accuracy of self-reported weight.
Wang, Hubiao; Wu, Lin; Chai, Hua; Xiao, Yaofei; Hsu, Houtse; Wang, Yong
2017-08-10
The variation of a marine gravity anomaly reference map is one of the important factors that affect the location accuracy of INS/Gravity integrated navigation systems in underwater navigation. In this study, based on marine gravity anomaly reference maps, new characteristic parameters of the gravity anomaly were constructed. Those characteristic values were calculated for 13 zones (105°-145° E, 0°-40° N) in the Western Pacific area, and simulation experiments of gravity matching-aided navigation were run. The influence of gravity variations on the accuracy of gravity matching-aided navigation was analyzed, and location accuracy of gravity matching in different zones was determined. Studies indicate that the new parameters may better characterize the marine gravity anomaly. Given the precision of current gravimeters and the resolution and accuracy of reference maps, the location accuracy of gravity matching in China's Western Pacific area is ~1.0-4.0 nautical miles (n miles). In particular, accuracy in regions around the South China Sea and Sulu Sea was the highest, better than 1.5 n miles. The gravity characteristic parameters identified herein and characteristic values calculated in various zones provide a reference for the selection of navigation area and planning of sailing routes under conditions requiring certain navigational accuracy.
Integration of Heterogenous Digital Surface Models
NASA Astrophysics Data System (ADS)
Boesch, R.; Ginzler, C.
2011-08-01
The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images with 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41,000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years, and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high resolution data depend on the local accuracy of the used surface model; therefore precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM-map contains matching codes like high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data were available.
Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure for the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis, that the ADS-DSM is superior due to its better global accuracy of 1m. If the local analysis of the FOM-map within the 100m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution is sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using FOM-map and local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
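The local point-distribution measure described above, triangle areas from a triangulation of the LiDAR returns within a 100 m2 cell, can be sketched as follows. This is a simplified stand-in for the CGAL-based implementation; the triangulation index triples are assumed to be given (e.g. by CGAL or scipy), and the example points are hypothetical:

```python
def triangle_area(p, q, r):
    """Shoelace area of a triangle from three (x, y) points."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def distribution_measure(points, triangles):
    """Mean and variance of triangle areas for a local triangulation.
    `triangles` holds index triples into `points`. A low mean with low
    variance indicates dense, evenly distributed raw points."""
    areas = [triangle_area(points[i], points[j], points[k]) for i, j, k in triangles]
    mean = sum(areas) / len(areas)
    var = sum((a - mean) ** 2 for a in areas) / len(areas)
    return mean, var

# Four LiDAR returns at the corners of a 10 m x 10 m cell, two triangles:
pts = [(0, 0), (10, 0), (10, 10), (0, 10)]
mean, var = distribution_measure(pts, [(0, 1, 2), (0, 2, 3)])
print(mean, var)  # 50.0 0.0
```

Thresholding the variance (or the mean area) of such a cell then decides, per location, whether the LiDAR-DSM is trustworthy enough to replace a badly matched ADS-DSM region or whether the cell is marked NODATA.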
A draft map of the mouse pluripotent stem cell spatial proteome
Christoforou, Andy; Mulvey, Claire M.; Breckels, Lisa M.; Geladaki, Aikaterini; Hurrell, Tracey; Hayward, Penelope C.; Naake, Thomas; Gatto, Laurent; Viner, Rosa; Arias, Alfonso Martinez; Lilley, Kathryn S.
2016-01-01
Knowledge of the subcellular distribution of proteins is vital for understanding cellular mechanisms. Capturing the subcellular proteome in a single experiment has proven challenging, with studies focusing on specific compartments or assigning proteins to subcellular niches with low resolution and/or accuracy. Here we introduce hyperLOPIT, a method that couples extensive fractionation, quantitative high-resolution accurate mass spectrometry with multivariate data analysis. We apply hyperLOPIT to a pluripotent stem cell population whose subcellular proteome has not been extensively studied. We provide localization data on over 5,000 proteins with unprecedented spatial resolution to reveal the organization of organelles, sub-organellar compartments, protein complexes, functional networks and steady-state dynamics of proteins and unexpected subcellular locations. The method paves the way for characterizing the impact of post-transcriptional and post-translational modification on protein location and studies involving proteome-level locational changes on cellular perturbation. An interactive open-source resource is presented that enables exploration of these data. PMID:26754106
Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data
NASA Astrophysics Data System (ADS)
Das, S. B.; Mitra, S.
2015-12-01
We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyze the factors governing the accuracy of our results. The need for such a technique arises in regions with sparse seismic networks. In state-of-the-art location algorithms, a minimum of three station recordings is required for obtaining well resolved locations. However, the problem arises when an event is recorded by fewer than three stations. This may happen for the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support to set up a large network; (c) regions without the economic resources to finance a multi-station network; and (d) poor signal-to-noise ratio for smaller events at most stations, except the one in their closest vicinity. Our technique provides a workable solution to the above problematic scenarios. However, our methodology is strongly dependent on the velocity model of the region. Our method uses a three-step processing: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once this is obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We test our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. The locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed from source mechanism studies of the mainshock and larger aftershocks.
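The first two processing steps, a back-azimuth plus an S-P distance estimate, can be illustrated with a constant-velocity sketch. The P and S velocities and the flat-earth projection below are illustrative assumptions, not values from the study, which ray-traces through a regional 1-D model instead:

```python
import math

VP, VS = 6.0, 3.5  # assumed constant crustal P and S velocities, km/s

def hypocentral_distance(sp_time):
    """Distance from the S-minus-P arrival time for constant velocities:
    d = t_sp * vp * vs / (vp - vs)."""
    return sp_time * VP * VS / (VP - VS)

def epicentre(station_x, station_y, back_azimuth_deg, distance, depth):
    """Project along the back-azimuth to the epicentre (flat-earth sketch;
    back-azimuth measured clockwise from north = +y)."""
    epi = math.sqrt(max(distance ** 2 - depth ** 2, 0.0))  # epicentral distance
    az = math.radians(back_azimuth_deg)
    return station_x + epi * math.sin(az), station_y + epi * math.cos(az)

d = hypocentral_distance(5.0)  # a 5 s S-P time gives 42.0 km here
print(round(d, 1))
print(epicentre(0.0, 0.0, 90.0, d, 10.0))  # event roughly due east of the station
```

In the actual method, step (c), the emergent angle, replaces the assumed depth: together with the back-azimuth and S-P distance it fixes the take-off geometry for ray-tracing through the 1-D velocity model.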
The Advantage of the Second Military Survey in Fluvial Measures
NASA Astrophysics Data System (ADS)
Kovács, G.
2009-04-01
The Second Military Survey of the Habsburg Empire, completed in the 19th century, can be very useful in different scientific investigations owing to its accuracy and data content. The fact that the mapmakers used a geodetic projection, together with the high accuracy of the survey, guarantees that scientists can use these maps and that the represented objects can be evaluated in retrospective studies. Among others, the hydrological information of the map sheets is valuable. The streams were drawn with very thin lines, which also ensures the high accuracy of their location, provided that the geodetic position of the sheet can be reconstructed with high accuracy. After geocoding these maps, we confirmed the high accuracy of the line elements. Not only the location of these lines but also the form of the creeks is usually almost identical to their present shape. The goal of our study was the neotectonic evaluation of the western part of the Pannonian Basin, bordered by the Pinka, Rába and Répce Rivers. Watercourses, especially alluvial ones, react very sensitively to tectonic forcing. However, the present-day courses of the creeks and rivers are mostly regulated and are therefore unsuitable for such studies. Consequently, the watercourses should be reconstructed from maps surveyed prior to the main water control measures. The Second Military Survey is a perfect source for such studies because it is the first survey drawn in a geodetic projection, yet it was made before the creeks were regulated. The maps show intensive agricultural cultivation and silviculture in the study area. Grazing cultivation in the precincts of the streams is especially important for us. That phenomenon, together with data from other sources, proves that the streams had not been regulated at that time. The streams were able to meander and flood their banks, and only natural levees are present. The general morphology south of the Kőszegi Mountains shows typical SSE slopes with low relief, cut off by 30-60 meter high scarps followed by streams.
This prompted us to investigate the neotectonic features, which are also indicated by the alternating meandering of the surveyed streams. After geocoding the maps of the area, the streams were digitised and their sinuosity values were calculated. In places, significant differences in sinuosity were observed along the streams; these can be considered indicators of differential uplift or subsidence of the bedrock/alluvium. This method can be useful in general, if the watercourses mapped in the historical map can be assumed to be unaffected by human activity.
Locating and Modeling Regional Earthquakes with Broadband Waveform Data
NASA Astrophysics Data System (ADS)
Tan, Y.; Zhu, L.; Helmberger, D.
2003-12-01
Retrieving source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory stage in some places with a dense seismic network, such as TriNet in Southern California, a worthwhile revisit of the historical events in these places, or an effective, real-time investigation of small events in many other places, where normally only a few local waveforms plus some short-period recordings are available, is still a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 sec for Pnl and 8-100 sec for surface waves, except for a few anomalous paths involving greater structural complexity; meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching of the available waveform data, which can come from as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration.
The detailed method requires a Green's function library constructed from a regionalized 1-D model together with the necessary calibration information, and adopts a grid-search strategy for both hypocenter and focal mechanism. We show that the whole process can be easily automated and can routinely provide reliable source parameter estimates with a couple of broadband stations. Two applications, in the Tibet Plateau and Southern California, will be presented along with comparisons of results against other methods.
Accuracy of the NDI Wave Speech Research System
ERIC Educational Resources Information Center
Berry, Jeffrey J.
2011-01-01
Purpose: This work provides a quantitative assessment of the positional tracking accuracy of the NDI Wave Speech Research System. Method: Three experiments were completed: (a) static rigid-body tracking across different locations in the electromagnetic field volume, (b) dynamic rigid-body tracking across different locations within the…
Researchermap: a tool for visualizing author locations using Google maps.
Rastegar-Mojarad, Majid; Bales, Michael E; Yu, Hong
2013-01-01
We present ResearcherMap, a tool to visualize the locations of authors of scholarly papers. In response to a query, the system returns a map of author locations. To develop the system, we first populated a database of author locations, geocoding institution locations for all available institutional affiliation data in our database. The database includes all authors of Medline papers from 1990 to 2012. We conducted a formative heuristic usability evaluation of the system and measured the system's accuracy and performance. The accuracy of finding the correct address is 97.5% in our system.
Predicting Gene Structures from Multiple RT-PCR Tests
NASA Astrophysics Data System (ADS)
Kováč, Jakub; Vinař, Tomáš; Brejová, Broňa
It has been demonstrated that the use of additional information such as ESTs and protein homology can significantly improve accuracy of gene prediction. However, many sources of external information are still being omitted from consideration. Here, we investigate the use of product lengths from RT-PCR experiments in gene finding. We present hardness results and practical algorithms for several variants of the problem and apply our methods to a real RT-PCR data set in the Drosophila genome. We conclude that the use of RT-PCR data can improve the sensitivity of gene prediction and locate novel splicing variants.
THE POSITION/STRUCTURE STABILITY OF FOUR ICRF2 SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fomalont, Ed; Johnston, Kenneth; Fey, Alan
2011-03-15
Four close radio sources in the International Celestial Reference Frame (ICRF) catalog were observed using phase referencing with the VLBA at 43, 23, and 8.6 GHz, and with VERA at 23 GHz over a one-year period. The goal was to determine the stability of the radio cores and to assess structure effects associated with positions in the ICRF. Although the four sources were compact at 8.6 GHz, the VLBA images at 43 GHz with 0.3 mas resolution showed that all were composed of several components. A component in each source was identified as the radio core using some or all of the following emission properties: compactness, spectral index, location at the end of the extended emission region, and being stationary in the sky. Over the observing period, the relative positions between the four radio cores were constant to 0.02 mas, the phase-referencing positional accuracy obtained at 23 and 43 GHz among the sources, suggesting that once a radio core is identified, it remains stationary in the sky to this accuracy. Other radio components in two of the four sources had detectable motion in the radio jet direction. Comparison of the 23 and 43 GHz VLBA images with the VLBA 8.6 GHz images and the ICRF positions suggests that some ICRF positions are dominated by a moving jet component; hence, they can be displaced up to 0.5 mas from the radio core and may also reflect the motion of the jet component. Future astrometric efforts to determine a more accurate quasar reference frame at 23 and 43 GHz and from the VLBI2010 project are discussed, and supporting VLBA or European VLBI Network observations of ICRF sources at 43 GHz are recommended in order to determine the internal structure of the sources. A future collaboration between the radio (ICRF) and the optical frame of GAIA is discussed.
Accuracy of indexing coverage information as reported by serials sources.
Eldredge, J D
1993-01-01
This article reports on the accuracy of indexing service coverage information listed in three serials sources: Ulrich's International Periodicals Directory, SERLINE, and The Serials Directory. The titles studied were randomly selected journals that began publication in either 1981 or 1986. Aggregate results reveal that these serials sources perform at 92%, 97%, and 95% levels of accuracy respectively. When the results are analyzed by specific indexing services by year, the performance scores ranged from 80% to 100%. All three serials sources tend to underreport index coverage. The author advances five recommendations for improving index coverage accuracy and four specific proposals for future research. The results suggest that, for the immediate future, librarians should treat index coverage information reported in these three serials sources with some skepticism. PMID:8251971
Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed
2013-03-01
Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clément, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and 4-year-olds were asked to endorse and imitate one of two actors performing an unfamiliar action, one actor who was unconventional but successful and one who was conventional but unsuccessful. These data demonstrated that children preferred endorsing and imitating the unconventional but successful actor. Results suggest that when the accuracy and conventionality of a source are put into conflict, children may give priority to accuracy over conventionality when estimating the source's reliability and, ultimately, when deciding who to trust.
Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian
2015-01-01
Background: Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly in the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods: This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings: Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation: The variation in AHRS accuracy observed between the different systems and over time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions: Mean accuracies obtained under the Gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use.
However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their full potential in capturing clinical outcomes. PMID:25811838
Independent Evaluation of The Bay Area Supply Depot Consolidation Prototype
1991-12-01
extra inventory to be added to the system. In effect, receipt processing timeliness balances the cost of receiving economically with the cost of holding... that could not be found because of incorrect balance information; the ICP thinks the stock is there, but the warehouse worker cannot locate it. It is a... reflect the overall accuracy of the balance or location. While balance accuracy is also an important measure of record accuracy, it is not included here
Reputation-Based Secure Sensor Localization in Wireless Sensor Networks
He, Jingsha; Xu, Jing; Zhu, Xingye; Zhang, Yuqiang; Zhang, Ting; Fu, Wanqing
2014-01-01
Location information of sensor nodes in wireless sensor networks (WSNs) is very important, for it makes information that is collected and reported by the sensor nodes spatially meaningful for applications. Since most current sensor localization schemes rely on location information that is provided by beacon nodes for the regular sensor nodes to locate themselves, the accuracy of localization depends on the accuracy of location information from the beacon nodes. Therefore, the security and reliability of the beacon nodes become critical in the localization of regular sensor nodes. In this paper, we propose a reputation-based security scheme for sensor localization to improve the security and the accuracy of sensor localization in hostile or untrusted environments. In our proposed scheme, the reputation of each beacon node is evaluated based on a reputation evaluation model so that regular sensor nodes can get credible location information from highly reputable beacon nodes to accomplish localization. We also perform a set of simulation experiments to demonstrate the effectiveness of the proposed reputation-based security scheme. And our simulation results show that the proposed security scheme can enhance the security and, hence, improve the accuracy of sensor localization in hostile or untrusted environments. PMID:24982940
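As a rough illustration of the idea above, location information from highly reputable beacon nodes can be weighted more heavily than information from low-reputation ones. The weighted-centroid scheme below is a simplified stand-in for the paper's reputation evaluation model, and the beacon coordinates and reputation values are hypothetical:

```python
def weighted_centroid(beacons):
    """Estimate a sensor node's position as the reputation-weighted centroid
    of beacon-reported positions. Each beacon is (x, y, reputation), with
    reputation in (0, 1]; low-reputation beacons contribute little."""
    total = sum(rep for _, _, rep in beacons)
    x = sum(bx * rep for bx, _, rep in beacons) / total
    y = sum(by * rep for _, by, rep in beacons) / total
    return x, y

# Three trusted beacons near the node and one low-reputation
# (possibly compromised) beacon reporting a distant position:
beacons = [(0, 0, 0.9), (10, 0, 0.9), (5, 10, 0.9), (100, 100, 0.1)]
print(weighted_centroid(beacons))
```

With equal weights the malicious beacon would pull the estimate far off; down-weighting it by reputation keeps the estimate near the honest beacons, which is the effect the proposed scheme aims for.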
Testing the exclusivity effect in location memory.
Clark, Daniel P A; Dunn, Andrew K; Baguley, Thom
2013-01-01
There is growing literature exploring the possibility of parallel retrieval of location memories, although this literature focuses primarily on the speed of retrieval with little attention to the accuracy of location memory recall. Baguley, Lansdale, Lines, and Parkin (2006) found that when a person has two or more memories for an object's location, their recall accuracy suggests that only one representation can be retrieved at a time (exclusivity). This finding is counterintuitive given evidence of non-exclusive recall in the wider memory literature. The current experiment explored the exclusivity effect further and aimed to promote an alternative outcome (i.e., independence or superadditivity) by encouraging the participants to combine multiple representations of space at encoding or retrieval. This was encouraged by using anchor (points of reference) labels that could be combined to form a single strongly associated combination. It was hypothesised that the ability to combine the anchor labels would allow the two representations to be retrieved concurrently, generating higher levels of recall accuracy. The results demonstrate further support for the exclusivity hypothesis, showing no significant improvement in recall accuracy when there are multiple representations of a target object's location as compared to a single representation.
75 FR 57465 - Sunshine Act Meeting; Open Commission Meeting; Thursday, September 23, 2010
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
... WIRELINE COMPETITION. TITLE: Schools and Libraries Universal Service Support Mechanism (CC Docket No. 02-6...). PUBLIC SAFETY & HOMELAND SECURITY. TITLE: Wireless E911 Location Accuracy Requirements (PS Docket No. 07-114)...
Maden, Orhan; Balci, Kevser Gülcihan; Selcuk, Mehmet Timur; Balci, Mustafa Mücahit; Açar, Burak; Unal, Sefa; Kara, Meryem; Selcuk, Hatice
2015-12-01
The aim of this study was to investigate the accuracy of three algorithms in predicting accessory pathway locations in adult patients with Wolff-Parkinson-White syndrome in a Turkish population. A total of 207 adult patients with Wolff-Parkinson-White syndrome were retrospectively analyzed. The most preexcited 12-lead electrocardiogram in sinus rhythm was used for analysis. Two investigators blinded to the patient data used three algorithms to predict accessory pathway location. Among all locations, 48.5% were left-sided, 44% were right-sided, and 7.5% were located in the midseptum or anteroseptum. When only exact locations were accepted as a match, predictive accuracy was 71.5% for Chiang, 72.4% for d'Avila, and 71.5% for Arruda. Predictive accuracy did not differ between the algorithms (p = 1.000; p = 0.875; p = 0.885, respectively). The best algorithm for predicting right-sided, left-sided, and anteroseptal or midseptal accessory pathways was Arruda (p < 0.001). Arruda was significantly better than d'Avila in predicting adjacent sites (p = 0.035), and the percentage of contralateral-site predictions was higher with d'Avila than with Arruda (p = 0.013). All algorithms were similar in predicting accessory pathway location, and predictive accuracy was lower than previously reported by the algorithms' authors. However, by accessory pathway site, the algorithm designed by Arruda et al. gave better predictions than the other algorithms, and using it may provide advantages before a planned ablation.
Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F
2014-01-01
Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
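The error figures quoted above come from comparing modelled locations against interpolated GPS "truth" by great-circle distance. A small sketch of that comparison step (the function names and the 5 km tolerance mirror the abstract's reporting; the rest is an assumption):

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between coordinate arrays (degrees)."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp = p2 - p1
    dl = np.radians(lon2) - np.radians(lon1)
    a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * R * np.arcsin(np.sqrt(a))

def track_error_stats(est, gps, within_km=5.0):
    """Error of modelled locations against matched GPS positions:
    mean, standard deviation, and the fraction within a tolerance
    (cf. the <5 km figure reported in the abstract)."""
    d = haversine_km(est[:, 0], est[:, 1], gps[:, 0], gps[:, 1])
    return d.mean(), d.std(), np.mean(d < within_km)
```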
NASA Astrophysics Data System (ADS)
Torosean, Sason; Flynn, Brendan; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason; Axelsson, Johan; Pogue, Brian W.
2012-02-01
An ultrasound-coupled, handheld-probe-based optical fluorescence molecular tomography (FMT) system has been in development for the purpose of quantifying the production of Protoporphyrin IX (PPIX) in aminolevulinic acid (ALA)-treated basal cell carcinoma (BCC) in vivo. The design couples fiber-based spectral sampling of PPIX fluorescence emission with a high-frequency ultrasound imaging system, allowing regionally localized fluorescence intensities to be quantified [1]. The optical data are obtained by sequential excitation of the tissue with a 633 nm laser at four source locations, with parallel detection at each of five interspersed detection locations. This method of acquisition permits fluorescence detection at both superficial and deep locations in the ultrasound field. The optical boundary data, the tissue layers segmented from the ultrasound image, and diffusion theory are used to estimate the fluorescence in the tissue layers. To improve the recovery of the PPIX fluorescence signal, eliminating tissue autofluorescence is of great importance. Here the approach was to utilize measurements that straddled the steep Q-band excitation peak of PPIX, via the integration of an additional laser source exciting at 637 nm, a wavelength with a 2-fold lower PPIX excitation value than 633 nm. The autofluorescence spectrum acquired with the 637 nm laser is then used to spectrally decouple the fluorescence data and produce an accurate fluorescence emission signal, because the two wavelengths have very similar autofluorescence but substantially different PPIX excitation levels. The accuracy of this method, using a single source-detector pair setup, is verified through animal tumor model experiments, and the result is compared to different methods of fluorescence signal recovery.
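The dual-wavelength decoupling can be written as a two-equation linear model: both excitations see the same autofluorescence, but PPIX is excited about twice as efficiently at 633 nm. The sketch below solves that model; the exact fitting procedure in the paper is spectral, so this scalar-per-channel version is an illustrative assumption:

```python
import numpy as np

def decouple_ppix(s_633, s_637, excitation_ratio=2.0):
    """Separate the PPIX component from tissue autofluorescence using two
    excitations with similar autofluorescence but different PPIX excitation
    efficiency. Assumed model (not the paper's exact procedure):
        s_633 = k * F + AF
        s_637 = (k / excitation_ratio) * F + AF
    Solving the pair gives the PPIX signal up to the scale factor k."""
    s_633 = np.asarray(s_633, dtype=float)
    s_637 = np.asarray(s_637, dtype=float)
    kF = (s_633 - s_637) / (1.0 - 1.0 / excitation_ratio)
    AF = s_633 - kF
    return kF, AF
```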
Enhanced anatomical calibration in human movement analysis.
Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio
2007-07-01
The representation of human movement requires knowledge of both movement and morphology of bony segments. The determination of subject-specific morphology data and their registration with movement data is accomplished through an anatomical calibration procedure (calibrated anatomical systems technique: CAST). This paper describes a novel approach to this calibration (UP-CAST) which, as compared with normally used techniques, achieves better repeatability and a shorter application time, and can be effectively performed by non-skilled examiners. Instead of the manual location of prominent bony anatomical landmarks, the description of which is affected by subjective interpretation, a large number of unlabelled points is acquired over prominent parts of the subject's bone, using a wand fitted with markers. A digital model of a template-bone is then submitted to isomorphic deformation and re-orientation to optimally match the above-mentioned points. The locations of anatomical landmarks are automatically made available. The UP-CAST was validated considering the femur as a paradigmatic case. Intra- and inter-examiner repeatability of the identification of anatomical landmarks was assessed both in vivo, using average weight subjects, and on bare bones. Accuracy of the identification was assessed using the anatomical landmark locations manually located on bare bones as reference. The repeatability of this method was markedly better than that reported in the literature for conventional palpation (ranges: 0.9-7.6 mm and 13.4-17.9, respectively). Accuracy showed, on average, a maximal error of 11 mm. Results suggest that the principal source of variability resides in the discrepancy between the subject's and template bone morphology and not in inter-examiner differences. The UP-CAST anatomical calibration could be considered a promising alternative to conventional calibration, contributing to a more repeatable 3D human movement analysis.
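The matching step, fitting a scaled and re-oriented template bone to digitized surface points, can be illustrated with a least-squares similarity transform (Umeyama/Kabsch). This is a simplified stand-in that assumes known point correspondences, whereas UP-CAST handles unlabelled points:

```python
import numpy as np

def similarity_fit(template, digitized):
    """Least-squares scale s, rotation R, translation t aligning
    corresponding template points to digitized points (row vectors):
    digitized ~ s * template @ R + t."""
    T, D = np.asarray(template, float), np.asarray(digitized, float)
    tc, dc = T.mean(0), D.mean(0)
    A, B = T - tc, D - dc                 # centre both clouds
    U, S, Vt = np.linalg.svd(A.T @ B)     # cross-covariance SVD
    sign = np.sign(np.linalg.det(U @ Vt))
    d = np.ones(T.shape[1]); d[-1] = sign # exclude reflections
    R = (U * d) @ Vt
    s = (S * d).sum() / (A**2).sum()      # isotropic scale
    t = dc - s * tc @ R
    return s, R, t
```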
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
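The spectrum derivation amounts to fitting energy-bin weights so that a weighted sum of basis PDD curves reproduces the measured PDD, via Levenberg-Marquardt. A toy version with a hand-rolled LM loop is sketched below; the basis curves, starting weights, and damping schedule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_spectrum_lm(measured_pdd, basis_pdds, iters=50, lam0=1e-3):
    """Toy Levenberg-Marquardt fit of energy-bin weights w such that
    sum_j w[j] * basis_pdds[j] matches a measured PDD curve."""
    B = np.asarray(basis_pdds, float).T          # (n_depths, n_bins)
    y = np.asarray(measured_pdd, float)
    w = np.full(B.shape[1], 1.0 / B.shape[1])    # uniform start
    lam = lam0
    cost = np.sum((B @ w - y)**2)
    for _ in range(iters):
        r = B @ w - y                            # residual
        J = B                                    # Jacobian (linear model)
        H = J.T @ J + lam * np.eye(B.shape[1])   # damped normal matrix
        step = np.linalg.solve(H, J.T @ r)
        w_new = np.clip(w - step, 0.0, None)     # keep weights physical
        new_cost = np.sum((B @ w_new - y)**2)
        if new_cost < cost:                      # accept step, relax damping
            w, cost, lam = w_new, new_cost, lam * 0.5
        else:                                    # reject step, damp harder
            lam *= 10.0
    return w / w.sum()                           # normalized spectrum
```

In practice the basis PDDs would be pre-computed mono-energetic depth-dose curves; here any linearly independent curves demonstrate the mechanics.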
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
NASA Astrophysics Data System (ADS)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.
2015-10-01
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
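The data-fusion idea, using radiograph-derived attenuation to predict how a candidate source would appear to the passive detectors, can be illustrated with a crude one-dimensional grid search over source cell and strength under a Poisson counting model. This is a heavily simplified sketch of the inversion concept, not the paper's transport model; the geometry and parameter names are assumptions:

```python
import numpy as np

def locate_source(counts, mu, cell_cm=10.0):
    """Most likely source cell and strength in a 1-D container, given
    detector counts at both ends and per-cell attenuation coefficients
    (per cm) estimated from a radiograph."""
    n = len(mu)
    best = (None, None, -np.inf)
    for j in range(n):                       # candidate source cell
        # transmission toward the left and right end detectors
        t_left = np.exp(-np.sum(mu[:j]) * cell_cm)
        t_right = np.exp(-np.sum(mu[j + 1:]) * cell_cm)
        t = np.array([t_left, t_right])
        s_hat = np.sum(counts) / np.sum(t)   # ML strength for Poisson counts
        lam = s_hat * t                      # expected counts
        loglik = np.sum(counts * np.log(lam) - lam)
        if loglik > best[2]:
            best = (j, s_hat, loglik)
    return best[0], best[1]
```

Shielded regions (large `mu`) suppress the predicted counts, so a weak passive signal behind dense cargo no longer pulls the estimate toward the detectors, which is the qualitative benefit the abstract reports.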
Auditory and visual localization accuracy in young children and adults.
Martin, Karen; Johnstone, Patti; Hedrick, Mark
2015-06-01
This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design with repeated measures was used to determine the effects of age and stimulus type on localization accuracy, and to compare the effects of stimulus order (light first/last) and varying versus fixed intensity sound, in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year-olds. Adults performed better on the sound localization task when the light localization task occurred first.
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Analysis of the geophysical data using a posteriori algorithms
NASA Astrophysics Data System (ADS)
Voskoboynikova, Gyulnara; Khairetdinov, Marat
2016-04-01
The monitoring, prediction, and prevention of extraordinary natural and technogenic events are among the priority problems of our time. Such events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and the numerous quarry blasts in open coal mines that provoke technogenic earthquakes. Monitoring proceeds through a number of successive stages, including remote registration of event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems of determining the geographic location and time of the registered event are solved. Improving the accuracy of parameter estimation from the original records under high noise is therefore an important problem. As is known, the main measurement errors arise from external noise, differences between the real and model structures of the medium, imprecise timing at the event epicentre, and instrumental errors. We therefore propose and investigate a posteriori algorithms that are more accurate than known algorithms. They are based on a combination of a discrete optimization method and a fractal approach for joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for monitoring the borehole seismic source location in production drilling.
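Arrival-time estimation in noisy records is the core measurement step described above. As a generic stand-in (the paper's posteriori algorithms combine discrete optimization with a fractal measure and are more elaborate), a short-term/long-term energy-ratio onset picker illustrates the task:

```python
import numpy as np

def pick_arrival(signal, sta=10, lta=100, fs=1.0):
    """STA/LTA-style onset picker: returns the time (samples / fs) where
    the short-term to long-term average amplitude ratio peaks, i.e. the
    steepest energy onset. Window lengths are illustrative."""
    x = np.abs(np.asarray(signal, float))
    csum = np.concatenate(([0.0], np.cumsum(x)))
    ratio = np.zeros(len(x))
    for i in range(lta, len(x) - sta):
        s = (csum[i + sta] - csum[i]) / sta       # short-term average
        l = (csum[i] - csum[i - lta]) / lta       # long-term average
        ratio[i] = s / l if l > 0 else 0.0
    return np.argmax(ratio) / fs
```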
Bulashevska, Alla; Eils, Roland
2006-06-14
The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location for a given protein when only the amino acid sequence of the protein is known. Although many efforts have been made to predict subcellular location from sequence information only, there is the need for further research to improve the accuracy of prediction. A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis protein dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve prediction accuracy for classes with few training sequences and is therefore useful for datasets with an imbalanced distribution of classes. This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with the previously reported approaches in terms of prediction accuracies as empirical results indicate. The code for the software is available upon request.
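The base classifier, a Bayesian classifier built on a Markov chain over the amino-acid sequence, can be sketched as below. This shows only the per-class Markov model and maximum-posterior decision; HensBC's hierarchical ensemble and recursion are not reproduced, and the smoothing choice is an assumption:

```python
import numpy as np

def train_markov(seqs, alphabet):
    """First-order Markov chain over sequence symbols: transition
    probabilities with add-one (Laplace) smoothing."""
    idx = {a: i for i, a in enumerate(alphabet)}
    n = len(alphabet)
    counts = np.ones((n, n))                 # Laplace smoothing
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def classify(seq, models, alphabet, priors=None):
    """Assign seq to the class whose Markov model gives the highest
    posterior (uniform class priors by default)."""
    idx = {a: i for i, a in enumerate(alphabet)}
    scores = []
    for k, P in enumerate(models):
        ll = sum(np.log(P[idx[a], idx[b]]) for a, b in zip(seq, seq[1:]))
        if priors is not None:
            ll += np.log(priors[k])
        scores.append(ll)
    return int(np.argmax(scores))
```

Real protein models would use the 20-letter amino-acid alphabet; a two-letter alphabet suffices to show the mechanics.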
77 FR 43536 - Wireless E911 Phase II Location Accuracy Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-25
... Docket No. 07-114; FCC 11-107] Wireless E911 Phase II Location Accuracy Requirements AGENCY: Federal...- 2413, or email: [email protected]fcc.gov . SUPPLEMENTARY INFORMATION: This document announces that, on... Commission's Order, FCC 11-107, published at 76 FR 59916, September 28, 2011. The OMB Control Number is 3060...
Location of planar targets in three space from monocular images
NASA Technical Reports Server (NTRS)
Cornils, Karin; Goode, Plesent W.
1987-01-01
Many pieces of existing and proposed space hardware that would be targets of interest for a telerobot can be represented as planar or near-planar surfaces. Examples include the biostack modules on the Long Duration Exposure Facility, the panels on Solar Max, large diameter struts, and refueling receptacles. Robust and temporally efficient methods for locating such objects with sufficient accuracy are therefore worth developing. Two techniques that derive the orientation and location of an object from its monocular image are discussed and the results of experiments performed to determine translational and rotational accuracy are presented. Both the quadrangle projection and elastic matching techniques extract three-space information using a minimum of four identifiable target points and the principles of the perspective transformation. The selected points must describe a convex polygon whose geometric characteristics are prespecified in a data base. The rotational and translational accuracy of both techniques was tested at various ranges. This experiment is representative of the sensing requirements involved in a typical telerobot target acquisition task. Both techniques determined target location to an accuracy sufficient for consistent and efficient acquisition by the telerobot.
NASA Technical Reports Server (NTRS)
Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
X-Ray Optics: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Zhang, William W.
2010-01-01
X-ray astronomy started with a small collimated proportional counter atop a rocket in the early 1960s. It was immediately recognized that focusing X-ray optics would drastically improve both source location accuracy and source detection sensitivity. In the past 5 decades, X-ray astronomy has made significant strides in achieving better angular resolution, large photon collection area, and better spectral and timing resolutions, culminating in the three currently operating X-ray observatories: Chandra, XMM/Newton, and Suzaku. In this talk I will give a brief history of X-ray optics, concentrating on the characteristics of the optics of these three observatories. Then I will discuss current X-ray mirror technologies being developed in several institutions. I will end with a discussion of the optics for the International X-ray Observatory that I have been developing at Goddard Space Flight Center.
Identifying and mitigating errors in satellite telemetry of polar bears
Arthur, Stephen M.; Garner, Gerald W.; Olson, Tamara L.
1998-01-01
Satellite radiotelemetry is a useful method of tracking movements of animals that travel long distances or inhabit remote areas. However, the logistical constraints that encourage the use of satellite telemetry also inhibit efforts to assess accuracy of the resulting data. To investigate effectiveness of methods that might be used to improve the reliability of these data, we compared 3 sets of criteria designed to select the most plausible locations of polar bears (Ursus maritimus) that were tracked using satellite radiotelemetry in the Bering, Chukchi, East Siberian, Laptev, and Kara seas during 1988-93. We also evaluated several indices of location accuracy. Our results suggested that, although indices could provide information useful in evaluating location accuracy, no index or set of criteria was sufficient to identify all the implausible locations. Thus, it was necessary to examine the data and make subjective decisions about which locations to accept or reject. However, by using a formal set of selection criteria, we simplified the task of evaluating locations and ensured that decisions were made consistently. This approach also enabled us to evaluate biases that may be introduced by the criteria used to identify location errors. For our study, the best set of selection criteria comprised: (1) rejecting locations for which the distance to the nearest other point from the same day was >50 km; (2) determining the highest accuracy code (NLOC) for a particular day and rejecting locations from that day with lesser values; and (3) from the remaining locations for each day, selecting the location closest to the location chosen for the previous transmission period. Although our selection criteria seemed unlikely to bias studies of habitat use or geographic distribution, basing selection decisions on distances between points might bias studies of movement rates or distances. 
It is unlikely that any set of criteria will be best for all situations; to make efficient use of data and minimize bias, these rules must be tailored to specific study objectives.
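The three selection criteria enumerated in the abstract translate directly into a filtering routine. The sketch below assumes a simple data layout (a dict of daily fixes as `(lat, lon, nloc)` tuples); the structure and function names are illustrative, while the 50 km cutoff and the NLOC rule follow the abstract:

```python
import numpy as np

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon, ...) tuples."""
    R = 6371.0
    lat1, lon1, lat2, lon2 = map(np.radians, (p[0], p[1], q[0], q[1]))
    a = (np.sin((lat2 - lat1) / 2)**2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2)**2)
    return 2 * R * np.arcsin(np.sqrt(a))

def select_daily(fixes):
    """Apply the paper's three criteria to {day: [(lat, lon, nloc), ...]}:
    (1) drop fixes whose nearest same-day fix is >50 km away, (2) keep only
    the highest accuracy code (NLOC) per day, (3) take the fix closest to
    the previous day's selection."""
    chosen, prev = {}, None
    for day in sorted(fixes):
        pts = fixes[day]
        if len(pts) > 1:                            # criterion 1
            pts = [p for p in pts
                   if min(haversine_km(p, q)
                          for q in fixes[day] if q is not p) <= 50]
        if not pts:
            continue
        best_nloc = max(p[2] for p in pts)          # criterion 2
        pts = [p for p in pts if p[2] == best_nloc]
        if prev is None:                            # criterion 3
            sel = pts[0]
        else:
            sel = min(pts, key=lambda p: haversine_km(p, prev))
        chosen[day], prev = sel, sel
    return chosen
```

As the authors note, distance-based rules like criterion 1 can bias movement-rate estimates, so the thresholds would need tailoring to the study objectives.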
Rethinking Indoor Localization Solutions Towards the Future of Mobile Location-Based Services
NASA Astrophysics Data System (ADS)
Guney, C.
2017-11-01
Satellite navigation systems with GNSS-enabled devices, such as smartphones and car navigation systems, have changed the way users travel in outdoor environments. GNSS is generally not well suited to indoor location and navigation for two reasons: first, GNSS does not provide the high level of accuracy that indoor applications need; second, poor coverage of satellite signals in indoor environments degrades its accuracy further. So rather than using GNSS satellites within closed environments, existing indoor navigation solutions rely heavily on installed sensor networks. There is a high demand for accurate positioning in wireless networks in GNSS-denied environments, yet current wireless indoor positioning systems cannot satisfy the challenging needs of indoor location-aware applications. Nevertheless, access to a user's location indoors is increasingly important in the development of context-aware applications that increase business efficiency. This study examines how current wireless location sensing systems can be tailored and integrated for specific applications, like smart cities/grids/buildings/cars and IoT applications, in GNSS-deprived areas.
Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier
2011-01-01
This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system is carried out fusing two sensorial sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate the cumulative error inherent to dead-reckoning; whereas the odometry source provides less pose uncertainty in short trajectories. A discrete Extended Kalman Filter, customized for this application, is used in order to accomplish this aim under real time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low cost laser range systems and robot built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications. PMID:22164079
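The fusion described above, odometry driving the prediction and the laser-landmark fix correcting dead-reckoning drift, follows the standard extended Kalman filter cycle. A minimal planar sketch is given below; the unicycle motion model, the full-pose laser measurement, and the noise matrices are illustrative assumptions, not the paper's customized filter:

```python
import numpy as np

def ekf_step(x, P, u, z, Q, R, dt=0.1):
    """One predict/update cycle of a planar EKF. State x = (x, y, theta);
    odometry u = (v, w) drives the prediction; z is an absolute pose fix
    from the laser/landmark system."""
    v, w = u
    th = x[2]
    # predict (unicycle motion model)
    x_pred = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    # update (laser observes the full pose, so H = I)
    H = np.eye(3)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

Between laser fixes only the predict half runs, so the covariance grows with dead-reckoning; each landmark observation shrinks it again, which is the drift-compensation effect the abstract describes.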
NASA Astrophysics Data System (ADS)
Goldstein, Janna; Veitch, John; Sesana, Alberto; Vecchio, Alberto
2018-04-01
Super-massive black hole binaries are expected to produce a gravitational wave (GW) signal in the nano-Hertz frequency band which may be detected by pulsar timing arrays (PTAs) in the coming years. The signal is composed of both stochastic and individually resolvable components. Here we develop a generic Bayesian method for the analysis of resolvable sources based on the construction of `null-streams' which cancel the part of the signal held in common for each pulsar (the Earth-term). For an array of N pulsars there are N - 2 independent null-streams that cancel the GW signal from a particular sky location. This method is applied to the localisation of quasi-circular binaries undergoing adiabatic inspiral. We carry out a systematic investigation of the scaling of the localisation accuracy with signal strength and number of pulsars in the PTA. Additionally, we find that source sky localisation with the International PTA data release one is vastly superior to what is achieved by its constituent regional PTAs.
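The N - 2 null-streams can be illustrated with a small linear-algebra sketch. The assumed formulation (consistent with, but not spelled out in, the abstract): each pulsar's Earth-term response is d_i(t) = Fp[i] h+(t) + Fc[i] h×(t) + noise, so the common signal lives in the 2-D span of the antenna-pattern vectors Fp, Fc in R^N, and the orthogonal complement supplies N - 2 independent signal-free combinations.

```python
import numpy as np

def null_streams(data, Fp, Fc):
    """Project (N, T) pulsar time series onto the (N-2)-dim null space of the
    Earth-term signal subspace spanned by antenna patterns Fp, Fc (length N)."""
    A = np.column_stack([Fp, Fc])                  # (N, 2) signal subspace
    U, _, _ = np.linalg.svd(A, full_matrices=True)
    B = U[:, 2:]                                   # orthonormal null-space basis
    return B.T @ data                              # (N-2, T) null streams
```

Any Earth-term signal from the assumed sky location cancels exactly in the output, leaving only noise (and any mismatch if the trial sky location is wrong, which is what drives the localisation).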
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-11-13
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
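The fingerprint-matching core of the system above can be sketched with a plain normalized cross correlation between the on-line RSS samples and each reference-point fingerprint. This is only the basic NCC the paper says FNCC matches in accuracy; the FNCC speed-up itself, and the Kalman/map filtering stage, are not reproduced here.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two RSS vectors (1 = identical shape)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def locate(online_rss, fingerprints, coords):
    """Return the coordinates of the reference point whose stored fingerprint
    best matches the on-line RSS observation."""
    scores = [ncc(online_rss, fp) for fp in fingerprints]
    return coords[int(np.argmax(scores))]
```

A realistic system would match against many on-line samples and RSS variances, as the abstract describes, rather than a single vector.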
Bayesian Inference for Source Reconstruction: A Real-World Application
Yee, Eugene; Hoffman, Ian; Ungar, Kurt
2014-01-01
This paper applies a Bayesian probabilistic inferential methodology for the reconstruction of the location and emission rate from an actual contaminant source (emission from the Chalk River Laboratories medical isotope production facility) using a small number of activity concentration measurements of a noble gas (Xenon-133) obtained from three stations that form part of the International Monitoring System radionuclide network. The sampling of the resulting posterior distribution of the source parameters is undertaken using a very efficient Markov chain Monte Carlo technique that utilizes a multiple-try differential evolution adaptive Metropolis algorithm with an archive of past states. It is shown that the principal difficulty in the reconstruction lay in the correct specification of the model errors (both scale and structure) for use in the Bayesian inferential methodology. In this context, two different measurement models for incorporation of the model error of the predicted concentrations are considered. The performance of both of these measurement models with respect to their accuracy and precision in the recovery of the source parameters is compared and contrasted. PMID:27379292
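The Bayesian reconstruction above can be caricatured with a toy Metropolis sampler over source location and emission rate (x, y, q). Everything here is an assumption for illustration: the inverse-square dilution forward model and Gaussian likelihood stand in for an atmospheric-dispersion model, and plain Metropolis stands in for the paper's far more efficient multiple-try differential evolution adaptive Metropolis scheme.

```python
import numpy as np

def forward(theta, sensors):
    """Toy dispersion model: concentration ~ q / (distance^2 + 1)."""
    x, y, q = theta
    d2 = (sensors[:, 0] - x) ** 2 + (sensors[:, 1] - y) ** 2 + 1.0
    return q / d2

def metropolis(obs, sensors, n_steps=5000, sigma=0.05, seed=0):
    """Sample the posterior over (x, y, q) with a random-walk Metropolis chain."""
    rng = np.random.default_rng(seed)

    def logpost(t):
        if t[2] <= 0:                       # emission rate must be positive
            return -np.inf
        r = obs - forward(t, sensors)
        return -0.5 * np.sum(r ** 2) / sigma ** 2

    theta = np.array([0.0, 0.0, 1.0])       # deliberately wrong starting guess
    lp = logpost(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.normal(scale=[0.2, 0.2, 0.2])
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

The paper's central point survives even in this caricature: the sampler only behaves well when the likelihood's error scale and structure (here the single `sigma`) are specified sensibly.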
ERIC Educational Resources Information Center
Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed
2013-01-01
Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…
Application of Geodetic Techniques for Antenna Positioning in a Ground Penetrating Radar Method
NASA Astrophysics Data System (ADS)
Mazurkiewicz, Ewelina; Ortyl, Łukasz; Karczewski, Jerzy
2018-03-01
The accuracy of determining the location of detectable subsurface objects is related to the accuracy of the position of georadar traces in a given profile, which in turn depends on the precise assessment of the distance covered by an antenna. During georadar measurements the distance covered by an antenna can be determined with a variety of methods. Recording traces at fixed time intervals is the simplest of them. A method which allows for more precise location of georadar traces is recording them at fixed distance intervals, which can be performed with the use of distance triggers (such as a measuring wheel or a hip chain). The search for methods eliminating these discrepancies can be based on the measurement of spatial coordinates of georadar traces conducted with the use of modern geodetic techniques for 3-D location. These techniques include, above all, GNSS satellite systems and electronic tachymeters. Application of the above-mentioned methods increases the accuracy of the spatial location of georadar traces. The article presents the results of georadar measurements performed with the use of geodetic techniques in the test area of Mydlniki in Krakow. A Leica System 1200 satellite receiver and a Leica 1102 TCRA electronic tachymeter were integrated with the georadar equipment. The accuracy of locating chosen subsurface structures was compared.
NASA Astrophysics Data System (ADS)
Ziegler, A.; Balch, R. S.; van Wijk, J.
2015-12-01
Farnsworth Oil Field in North Texas hosts an ongoing carbon capture, utilization, and storage project. This study is focused on passive seismic monitoring at the carbon injection site to measure, locate, and catalog any induced seismic events. A Geometrics Geode system is being utilized for continuous recording of the passive seismic downhole bore array in a monitoring well. The array consists of 3-component dual Geospace OMNI-2400 15 Hz geophones with a vertical spacing of 30.5 m. Downhole temperature and pressure are also monitored. Seismic data are recorded continuously at a rate of over 900 GB per month and must be archived and reviewed. A Short Term Average/Long Term Average (STA/LTA) algorithm was evaluated for its ability to search for events, including identification and quantification of any false positive events. It was determined that the algorithm was not appropriate for event detection with the background level of noise at the field site and for the recording equipment as configured. Alternatives are being investigated. The final intended outcome of the passive seismic monitoring is to mine the continuous database, develop a catalog of microseismic events and locations, and determine whether there is any relationship to CO2 injection in the field. Identifying the location of any microseismic events will allow for correlation with carbon injection locations and previously characterized geological and structural features such as faults and paleoslopes. Additionally, the borehole array has recorded over 1200 active sources, with three sweeps at each source location, that were acquired during a nearby 3D VSP. These data were evaluated for their usability and location within an effective radius of the array, were stacked to improve the signal-to-noise ratio, and are used to calibrate a full-field velocity model to enhance event location accuracy. Funding for this project is provided by the U.S. Department of Energy under Award No. DE-FC26-05NT42591.
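The STA/LTA detector the abstract evaluates (and rejects for this site's noise) is simple enough to sketch. This is a generic, centered-window variant for illustration; real implementations usually use a trailing LTA window and per-site tuned window lengths and thresholds.

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Ratio of short-term to long-term average of signal energy (centered windows)."""
    energy = trace.astype(float) ** 2
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
    lta[lta == 0] = np.finfo(float).tiny   # avoid division by zero in dead traces
    return sta / lta

def detect(trace, n_sta=20, n_lta=200, threshold=3.0):
    """Return sample indices where the STA/LTA ratio exceeds the trigger threshold."""
    return np.flatnonzero(sta_lta(trace, n_sta, n_lta) > threshold)
```

The abstract's finding is easy to reproduce in spirit: when the background energy is high or bursty, the LTA rises with it and genuine microseismic events no longer clear a fixed threshold, which is why the detector proved unsuitable as configured.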
Accuracy and consistency of weights provided by home bathroom scales
2013-01-01
Background Self-reported body weight is often used for calculation of Body Mass Index because it is easy to collect. Little is known about sources of error introduced by using bathroom scales to measure weight at home. The objective of this study was to evaluate the accuracy and consistency of digital versus dial-type bathroom scales commonly used for self-reported weight. Methods Participants brought functioning bathroom scales (n = 18 dial-type, n = 43 digital-type) to a central location. Trained researchers assessed accuracy and consistency using certified calibration weights at 10 kg, 25 kg, 50 kg, 75 kg, 100 kg, and 110 kg. Data also were collected on frequency of calibration, age and floor surface beneath the scale. Results All participants reported using their scale on hard surface flooring. Before calibration, all digital scales displayed 0, but dial scales displayed a mean absolute initial weight of 0.95 (1.9 SD) kg. Digital scales accurately weighed test loads whereas dial-type scale weights differed significantly (p < 0.05). Imprecision of dial scales was significantly greater than that of digital scales at all weights (p < 0.05). Accuracy and precision did not vary by scale age. Conclusions Digital home bathroom scales provide sufficiently accurate and consistent weights for public health research. Reminders to zero scales before each use may further improve accuracy of self-reported weight. PMID:24341761
Linear estimation of coherent structures in wall-bounded turbulence at Re τ = 2000
NASA Astrophysics Data System (ADS)
Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.
2018-04-01
The estimation problem for a fully-developed turbulent channel flow at Re τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers that need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost.
The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the wind tunnel (WT) because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction. On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. 
The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) in the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis (i.e., fewer engineers are needed to debate the Cp distribution shape), and it generates Cp predictions at any location on the wing. However, the costs that come with these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.).
Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies
NASA Astrophysics Data System (ADS)
Hutchings, L. J.; Ryan, J.
2010-12-01
Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the "real world", and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to recover them. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. "Real" travel times are perturbed with noise, and hypocenters are perturbed to replicate a starting location away from the "true" location; inversion is then performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km and was then decreased to 500 m, 100 m, 50 m, and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit.
The limiting factors are the size of the computers needed for the large arrays in the inversion and a realistic number of stations and events needed to provide the data.
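The paper's accuracy criterion (perturb a model by 50%, invert, and check recovery within 5%) can be demonstrated on a toy problem. This is a drastic simplification: straight rays through a 1-D layered model and a single linear least-squares solve stand in for the 3-D pseudo-bending ray tracer and the iterative SimulPS/TomoDD inversions, and all names below are made up for the sketch.

```python
import numpy as np

def invert_slowness(L, t):
    """Least-squares layer slownesses from ray path lengths L (rays x layers)
    and observed travel times t, via t = L @ s."""
    s, *_ = np.linalg.lstsq(L, t, rcond=None)
    return s

rng = np.random.default_rng(1)
true_s = 1.0 / np.array([3.0, 4.0, 5.0, 6.0])   # layer slownesses (s/km)
L = rng.uniform(0.5, 2.0, size=(20, 4))          # path length of each ray per layer
t_obs = L @ true_s                               # noise-free synthetic travel times

recovered = invert_slowness(L, t_obs)
max_err = np.max(np.abs(recovered - true_s) / true_s)  # the paper's 5% criterion
```

In this noise-free linear setting recovery is essentially exact; the paper's point is precisely that with noise, sparse ray coverage, and a nonlinear 3-D forward problem, meeting the 5% criterion at fine grid spacing requires many more stations and events.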
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 20 [PS Docket No. 07-114; WC Docket No. 05-196; FCC 10-177; DA 10-2267] Wireless E911 Location Accuracy Requirements; E911 Requirements for IP-Enabled Service Providers AGENCY: Federal Communications Commission. ACTION: Proposed rule; extension of comment...
Stakeholder needs for ground penetrating radar utility location
NASA Astrophysics Data System (ADS)
Thomas, A. M.; Rogers, C. D. F.; Chapman, D. N.; Metje, N.; Castle, J.
2009-04-01
In the UK alone there are millions of miles of underground utilities with often inaccurate, incomplete, or non-existent location records that cause significant health and safety problems for maintenance personnel, together with the potential for large, unnecessary, social and financial costs for their upkeep and repair. This has led to increasing use of Ground Penetrating Radar (GPR) for utility location, but without detailed consideration of the degree of location accuracy required by stakeholders — i.e. all those directly involved in streetworks ranging from utility owners to contractors and surveyors and government departments. In order to ensure that stakeholder requirements are incorporated into a major new UK study, entitled Mapping the Underworld, a questionnaire has been used to determine the current and future utility location accuracy requirements. The resulting data indicate that stakeholders generally require location tolerances better than 100 mm at depths usually extending down to 3 m, and more occasionally to 5 m, below surface level, providing significant challenges to GPR if their needs are to be met in all ground conditions. As well as providing much useful data on stakeholder needs, these data are also providing a methodology for assessment of GPR utility location in terms of the factor most important to them — the degree to which the equipment provides location within their own accuracy requirements.
Spectral triangulation: a 3D method for locating single-walled carbon nanotubes in vivo
NASA Astrophysics Data System (ADS)
Lin, Ching-Wei; Bachilo, Sergei M.; Vu, Michael; Beckingham, Kathleen M.; Bruce Weisman, R.
2016-05-01
Nanomaterials with luminescence in the short-wave infrared (SWIR) region are of special interest for biological research and medical diagnostics because of favorable tissue transparency and low autofluorescence backgrounds in that region. Single-walled carbon nanotubes (SWCNTs) show well-known sharp SWIR spectral signatures and therefore have potential for noninvasive detection and imaging of cancer tumours, when linked to selective targeting agents such as antibodies. However, such applications face the challenge of sensitively detecting and localizing the source of SWIR emission from inside tissues. A new method, called spectral triangulation, is presented for three dimensional (3D) localization using sparse optical measurements made at the specimen surface. Structurally unsorted SWCNT samples emitting over a range of wavelengths are excited inside tissue phantoms by an LED matrix. The resulting SWIR emission is sampled at points on the surface by a scanning fibre optic probe leading to an InGaAs spectrometer or a spectrally filtered InGaAs avalanche photodiode detector. Because of water absorption, attenuation of the SWCNT fluorescence in tissues is strongly wavelength-dependent. We therefore gauge the SWCNT-probe distance by analysing differential changes in the measured SWCNT emission spectra. SWCNT fluorescence can be clearly detected through at least 20 mm of tissue phantom, and the 3D locations of embedded SWCNT test samples are found with sub-millimeter accuracy at depths up to 10 mm. Our method can also distinguish and locate two embedded SWCNT sources at distinct positions.
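The distance-gauging idea above (wavelength-dependent attenuation encodes path length) reduces, in the simplest Beer-Lambert view, to a two-wavelength ratio. This sketch is an assumed simplification of the paper's differential spectral analysis; the attenuation coefficients and intensities below are invented for illustration.

```python
import numpy as np

def depth_from_ratio(I1, I2, I01, I02, mu1, mu2):
    """Infer path length d from attenuation of two emission wavelengths.

    Model: I_k = I0_k * exp(-mu_k * d), so
           I1/I2 = (I01/I02) * exp(-(mu1 - mu2) * d)
       =>  d = ln((I01/I02) / (I1/I2)) / (mu1 - mu2)
    """
    return np.log((I01 / I02) / (I1 / I2)) / (mu1 - mu2)
```

Because water absorption makes mu strongly wavelength-dependent in the SWIR band, the measured spectral shape alone carries depth information; combining such estimates from several surface probe positions is what yields the 3D localization.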
The Influence of Age and Skull Conductivity on Surface and Subdermal Bipolar EEG Leads
Wendel, Katrina; Väisänen, Juho; Seemann, Gunnar; Hyttinen, Jari; Malmivuo, Jaakko
2010-01-01
Bioelectric source measurements are influenced by the measurement location as well as the conductive properties of the tissues. Volume conductor effects such as the poorly conducting bones or the moderately conducting skin are known to affect the measurement precision and accuracy of surface electroencephalography (EEG) measurements. This paper investigates the influence of age, via skull conductivity, upon surface and subdermal bipolar EEG measurement sensitivity, conducted on two realistic head models from the Visible Human Project. Subdermal electrodes (a.k.a. subcutaneous electrodes) are implanted on the skull beneath the skin, fat, and muscles. We studied the effect of age upon these two electrode types according to scalp-to-skull conductivity ratios of 5, 8, 15, and 30 : 1. The effects on the measurement sensitivity were studied by means of the half-sensitivity volume (HSV) and the region of interest sensitivity ratio (ROISR). The results indicate that subdermal implantation notably enhances the precision and accuracy of EEG measurements, by a factor of eight compared to scalp surface measurements. In summary, the evidence indicates that both surface and subdermal EEG measurements yield recordings with better precision and accuracy in younger patients. PMID:20130812
Generation of irradiance patterns using a semi-spherical meter of two degrees of freedom
NASA Astrophysics Data System (ADS)
Tecpoyotl-Torres, M.; Vera-Dimas, J. G.; Escobedo-Alatorre, J.; Sánchez-Mondragón, J.; Torres-Cisneros, M.; Cabello-Ruiz, R.; Varona, J.
2011-09-01
The meter device presented in this work consists of a photo-detector mounted on the mechanism of a mobile rectangular arc. One stepper motor located on the lateral axis of the device displaces the sensor along a semi-circular trajectory of 170°, almost a half meridian. Another motor located at the base of the device enables 360° rotation of the illumination source under test. This arrangement effectively produces a semi-spherical volume for the sensor to move within. The number of measurement points is determined by programming the two stepper motors. Also, the use of a single photo-sensor ensures uniformity in the measurements. The mechanical structure provides enough rigidity to support the accuracy required by the data acquisition circuitry, which is based on a dsPIC. Measurement of illumination sources of different sizes is possible by using adjustable lengths of the mobile base and the ring, for a maximum lamp length of 0.16 m. Because this work is partially supported by a private entity interested in the characterization of its products, special attention has been given to luminaires based on LED technology with divergent beams. The power received by the detector is used to obtain the irradiance profile of the lighting source under test. The meter device presented herein is a low-cost prototype designed and fabricated using only recyclable materials such as "electronic waste".
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
Performance analysis of multiple Indoor Positioning Systems in a healthcare environment.
Van Haute, Tom; De Poorter, Eli; Crombez, Pieter; Lemic, Filip; Handziski, Vlado; Wirström, Niklas; Wolisz, Adam; Voigt, Thiemo; Moerman, Ingrid
2016-02-03
The combination of an aging population and nursing staff shortages implies the need for more advanced systems in the healthcare industry. Many key enablers for the optimization of healthcare systems require provisioning of location awareness for patients (e.g. with dementia), nurses, doctors, assets, etc. Therefore, Indoor Positioning Systems (IPSs) will be indispensable in healthcare systems. However, although many IPSs have been proposed in literature, most of these have been evaluated in non-representative environments such as office buildings rather than in a hospital. To remedy this, the paper evaluates the performance of existing IPSs in an operational modern healthcare environment: the "Sint-Jozefs kliniek Izegem" hospital in Belgium. The evaluation (data collection and data processing) is executed using a standardized methodology and evaluates the point accuracy, room accuracy and latency of multiple IPSs. To evaluate the solutions, the position of a stationary device was requested at 73 evaluation locations. By using the same evaluation locations for all IPSs, the performance of all systems could be compared objectively. Several trends can be identified, such as the fact that Wi-Fi based fingerprinting solutions have the best accuracy results (point accuracy of 1.21 m and room accuracy of 98%); however, they require calibration before use and need 5.43 s to estimate the location. On the other hand, proximity based solutions (based on sensor nodes) are significantly cheaper to install, do not require calibration and still obtain acceptable room accuracy results. In conclusion, Wi-Fi based solutions have the most potential for an indoor positioning service when accuracy is the most important metric. Applying the fingerprinting approach with an anchor installed in every two rooms is the preferred solution for a hospital environment.
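The two headline metrics reported above are straightforward to compute; a minimal sketch, assuming point accuracy means the mean Euclidean error over the evaluation locations and room accuracy the fraction of estimates assigned to the correct room:

```python
import numpy as np

def point_accuracy(est, truth):
    """Mean Euclidean distance (m) between estimated and true positions,
    each an (n, 2) array of planar coordinates."""
    return float(np.mean(np.linalg.norm(est - truth, axis=1)))

def room_accuracy(est_rooms, true_rooms):
    """Fraction of position estimates that fall in the correct room."""
    return float(np.mean(np.array(est_rooms) == np.array(true_rooms)))
```

Evaluating every candidate IPS at the same 73 locations, as the paper does, is what makes these two numbers directly comparable across systems.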
NASA Astrophysics Data System (ADS)
Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian
2011-11-01
Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for designs that minimize this undesired phenomenon. Cavitation onset is determined more accurately acoustically than visually. However, if other parts of the model begin to cavitate before the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims to develop an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located using acoustic data collected with hydrophones and analyzed with signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with a high-speed camera. Once the technique is refined, testing will be conducted in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).
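A standard building block for hydrophone-based localization of this kind is time-delay estimation by cross-correlation of sensor pairs; the delay converts to a range difference via the sound speed. The abstract does not state the authors' actual pipeline, so this is only an illustrative sketch with a toy "click" signal.

```python
def cross_correlation_lag(a, b):
    """Return the lag (in samples) at which b best aligns with a.

    A positive lag means b is a delayed copy of a.
    """
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        val = sum(
            a[i] * b[i + lag]
            for i in range(n)
            if 0 <= i + lag < n
        )
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# A toy cavitation "click" arriving 3 samples later at the second hydrophone
pulse = [0.0, 1.0, 0.5, 0.2, 0.0, 0.0, 0.0, 0.0]
delayed = [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, 0.2, 0.0]
lag = cross_correlation_lag(pulse, delayed)
# Given the sampling rate fs and sound speed c, the range difference is c * lag / fs
```

In a reverberant tank the direct-path peak must be separated from reflections, which is presumably the hard part the research addresses.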
An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising
Guo, Muran; Chen, Tao; Wang, Ben
2017-01-01
Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to the closed-form expression for the locations of their virtual lags. However, the number of degrees of freedom is limited by the existence of holes in the difference coarray if subspace-based algorithms such as spatial smoothing multiple signal classification (MUSIC) are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limited number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix caused by the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with higher accuracy. The effectiveness of the proposed method rests on the capability of MC to fill in the holes among the virtual sensors and of the denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886
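The holes mentioned above can be seen directly by enumerating the difference coarray of a co-prime pair. This sketch uses one common co-prime layout (M sensors spaced N units apart plus N sensors spaced M apart, sharing the origin); the specific M = 3, N = 5 example is invented for illustration.

```python
def coprime_positions(m, n):
    """Sensor positions (in units of the base spacing) of a co-prime pair:
    one subarray of m sensors spaced n apart, one of n sensors spaced m apart."""
    return sorted(set(range(0, m * n, n)) | set(range(0, m * n, m)))

def difference_coarray(positions):
    """All pairwise differences (the virtual lags)."""
    return sorted({p - q for p in positions for q in positions})

def holes(lags):
    """Integer lags missing from the contiguous span of the coarray."""
    full = set(range(min(lags), max(lags) + 1))
    return sorted(full - set(lags))

pos = coprime_positions(3, 5)   # 7 physical sensors (the origin is shared)
lags = difference_coarray(pos)  # virtual lags from -12 to 12
gaps = holes(lags)              # the holes that interpolation must fill
```

Coarray interpolation, as proposed in the paper, amounts to estimating covariance entries at exactly these missing lags.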
The effects of aging on ERP correlates of source memory retrieval for self-referential information.
Dulas, Michael R; Newsome, Rachel N; Duarte, Audrey
2011-03-04
Numerous behavioral studies have suggested that normal aging negatively affects source memory accuracy for various kinds of associations. Neuroimaging evidence suggests that less efficient retrieval processing (temporally delayed and attenuated) may contribute to these impairments. Previous aging studies have not compared source memory accuracy and the corresponding neural activity for different kinds of source details; namely, those that have been encoded via a more or less effective strategy. Thus, it is not yet known whether encoding source details in a self-referential manner, a strategy suggested to promote successful memory in the young and old, may enhance source memory accuracy and reduce the commonly observed age-related changes in neural activity associated with source memory retrieval. Here, we investigated these issues by using event-related potentials (ERPs) to measure the effects of aging on the neural correlates of successful source memory retrieval ("old-new effects") for objects encoded either self-referentially or self-externally. Behavioral results showed that both young and older adults demonstrated better source memory accuracy for objects encoded self-referentially. ERP results showed that old-new effects emerged earlier for self-referentially encoded items in both groups and that age-related differences in the onset latency of these effects were reduced for self-referentially, compared to self-externally, encoded items. These results suggest that the implementation of an effective encoding strategy, like self-referential processing, may lead to more efficient retrieval, which in turn may improve source memory accuracy in both young and older adults.
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction, and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations, estimated from spherical-wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
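The minimum variance distortionless response (MVDR) beamformer named above has a standard closed form, w = R^-1 a / (a^H R^-1 a), where R is the sensor covariance and a the steering vector. As a rough sketch, here it is for a 2-element array with an invented steering vector and a diagonal (noise-only) covariance; the paper's near-field, spherical-wave weightings are not reproduced.

```python
import cmath

def mvdr_weights_2el(r, a):
    """MVDR weights w = R^-1 a / (a^H R^-1 a) for a 2-element array.

    r: 2x2 complex covariance matrix (list of lists), a: steering vector.
    """
    det = r[0][0] * r[1][1] - r[0][1] * r[1][0]
    rinv = [[ r[1][1] / det, -r[0][1] / det],
            [-r[1][0] / det,  r[0][0] / det]]
    ra = [rinv[0][0] * a[0] + rinv[0][1] * a[1],
          rinv[1][0] * a[0] + rinv[1][1] * a[1]]
    denom = a[0].conjugate() * ra[0] + a[1].conjugate() * ra[1]
    return [ra[0] / denom, ra[1] / denom]

# Steering vector with a quarter-wavelength phase delay on the second element
a = [1.0 + 0j, cmath.exp(-1j * cmath.pi / 4)]
# Diagonal covariance (uncorrelated noise): MVDR reduces to the conventional beamformer
r = [[2.0 + 0j, 0j], [0j, 2.0 + 0j]]
w = mvdr_weights_2el(r, a)
# Distortionless constraint: the response in the look direction, w^H a, equals 1
response = w[0].conjugate() * a[0] + w[1].conjugate() * a[1]
```

With interference present, R acquires off-diagonal terms and the same formula automatically steers nulls toward the interferers, which is the side-lobe suppression the abstract refers to.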
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
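The core computation behind any TTD model is a convolution: the concentration arriving at the well is the tracer input history weighted by the travel-time distribution. As a minimal sketch (using the simple exponential TTD rather than the paper's SDM, and an invented rising NO3 input):

```python
import math

def exponential_ttd(mean_age, t):
    """Exponential travel-time distribution g(t) = exp(-t/tau) / tau."""
    return math.exp(-t / mean_age) / mean_age

def predicted_concentration(input_history, mean_age, dt=1.0):
    """Discrete convolution of the input history (oldest first) with the TTD:
    c_out(now) = sum over tau of g(tau) * c_in(now - tau) * dt."""
    n = len(input_history)
    return sum(
        exponential_ttd(mean_age, tau * dt) * input_history[n - 1 - tau] * dt
        for tau in range(n)
    )

# Hypothetical NO3 input rising over 50 years; assumed mean groundwater age of 10 years
history = [1.0 + 0.1 * year for year in range(50)]
c_now = predicted_concentration(history, mean_age=10.0)
# c_now lags the present-day input (5.9) because older, lower-concentration
# water still makes up part of the mixture
```

Calibrating a TTD model, as in the study, means adjusting parameters such as the mean age until these predicted concentrations match observed tracer data.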
Hadad, K; Zohrevand, M; Faghihi, R; Sedighi Pashaki, A
2015-03-01
HDR brachytherapy is one of the most common methods of nasopharyngeal cancer treatment. In this method, depending on how advanced the tumor is, a dose of 2 to 6 Gy is prescribed as intracavitary brachytherapy. Due to the high dose rate and the tumor location, accuracy evaluation of the treatment planning system (TPS) is particularly important. Common methods used in TPS dosimetry are based on computations in a homogeneous phantom. Heterogeneous phantoms, especially patient-specific voxel phantoms, can increase dosimetric accuracy. In this study, using CT images taken from a patient and ctcreate (part of the DOSXYZnrc computational code), a patient-specific phantom was made. The dose distribution was computed by DOSXYZnrc and compared with that of the TPS. Also, by extracting the absorbed dose of the voxels in the treatment volume, dose-volume histograms (DVHs) were plotted and compared with Oncentra™ TPS DVHs. The results were compared with data from the Oncentra™ treatment planning system, and it was observed that the TPS calculation predicts a lower dose in areas near the source and a higher dose in areas far from the source relative to the MC code. Absorbed dose values in the voxels also showed that the D90 value reported by the TPS is 40% higher than that of the Monte Carlo method. Today, most treatment planning systems use the TG-43 protocol. This protocol may result in errors such as neglecting tissue heterogeneity, scattered radiation and applicator attenuation. Due to these errors, the AAPM has emphasized departing from the TG-43 protocol and moving toward the new brachytherapy protocol TG-186, in which a patient-specific phantom is used and heterogeneities are accounted for in dosimetry.
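The D90 comparison above is read off the cumulative DVH: D90 is the dose received by at least 90% of the target volume. A rough illustration, assuming equal-volume voxels and using toy dose values rather than the study's data:

```python
def d90(voxel_doses):
    """Dose received by at least 90% of the target volume (D90).

    With equal-volume voxels this is the dose such that 90% of the voxels
    receive that dose or more.
    """
    doses = sorted(voxel_doses)
    index = min(int(0.10 * len(doses)), len(doses) - 1)
    return doses[index]

def cumulative_dvh(voxel_doses, dose_levels):
    """Fraction of the volume receiving at least each dose level."""
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= level) / n for level in dose_levels]

doses = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5]  # Gy, toy values
d90_value = d90(doses)
dvh = cumulative_dvh(doses, [2.0, 4.0, 6.0])
```

Running the same extraction on TPS and Monte Carlo dose grids, as the study does, makes discrepancies like the reported 40% D90 difference directly visible.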
Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns
Teng, Dongdong; Chen, Dihu; Tan, Hongzhou
2015-01-01
The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only few of these developments deal with the challenges posed by the profile (non-frontal face). In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against any changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of eye-center locations which are candidates for the actual position of the eye-center. Among these locations, the estimated locations which minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation for the eye centers locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been mentioned in previous works. Through extensive experimentation, we show that the proposed method can achieve a significant improvement in accuracy and robustness over state-of-the-art techniques, with our method ranking second in terms of accuracy. 
According to our implementation on a PC with a Xeon 2.5 GHz CPU, the frame rate of the eye-tracking process can reach 38 Hz. PMID:26426929
Waldhauser, F.; Ellsworth, W.L.
2000-01-01
We have developed an efficient method to determine high-resolution hypocenter locations over large distances. The location method incorporates ordinary absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. Residuals between observed and theoretical travel-time differences (or double-differences) are minimized for pairs of earthquakes at each station while linking together all observed event-station pairs. A least-squares solution is found by iteratively adjusting the vector difference between hypocentral pairs. The double-difference algorithm minimizes errors due to unmodeled velocity structure without the use of station corrections. Because catalog and cross-correlation data are combined into one system of equations, interevent distances within multiplets are determined to the accuracy of the cross-correlation data, while the relative locations between multiplets and uncorrelated events are simultaneously determined to the accuracy of the absolute travel-time data. Statistical resampling methods are used to estimate data accuracy and location errors. Uncertainties in double-difference locations are improved by more than an order of magnitude compared to catalog locations. The algorithm is tested, and its performance is demonstrated, on two clusters of earthquakes located on the northern Hayward fault, California. There it collapses the diffuse catalog locations into sharp images of seismicity and reveals horizontal lineations of hypocenters that define the narrow regions on the fault where stress is released by brittle failure.
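The double-difference residual minimized above has a simple per-station form for an event pair (i, j): the observed differential travel time minus the predicted one. A toy arithmetic sketch (invented travel times, not real catalog data; the actual algorithm then solves a large least-squares system over all pairs):

```python
def double_difference(obs_i, obs_j, calc_i, calc_j):
    """Double-difference residual at one station for an event pair (i, j):
    dr = (t_i - t_j)_observed - (t_i - t_j)_calculated."""
    return (obs_i - obs_j) - (calc_i - calc_j)

# Toy travel times (s) for two nearby events recorded at three stations
observed = {"STA1": (4.02, 4.10), "STA2": (6.51, 6.48), "STA3": (8.20, 8.31)}
predicted = {"STA1": (4.00, 4.07), "STA2": (6.50, 6.50), "STA3": (8.22, 8.30)}

residuals = {
    sta: double_difference(*observed[sta], *predicted[sta])
    for sta in observed
}
# The relocation step adjusts the hypocentral separation of the pair to shrink
# these residuals; path effects common to both events cancel in the differences.
```

Because the common path from the source region to each station cancels, the residuals are insensitive to unmodeled velocity structure, which is the key property the abstract highlights.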
Mennis, Jeremy; Mason, Michael; Ambrus, Andreea; Way, Thomas; Henry, Kevin
2017-09-01
Geographic ecological momentary assessment (GEMA) combines ecological momentary assessment (EMA) with global positioning systems (GPS) and geographic information systems (GIS). This study evaluates the spatial accuracy of GEMA location data and bias due to subject and environmental data characteristics. Using data for 72 subjects enrolled in a study of urban adolescent substance use, we compared the GPS-based location of EMA responses in which the subject indicated they were at home to the geocoded home address. We calculated the percentage of EMA locations within a sixteenth, eighth, quarter, and half miles from the home, and the percentage within the same tract and block group as the home. We investigated if the accuracy measures were associated with subject demographics, substance use, and emotional dysregulation, as well as environmental characteristics of the home neighborhood. Half of all subjects had more than 88% of their EMA locations within a half mile, 72% within a quarter mile, 55% within an eighth mile, 50% within a sixteenth of a mile, 83% in the correct tract, and 71% in the correct block group. There were no significant associations with subject or environmental characteristics. Results support the use of GEMA for analyzing subjects' exposures to urban environments. Researchers should be aware of the issue of spatial accuracy inherent in GEMA, and interpret results accordingly. Understanding spatial accuracy is particularly relevant for the development of 'ecological momentary interventions' (EMI), which may depend on accurate location information, though issues of privacy protection remain a concern.
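The within-distance percentages above reduce to computing great-circle distances between each EMA fix and the geocoded home. A minimal sketch using the haversine formula; the coordinates and the half-mile threshold example are invented, not the study's data.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fraction_within(points, home, threshold_miles):
    """Share of EMA fixes falling within a radius of the geocoded home."""
    hits = sum(
        1 for lat, lon in points
        if haversine_miles(lat, lon, *home) <= threshold_miles
    )
    return hits / len(points)

home = (39.9526, -75.1652)  # hypothetical geocoded home address
fixes = [(39.9526, -75.1652), (39.9530, -75.1660), (39.9800, -75.2000)]
share_half_mile = fraction_within(fixes, home, 0.5)  # 2 of 3 fixes are near home
```

The tract and block-group accuracy measures in the study require a point-in-polygon test against census geographies instead of a distance threshold.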
Functional connectivity analysis in EEG source space: The choice of method
Knyazeva, Maria G.
2017-01-01
Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conduction effects. An alternative, source-space analysis of FC, is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of two source FC methods, the inverse-based source FC (ISFC) and the cortical partial coherence (CPC). To examine the effects of localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations by the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and therefore should be the preferred method. In studies based on ldEEG, CPC is the method of choice. PMID:28727750
The Accuracy of GBM GRB Localizations
NASA Astrophysics Data System (ADS)
Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.
2010-03-01
We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the Flight Software on board GBM, those produced automatically with ground software in near real time, and those produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
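A common way to frame such an analysis is to let statistical and systematic errors add in quadrature and infer the systematic term from offsets to the reference locations. The sketch below is a heavily simplified 1-D Gaussian stand-in (grid MAP estimate with a flat prior, invented offsets); the actual GBM analysis works with angular errors and a more detailed model.

```python
import math

def log_likelihood(sigma_sys, offsets, sigma_stat):
    """Gaussian log-likelihood of the observed offsets when statistical and
    systematic errors add in quadrature: var = stat^2 + sys^2."""
    var = sigma_stat ** 2 + sigma_sys ** 2
    return sum(-0.5 * (x ** 2 / var + math.log(2 * math.pi * var)) for x in offsets)

def map_systematic(offsets, sigma_stat, grid_max=10.0, steps=1000):
    """Maximum a posteriori sigma_sys on a grid (flat prior)."""
    best, best_ll = 0.0, float("-inf")
    for k in range(steps + 1):
        s = grid_max * k / steps
        ll = log_likelihood(s, offsets, sigma_stat)
        if ll > best_ll:
            best, best_ll = s, ll
    return best

# Toy offsets (degrees) between GBM and reference locations, with an assumed
# 1-degree statistical error: the observed spread clearly exceeds it.
offsets = [2.1, -3.4, 0.5, 4.0, -2.8, 1.9, -0.7, 3.2]
sigma_sys = map_systematic(offsets, sigma_stat=1.0)
```

The inferred sigma_sys is roughly the quadrature difference between the sample spread and the statistical error, which is the quantity the Circular-versus-Notice comparison constrains.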
Accuracy of piezoelectric pedometer and accelerometer step counts.
Cruz, Joana; Brooks, Dina; Marques, Alda
2017-04-01
This study aimed to assess the step-count accuracy of a piezoelectric pedometer (Yamax PW/EX-510) worn at different body locations and of a triaxial accelerometer (GT3X+), to compare device accuracy, and to identify the preferred location(s) to wear a pedometer. Sixty-three healthy adults (45.8±20.6 years old) wore 7 pedometers (neck, lateral right and left of the waist, front right and left of the waist, front pockets of the trousers) and 1 accelerometer (over the right hip) while walking 120 m at slow, self-preferred/normal and fast paces. Steps were recorded. Participants identified their preferred location(s) to wear the pedometer. Absolute percent error (APE) and the Bland and Altman (BA) method were used to assess device accuracy (criterion measure: manual counts), and the BA method for device comparisons. Pedometer APE was below 3% at normal and fast paces regardless of wearing location, but higher at slow pace (4.5-9.1%). Pedometers were more accurate at the front waist and inside the pockets. Accelerometer APE was higher than pedometer APE (P<0.05); nevertheless, limits of agreement between devices were relatively small. Preferred wearing locations were inside the front right (N.=25) and left (N.=20) pockets of the trousers. Yamax PW/EX-510 pedometers may be preferable to GT3X+ accelerometers for counting steps, as they provide more accurate results. These pedometers should be worn at the front right or left positions of the waist or inside the front pockets of the trousers.
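The APE metric used above is straightforward: the device count's deviation from the manual (criterion) count, as a percentage. A minimal sketch with invented counts, not the study's measurements:

```python
def absolute_percent_error(device_steps, manual_steps):
    """APE relative to the manual (criterion) count, in percent."""
    return abs(device_steps - manual_steps) / manual_steps * 100.0

# Hypothetical counts over a 120 m walk at normal pace
manual = 160
ape_pedometer = absolute_percent_error(157, manual)      # 1.875% error
ape_accelerometer = absolute_percent_error(148, manual)  # 7.5% error
```

APE summarizes magnitude of error only; the Bland and Altman method used alongside it additionally reveals systematic bias and the limits of agreement between devices.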
Sea ice motion measurements from Seasat SAR images
NASA Technical Reports Server (NTRS)
Leberl, F.; Raggam, J.; Elachi, C.; Campbell, W. J.
1983-01-01
Data from the Seasat synthetic aperture radar (SAR) experiment are analyzed in order to determine the accuracy of this information for mapping the distribution of sea ice and its motion. Data from observations of sea ice in the Beaufort Sea from seven sequential orbits of the satellite were selected to study the capabilities and limitations of spaceborne radar application to sea-ice mapping. Results show that there is no difficulty in identifying homologue ice features on sequential radar images and that the accuracy is entirely controlled by the accuracy of the orbit data and the geometric calibration of the sensor. Conventional radargrammetric methods are found to serve well for satellite radar ice mapping, while ground control points can be used to calibrate the ice location and motion measurements in cases where orbit data and sensor calibration are lacking. The ice motion was determined to be approximately 6.4 + or - 0.5 km/day. In addition, the accuracy of pixel location was determined over land areas. The use of one control point in 10,000 sq km produced an accuracy of about + or - 150 m, while with a higher density of control points (7 in 1000 sq km) the location accuracy improves to the image resolution of + or - 25 m. This is found to be applicable for both optical and digital data.
Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.
Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter
2016-01-01
Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength, or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (the percentage of the reconstructed activity corresponding to simulated activity) and Recall (the percentage of the simulated sources reconstructed) were calculated for each CSD algorithm. While sLORETA has the best performance when only one source is present, LORETA with p equal to 1.5 performs better when two or more sources are present. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform the other algorithms. If only the strongest source is of interest, sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.
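The Precision and Recall definitions above can be computed directly from the sets of active locations. A minimal sketch with invented cortical labels standing in for the study's simulated and reconstructed source maps:

```python
def precision_recall(reconstructed, simulated):
    """Precision: fraction of reconstructed activity that was actually simulated.
    Recall: fraction of simulated sources that were reconstructed.
    Both arguments are sets of active cortical locations (e.g. vertex ids)."""
    true_positives = reconstructed & simulated
    precision = len(true_positives) / len(reconstructed) if reconstructed else 0.0
    recall = len(true_positives) / len(simulated) if simulated else 0.0
    return precision, recall

simulated = {"M1_left", "S1_left", "SMA"}
reconstructed = {"M1_left", "SMA", "V1"}  # one spurious source, one missed
p, r = precision_recall(reconstructed, simulated)
```

An algorithm that smears activity widely scores high Recall but low Precision; one that reconstructs only the strongest source does the opposite, which is why the paper reports both.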
NASA Astrophysics Data System (ADS)
WANG, X.; Wei, S.; Bradley, K. E.
2017-12-01
Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al. 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-size earthquakes (M>4.5) to refine their source parameters, which include locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties of the horizontal location, depth and dip angle estimates are as small as 5 km, 2 km and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or reactivation of an old normal fault buried beneath the forearc basin.
We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along-strike variation. The refined megathrust geometry will improve our understanding of the tectonic setting in this region, and place further constraints on rupture processes of the hazardous megathrust.
SU-E-J-107: The Impact of the Tumor Location to Deformable Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Y; Tohoku University School of Medicine, Sendai, Miyagi; Tachibana, H
2015-06-15
Purpose: For four-dimensional planning and adaptive radiotherapy, the accuracy of deformable image registration (DIR) is essential. We evaluated the accuracy of an in-house program based on the freely downloadable DIR software library NiftyReg and two commercially available DIR software programs (MIM Maestro and Velocity AI) in lung SBRT patients. In addition, the relationship between the tumor location and the accuracy of the DIRs was investigated. Methods: The free-form deformation was implemented in the in-house program and MIM. Velocity was based on the B-spline algorithm. The accuracy of the three programs was evaluated by comparing the structures on 4DCT image datasets between the peak-inhale and peak-exhale phases. The dice similarity coefficient (DSC) and normalized DSC (NDSC) were measured for the gross tumor volumes from 19 lung SBRT patients. Results: The median DSC values were 0.885, 0.872 and 0.798 for the in-house program, MIM and Velocity, respectively. Velocity showed a significant difference compared with the others. The median NDSC values were 1.027, 1.005 and 0.946 for the in-house program, MIM and Velocity, respectively. This indicated that the spatial overlap agreement between the reference and the deformed structure for the in-house program and MIM was comparable, with accuracy within 1 mm uncertainty; there was a larger discrepancy, within 1-2 mm uncertainty, for Velocity. The in-house program and MIM showed NDSC values higher than the median when the GTV was not attached to the chest wall or diaphragm (p < 0.05). However, there was no relationship between accuracy and tumor location for Velocity. Conclusion: The choice of DIR program yields different accuracy, and accuracy may be reduced when the tumor is located near or attached to the chest wall or diaphragm.
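The DSC used above is computed directly from two voxelized structures: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch with toy masks, not the study's GTVs; the NDSC additionally normalizes the DSC by the value achievable within a stated spatial tolerance and is omitted here.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two voxel sets:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy GTV masks as sets of voxel indices: reference structure vs. a deformed
# structure shifted by one voxel along x
reference = {(i, j, 0) for i in range(4) for j in range(4)}      # 16 voxels
deformed = {(i + 1, j, 0) for i in range(4) for j in range(4)}   # 16 voxels
dsc = dice(reference, deformed)  # 12 overlapping voxels
```

A one-voxel misregistration of a small structure already drops the DSC noticeably, which is why small lung GTVs are a demanding test of DIR accuracy.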
ERTS imagery for ground-water investigations
Moore, Gerald K.; Deutsch, Morris
1975-01-01
ERTS imagery offers the first opportunity to apply moderately high-resolution satellite data to the nationwide study of water resources. This imagery is both a tool and a form of basic data. Like other tools and basic data, it should be considered for use in ground-water investigations. The main advantage of its use will be to reduce the need for field work. In addition, however, broad regional features may be seen easily on ERTS imagery, whereas they would be difficult or impossible to see on the ground or on low-altitude aerial photographs. Some present and potential uses of ERTS imagery are to locate new aquifers, to study aquifer recharge and discharge, to estimate ground-water pumpage for irrigation, to predict the location and type of aquifer management problems, and to locate and monitor strip mines which commonly are sources for acid mine drainage. In many cases, boundaries which are gradational on the ground appear to be sharp on ERTS imagery. Initial results indicate that the accuracy of maps produced from ERTS imagery is completely adequate for some purposes.
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA-assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator, with a significantly more expensive optical encoder used for comparison.
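The PCA pre-filtering step described above can be sketched with plain NumPy before handing the reduced features to an ANN regressor; this is a simplified stand-in (the sensor model, dimensions, and all names are assumptions, not the paper's actual field model):

```python
import numpy as np

def pca_reduce(X, k):
    """Project sensor readings X (n_samples x n_sensors) onto the
    top-k principal components (centred, computed via SVD)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Rows of Vt are principal axes, ordered by explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mu

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(200, 1))   # hidden 1-D actuator position
# 9 "sensors": smooth nonlinear functions of position, standing in for
# magnetic field measurements at 9 locations
X = np.hstack([np.sin(3 * pos + p) for p in np.linspace(0, 2, 9)])
X += 0.01 * rng.normal(size=X.shape)     # Gaussian measurement noise
Z, axes, mu = pca_reduce(X, k=2)
print(Z.shape)  # (200, 2): compact features for an ANN position map
```

The reduced features Z (rather than the raw 9-sensor vector) would then be the inputs to the field-position neural network.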
This is a provisional dataset that contains point locations for all grants given out by the USEPA going back to the 1960s through today. There are many limitations to the data, so it is advised that these metadata be read carefully before use. Although the records for these grant locations are drawn directly from the official EPA grants repository (IGMS, the Integrated Grants Management System), it is important to know that the IGMS was designed for purposes that did not include accurately portraying the grant's place of performance on a map. Instead, the IGMS grant recipient's mailing address is the primary source for grant locations. Particularly for statewide grants that are administered via State and Regional headquarters, the grant location data should not be interpreted as the grant's place of performance. In 2012, a policy was established to start collecting the place of performance as a pilot for newly awarded grants deemed "community-based" in nature, and for these the grant location depicted in this database will be a more reliable indicator of the actual place of performance. As for the locational accuracy of these points, there is no programmatic certification process; however, they are being entered by the Grant Project Officers, who are most familiar with the details of the grants apart from the grantees themselves. Limitations notwithstanding, this is a first-of-breed attempt to map all of the Agency's grants, using the best internal geocoding algorithms available.
US EPA EJ Grants/IGD: PERF_EJ_GRANTS_INT_MV
This is a provisional dataset that contains point locations for all Environmental Justice (EJ) grants given out by the US EPA. There are many limitations to the data, so it is advised that these metadata be read carefully before use. Although the records for these grant locations are drawn directly from the official EPA grants repository (IGMS, the Integrated Grants Management System), it is important to know that the IGMS was designed for purposes that did not include accurately portraying the grant's place of performance on a map. Instead, the IGMS grant recipient's mailing address is the primary source for grant locations. Particularly for statewide grants that are administered via State and Regional headquarters, the grant location data should not be interpreted as the grant's place of performance. In 2012, a policy was established to start collecting the place of performance as a pilot for newly awarded grants deemed "community-based" in nature, and for these the grant location depicted in this database will be a more reliable indicator of the actual place of performance. As for the locational accuracy of these points, there is no programmatic certification process; however, they are being entered by the Grant Project Officers, who are most familiar with the details of the grants apart from the grantees themselves. Limitations notwithstanding, this is a first-of-breed attempt to map all of the Agency's grants, using the best internal geocoding algorithms available.
NASA Astrophysics Data System (ADS)
Alsudani, Ahlam
2018-05-01
In recent years, indoor positioning systems (IPS) have come to play a very important role in environments such as hospitals, airports, and malls. They are used to locate mobile stations, such as people and robots, inside buildings. Some IPS applications include locating an elderly person or child in urgent need of help in a hospital, and emergency situations such as locating firefighters inside a burning building, or a commander locating police officers confronting terrorists inside a building to help expedite evacuation in case one of them needs help. In indoor positioning applications the accuracy should be as high as possible; in other words, the error should be less than 1 meter. The indoor environment is the major challenge to achieving such accuracy. In this paper, we present a novel algorithm to identify line-of-sight (LOS) and non-line-of-sight (NLOS) channels and improve positioning accuracy using ultra-wideband (UWB) technology implemented with DW1000 devices.
Wave-equation migration velocity inversion using passive seismic sources
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which depends greatly on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that is often not available for the low-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial source location and origin-time estimate nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems of all scales from geologic core samples to global seismology.
Measuring and monitoring KIPT Neutron Source Facility Reactivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry; Zhong, Zhaopeng
2015-08-01
Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on developing and constructing a neutron source facility at Kharkov, Ukraine. The facility consists of an accelerator-driven subcritical system. The accelerator has a 100 kW electron beam using 100 MeV electrons. The subcritical assembly has k-eff less than 0.98. To ensure the safe operation of this neutron source facility, the reactivity of the subcritical core has to be accurately determined and continuously monitored. A technique which combines the area-ratio method and the flux-to-current ratio method is proposed to determine the reactivity of the KIPT subcritical assembly at various conditions. In particular, the area-ratio method can determine the absolute reactivity of the subcritical assembly in units of dollars by performing pulsed-neutron experiments. It provides reference reactivities for the flux-to-current ratio method to track and monitor the reactivity deviations from the reference state while the facility is in other operation modes. Monte Carlo simulations are performed to simulate both methods using the numerical model of the KIPT subcritical assembly. It is found that the reactivities obtained from both the area-ratio method and the flux-to-current ratio method are spatially dependent on the neutron detector locations and types. Numerical simulations also suggest optimal neutron detector locations to minimize the spatial effects in the flux-to-current ratio method. The spatial correction factors are calculated using Monte Carlo methods for both measuring methods at the selected neutron detector locations. Monte Carlo simulations are also performed to verify the accuracy of the flux-to-current ratio method in monitoring the reactivity swing during a fuel burnup cycle.
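In its textbook (Sjöstrand) form, the area-ratio part of the combined technique reduces to a single ratio of the areas under the prompt and delayed portions of the pulsed-neutron detector response; a minimal illustration (the function name and numbers are illustrative, and the spatial correction factors computed in the paper are omitted):

```python
def area_ratio_reactivity_dollars(prompt_area, delayed_area):
    """Sjostrand area-ratio method: subcritical reactivity in dollars
    from a pulsed-neutron response, rho($) = -A_prompt / A_delayed."""
    return -prompt_area / delayed_area

# A response whose prompt-decay area is 5x the delayed background
# area corresponds to a reactivity of -5 dollars
print(area_ratio_reactivity_dollars(5.0, 1.0))  # -5.0
```

The flux-to-current ratio method then tracks deviations from such a reference reactivity during operation without repeating the pulsed experiment.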
NASA Astrophysics Data System (ADS)
Nooshiri, Nima; Heimann, Sebastian; Saul, Joachim; Tilmann, Frederik; Dahm, Torsten
2015-04-01
Automatic earthquake locations are sometimes associated with very large residuals, up to 10 s even for clear arrivals, especially at regional stations in subduction zones because of the strongly heterogeneous velocity structure there. Although these residuals are most likely related not to measurement errors but to unmodelled velocity heterogeneity, such stations are usually removed from or down-weighted in the location procedure. While this is possible for large events, it may not be useful if the earthquake is weak. In this case, implementation of travel-time station corrections may significantly improve the automatic locations. Here, the shrinking-box source-specific station term (SSST) method [Lin and Shearer, 2005] has been applied to improve the relative location accuracy of 1678 events that occurred in the Tonga subduction zone between 2010 and mid-2014. Picks were obtained from the GEOFON earthquake bulletin for all available station networks. We calculated a set of timing corrections for each station which vary as a function of source position. A separate time correction was computed for each source-receiver path at the given station by smoothing the residual field over nearby events. We began with a very large smoothing radius essentially encompassing the whole event set and iterated by progressively shrinking the smoothing radius. In this way, we attempted to correct for the systematic errors that are introduced into the locations by inaccuracies in the assumed velocity structure, without solving for a new velocity model itself. One of the advantages of the SSST technique is that the event location part of the calculation is separate from the station term calculation and can be performed using any single-event location method. In this study, we applied a non-linear, probabilistic, global-search earthquake location method using the software package NonLinLoc [Lomax et al., 2000].
The non-linear location algorithm implemented in NonLinLoc is less sensitive to the problem of local misfit minima in the model space. Moreover, the spatial errors estimated by NonLinLoc are much more reliable than those derived by linearized algorithms. According to the obtained results, the root-mean-square (RMS) residual decreased from 1.37 s for the original GEOFON catalog (using a global 1-D velocity model without station specific corrections) to 0.90 s for our SSST catalog. Our results show 45-70% reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations. Additionally, our locations exhibit less scatter in depth and a sharper image of the seismicity associated with the subducting slab compared to the initial locations.
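The shrinking-radius smoothing at the heart of SSST can be sketched as follows; this is a deliberate simplification that holds event locations fixed (real SSST alternates relocation and station-term updates), and all names and radii are illustrative:

```python
import numpy as np

def ssst_station_terms(src_xy, resid, radii=(500.0, 250.0, 100.0, 50.0)):
    """Source-specific station terms for ONE station: smooth the
    travel-time residuals over nearby events while progressively
    shrinking the smoothing radius (km)."""
    corr = np.zeros(len(resid))
    for r in radii:
        adj = resid - corr                       # residuals left to explain
        for i, xy in enumerate(src_xy):
            near = np.linalg.norm(src_xy - xy, axis=1) <= r
            corr[i] += adj[near].mean()          # local mean -> station term
    return corr

# A constant systematic residual (e.g. a slab-related path effect seen
# identically by all events) is fully absorbed into the station terms
rng = np.random.default_rng(0)
events = rng.uniform(0, 300, size=(40, 2))       # event epicentres, km
terms = ssst_station_terms(events, np.full(40, 0.8))
print(np.allclose(terms, 0.8))  # True
```

Each event's corrected arrival time (pick minus its station term) would then be fed back into the single-event locator, NonLinLoc in the study above.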
Development of Murray Loop Bridge for High Induced Voltage
NASA Astrophysics Data System (ADS)
Isono, Shigeki; Kawasaki, Katsutoshi; Kobayashi, Shin-Ichi; Ishihara, Hayato; Chiyajo, Kiyonobu
For cable faults in which the ground-fault resistance is less than 10 MΩ, the Murray loop bridge is an excellent fault locator in terms of location accuracy and convenience. However, when an induced voltage of several hundred volts is picked up from an adjoining single-core cable, fault location with a high-voltage Murray loop bridge becomes difficult. We therefore developed a Murray loop bridge that can be applied even when an induced voltage of several hundred volts occurs on the measured cable. The fault location accuracy of the developed prototype was evaluated on an actual line and on training equipment.
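For reference, the balance condition behind any Murray loop measurement gives the fault distance directly. The textbook form below assumes the faulted cable is looped at the far end with an identical healthy cable of the same length L and uniform resistance per unit length; Ra and Rb are the bridge arm resistances at balance. This sketch ignores the induced-voltage problem the paper addresses:

```python
def murray_fault_distance(L, Ra, Rb):
    """Distance from the measuring end to a ground fault, for a
    Murray loop of two identical cables of length L:
    at bridge balance, x = 2 L * Rb / (Ra + Rb)."""
    return 2.0 * L * Rb / (Ra + Rb)

# 1500 m cable, balance reached at Ra = 700 ohm, Rb = 300 ohm
print(murray_fault_distance(1500.0, 700.0, 300.0))  # 900.0 m
```

With equal arms (Ra = Rb) the formula places the fault at the far-end loop, as expected from symmetry.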
Hg-201(+) Co-Magnetometer for Hg-199(+) Trapped Ion Space Atomic Clocks
NASA Technical Reports Server (NTRS)
Burt, Eric A. (Inventor); Taghavi, Shervin (Inventor); Tjoelker, Robert L. (Inventor)
2011-01-01
Local magnetic field strength in a trapped ion atomic clock is measured in real time, with high accuracy and without degrading clock performance, and the measurement is used to compensate for ambient magnetic field perturbations. First and second isotopes of an element are co-located within the linear ion trap. The first isotope has a resonant microwave transition between two hyperfine energy states, and the second isotope has a resonant Zeeman transition. Optical sources emit ultraviolet light that optically pump both isotopes. A microwave radiation source simultaneously emits microwave fields resonant with the first isotope's clock transition and the second isotope's Zeeman transition, and an optical detector measures the fluorescence from optically pumping both isotopes. The second isotope's Zeeman transition provides the measure of magnetic field strength, and the measurement is used to compensate the first isotope's clock transition or to adjust the applied C-field to reduce the effects of ambient magnetic field perturbations.
Integrated system for automated financial document processing
NASA Astrophysics Data System (ADS)
Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai
1997-02-01
A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petiteau, Antoine; Shang Yu; Babak, Stanislav
Coalescing massive black hole binaries are the strongest and probably the most important gravitational wave sources in the LISA band. The spin and orbital precessions bring complexity into the waveform and make the likelihood surface richer in structure compared with the nonspinning case. We introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the mock LISA data challenge (MLDC3.2). The performance of this method is comparable to, if not better than, already existing algorithms. We have found all five sources present in MLDC3.2 and recovered the coalescence time, chirp mass, mass ratio, and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we have found a large number of widely separated modes in the parameter space with similar maximum likelihood values.
Enhancing source location protection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Chen, Juan; Lin, Zhengkui; Wu, Di; Wang, Bailing
2015-12-01
Wireless sensor networks are widely deployed in the internet of things to monitor valuable objects. Once an object is monitored, the sensor nearest to it, known as the source, informs the base station about the object's information periodically. Attackers can therefore capture the object simply by localizing the source, and many protocols have been proposed to secure the source location. In this paper, however, we show that typical source location protection protocols generate phantom locations that are not only near the source but also highly localized. As a result, attackers can trace the source easily from these phantom locations. To address these limitations, we propose a protocol to enhance source location protection (SLE). With phantom locations far away from the source and widely distributed, SLE improves source location anonymity significantly. Theoretical analysis and simulation results show that SLE provides strong source location privacy preservation and that the average safety period increases by nearly one order of magnitude compared with existing work, at low communication cost.
NASA Astrophysics Data System (ADS)
Carranza, V.; Frausto-Vicencio, I.; Rafiq, T.; Verhulst, K. R.; Hopkins, F. M.; Rao, P.; Duren, R. M.; Miller, C. E.
2016-12-01
Atmospheric methane (CH4) is the second most prevalent anthropogenic greenhouse gas. Improved estimates of CH4 emissions from cities are essential for carbon cycle science and climate mitigation efforts. Development of spatially resolved carbon emissions data sets may offer significant advances in understanding and managing carbon emissions from cities. Urban CH4 emissions in particular require spatially resolved emission maps to help resolve uncertainties in the CH4 budget. This study presents a Geographic Information System (GIS)-based approach to mapping CH4 emissions using locations of infrastructure known to handle and emit methane. We constrain the spatial distribution of sources to the facility level for the major CH4-emitting sources in the South Coast Air Basin. GIS spatial modeling was combined with publicly available datasets to determine the distribution of potential CH4 sources. The datasets were processed and validated to ensure accuracy in the location of individual sources. This information was then used to develop the Vista emissions prior, which is a one-year-long, spatially resolved CH4 emissions estimate. Methane emissions were calculated and spatially allocated to produce a 1 km x 1 km gridded CH4 emission map spanning the Los Angeles Basin. In future work, the Vista CH4 emissions prior will be compared with existing, coarser-resolution emissions estimates and will be evaluated in inverse modeling studies using atmospheric observations. The Vista CH4 emissions inventory presents the first detailed spatial maps of CH4 sources and emissions estimates in the Los Angeles Basin and is a critical step towards sectoral attribution of CH4 emissions at local to regional scales.
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently available instrumentation.
NASA Astrophysics Data System (ADS)
Naren Athreyas, Kashyapa; Gunawan, Erry; Tay, Bee Kiat
2018-07-01
In recent years, climate change and weather have become a major concern affecting daily human life. Modelling and prediction of complex atmospheric processes require extensive theoretical studies and observational analyses to improve prediction accuracy. The RADAGAST campaign, conducted by the ARM climate research facility stationed at Niamey, Niger from January 2006 to January 2007 and aimed at improving West African climate studies, has provided valuable data for research. In this paper, the characteristics and sources of inertia-gravity waves observed over Niamey during the campaign are investigated. The investigation focuses on highlighting the waves generated by the thunderstorms which dominate the tropical region. The stratospheric energy density spectrum is analysed to derive the wave properties. Waves with Eulerian periods from 20 to 50 h occupied most of the spectral power. It was found that the waves observed over Niamey had a dominant eastward propagation, with horizontal wavelengths ranging from 350 to 1400 km and vertical wavelengths ranging from 0.9 to 3.6 km. The GROGRAT model with ERA-Interim data was used to establish the background atmosphere and identify the source locations of the waves. The waves generated by thunderstorms had propagation distances varying from 200 to 5000 km and propagation durations from 2 to 4 days. The horizontal phase speeds varied from 2 to 20 m/s with wavelengths varying from 100 to 1100 km, and the vertical phase speeds from 0.02 to 0.2 m/s with wavelengths from 2 to 15 km at the source point. The majority of sources were located in the South Atlantic Ocean, with waves propagating towards the northeast. This study demonstrates the complex large-scale coupling in the atmosphere.
Accuracy-preserving source term quadrature for third-order edge-based discretization
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Liu, Yi
2017-09-01
In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrero, A; Chen, B; Huang, A
Purpose: In order to investigate novel methods to more accurately estimate the mineral composition of kidney stones using dual-energy CT, it is desirable to be able to combine digital stones of known composition with actual phantom and patient scan data. In this work, we developed and validated a method to insert digital kidney stones into projection data acquired on a dual-source, dual-energy CT system. Methods: Attenuation properties of stones of different mineral composition were computed using tabulated mass attenuation coefficients, the chemical formula for each stone type, and the effective beam energy at each evaluated tube potential. A previously developed method to insert lesions into x-ray CT projection data was extended to include simultaneous dual-energy CT projections acquired on a dual-source gantry (Siemens Somatom Flash). Digital stones were forward projected onto both detectors and the resulting projections added to the physically acquired sinogram data. To validate the accuracy of the technique, digital stones were inserted into different locations in the ACR CT accreditation phantom; low- and high-contrast resolution, CT number accuracy, and noise properties were compared before and after stone insertion. The procedure was repeated for two dual-energy tube potential pairs in clinical use on the scanner, 80/Sn140 kV and 100/Sn140 kV, respectively. Results: The images reconstructed after the insertion of digital kidney stones were consistent with the images reconstructed from the scanner. The largest average CT number difference for the four inserts in the CT number accuracy module of the phantom was 3 HU. Conclusion: A framework was developed and validated for the creation of digital kidney stones of known mineral composition, and their projection-domain insertion into commercial dual-source, dual-energy CT projection data.
This will allow a systematic investigation of the impact of scan and reconstruction parameters on stone attenuation and dual-energy behavior under rigorously controlled conditions. Dr. McCollough receives research support from Siemens Healthcare.
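The forward-projection step of such stone insertion can be illustrated in a simplified parallel-beam geometry: each ray through a homogeneous disk "stone" picks up an extra line integral equal to the attenuation coefficient times the chord length. The real system uses the scanner's fan/cone-beam geometry and energy-dependent attenuation; all names and numbers here are assumptions:

```python
import numpy as np

def add_disk_stone(sino_row, offsets_cm, center_cm, radius_cm, mu_per_cm):
    """Add the line integrals of a homogeneous disk 'stone' to one
    parallel-beam sinogram row: added value = mu * chord length."""
    s = offsets_cm - center_cm
    chords = 2.0 * np.sqrt(np.clip(radius_cm**2 - s**2, 0.0, None))
    return sino_row + mu_per_cm * chords

offsets = np.linspace(-10, 10, 201)   # detector ray offsets, cm
base = np.zeros_like(offsets)         # pre-existing (here: empty) projections
row = add_disk_stone(base, offsets, center_cm=0.0, radius_cm=0.5, mu_per_cm=1.2)
print(row.max())  # central ray ≈ mu * diameter = 1.2 /cm * 1.0 cm = 1.2
```

Repeating this for every view angle and for both source-detector pairs (at their respective effective energies) yields the dual-energy sinograms that are then reconstructed as usual.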
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy at the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis, and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a larger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the optimization objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
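The minimum-norm approach compared above has a compact linear-algebra core: among all source distributions consistent with the sensor data, pick the one with the smallest norm. A toy sketch with a random matrix standing in for a real MEG lead field, and Tikhonov regularization in place of the noise-covariance weighting used in practice:

```python
import numpy as np

def minimum_norm_estimate(G, y, lam=1e-6):
    """Minimum-norm source estimate for the linear model y = G x + noise:
    x_hat = G^T (G G^T + lam I)^-1 y, the smallest-norm x fitting y."""
    n_sensors = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

rng = np.random.default_rng(1)
G = rng.normal(size=(30, 200))      # 30 sensors, 200 candidate sources
x_true = np.zeros(200); x_true[42] = 1.0
y = G @ x_true                      # noiseless sensor measurements
x_hat = minimum_norm_estimate(G, y)
# The estimate reproduces the sensor data almost exactly, but spreads
# energy over correlated sources (the familiar minimum-norm blurring)
print(np.linalg.norm(G @ x_hat - y) / np.linalg.norm(y))  # ~0 (tiny)
```

The ill-posedness discussed in the abstract is visible here: 30 measurements cannot uniquely determine 200 sources, so the prior (smallest norm, sparse priors, or beamformer constraints) decides which solution is reported.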
Monitoring forest dynamics with multi-scale and time series imagery.
Huang, Chunbo; Zhou, Zhixiang; Wang, Di; Dian, Yuanyong
2016-05-01
To monitor forest dynamics and evaluate forest ecosystem services effectively, timely acquisition of spatial and quantitative information on forestland is essential. Here, a new method is proposed for mapping forest cover changes by combining multi-scale satellite remote-sensing imagery with time series data. Using time series Normalized Difference Vegetation Index products derived from Moderate Resolution Imaging Spectroradiometer images (MODIS-NDVI) and Landsat Thematic Mapper/Enhanced Thematic Mapper Plus (TM/ETM+) images as data sources, a hierarchical stepwise analysis from coarse scale to fine scale was developed for detecting areas of forest change. At the coarse scale, MODIS-NDVI data with 1-km resolution were used to detect changes in land cover types, and a land cover change map was constructed using NDVI values from vegetation growing seasons. At the fine scale, based on the coarse-scale results, Landsat TM/ETM+ data with 30-m resolution were used to precisely detect forest change locations and trends by analyzing time series forest vegetation indices (IFZ). The method was tested using data for Hubei Province, China. The MODIS-NDVI data from 2001 to 2012 were used to detect land cover changes, and the overall accuracy was 94.02% at the coarse scale. At the fine scale, the available TM/ETM+ images from vegetation growing seasons between 2001 and 2012 were used to locate and verify forest changes in the Three Gorges Reservoir Area, and the overall accuracy was 94.53%. The accuracy of the two-layer hierarchical monitoring results indicates that the multi-scale monitoring method is feasible and reliable.
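The vegetation index at the core of the coarse-scale step has a simple closed form, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch of the index and of a threshold-based change flag follows; the drop threshold and the change rule are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def forest_loss_mask(ndvi_before, ndvi_after, drop_threshold=0.2):
    """Flag pixels whose growing-season NDVI dropped by more than a threshold
    between two dates (threshold value is illustrative, not from the paper)."""
    before = np.asarray(ndvi_before, dtype=float)
    after = np.asarray(ndvi_after, dtype=float)
    return (before - after) > drop_threshold
```

Applied per pixel to two growing-season composites, the mask marks candidate change areas to be re-examined at the finer Landsat scale.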
NASA Astrophysics Data System (ADS)
Braunmiller, J.; Thompson, G.; McNutt, S. R.
2017-12-01
On 9 January 2014, a magnitude Mw = 5.1 earthquake occurred along the Bahamas-Cuba suture at the northern coast of Cuba, revealing a surprising seismic hazard source for both Cuba and southern Florida, where it was widely felt. Due to its location, the event and its aftershocks (M > 3.5) were recorded only at far distances (300+ km), resulting in high detection thresholds, low location accuracy, and limited source parameter resolution. We use three-component regional seismic data to study the sequence. High-pass filtered seismograms at the closest site in southern Florida are similar in character, suggesting a relatively tight event cluster, and reveal additional, smaller aftershocks not included in the ANSS or ISC catalogs. Aligning on the P arrival and low-pass filtering (T > 10 s) uncovers a surprising polarity flip of the large-amplitude surface waves on vertical seismograms for some aftershocks relative to the main shock. We performed regional moment tensor inversions of the main shock and its largest aftershocks, using complete three-component seismograms from stations distributed throughout the region, to confirm the mechanism changes. Consistent with the GCMT solution, we find an E-W trending normal faulting mechanism for the main event and for one immediate aftershock. Two aftershocks indicate E-W trending reverse faulting with essentially flipped P- and T-axes relative to the normal faulting events (and the same B-axes). Within uncertainties, depths of the two event families are indistinguishable and indicate shallow faulting (<10 km). One intriguing possible interpretation is that both families ruptured the same fault, with the reverse mechanisms compensating for overshooting. However, the activity could also be spatially separated, either vertically (with reverse mechanisms possibly below extension) or laterally.
The shallow source depth and the 200-km long uplifted chain of islands indicate that larger, shallow and thus potentially tsunamigenic earthquakes could occur just offshore of northern Cuba posing a potential hazard to Florida and the Bahamas.
Development of detection and recognition of orientation of geometric and real figures.
Stein, N L; Mandler, J M
1975-06-01
Black and white kindergarten and second-grade children were tested for accuracy of detection and recognition of orientation and location changes in pictures of real-world and geometric figures. No differences were found in accuracy of recognition between the 2 kinds of pictures, but patterns of verbalization differed on specific transformations. Although differences in accuracy were found between kindergarten and second grade on an initial recognition task, practice on a matching-to-sample task eliminated differences on a second recognition task. Few ethnic differences were found on accuracy of recognition, but significant differences were found in amount of verbal output on specific transformations. For both groups, mention of orientation changes was markedly reduced when location changes were present.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
NASA Astrophysics Data System (ADS)
Cao, C.; Lee, X.; Xu, J.
2017-12-01
Unmanned Aerial Vehicles (UAVs), or drones, have been widely used in environmental, ecological and engineering applications in recent years. These applications require assessment of positional and dimensional accuracy. In this study, positional accuracy refers to the accuracy of the latitudinal and longitudinal coordinates of locations on the mosaicked image relative to the coordinates of the same locations measured by a Global Positioning System (GPS) in a ground survey, and dimensional accuracy refers to the length and height of a ground target. Here, we investigate the effects of the number of Ground Control Points (GCPs) and of the accuracy of the GPS used to measure the GCPs on the positional and dimensional accuracy of a drone 3D model. Results show that using the on-board GPS or a hand-held GPS produces a positional accuracy on the order of 2-9 meters. In comparison, using a differential GPS with high accuracy (30 cm) improves the positional accuracy of the drone model by about 40%. Increasing the number of GCPs can compensate for the uncertainty introduced by GPS equipment with low accuracy. In terms of the dimensional accuracy of the drone model, even with a low-resolution GPS on board the vehicle, the mean absolute errors are only 0.04 m for height and 0.10 m for length, which is well suited for some applications in precision agriculture and land survey studies.
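The positional-accuracy figures above come from comparing coordinates read off the drone model against GPS survey coordinates. A minimal sketch of that comparison, assuming projected (x, y) coordinates in metres; the function names and the choice of RMSE as the summary statistic are ours, not the study's:

```python
import math

def horizontal_errors(model_pts, survey_pts):
    """Euclidean distances (metres) between matched (x, y) positions taken
    from the mosaicked drone model and from the reference GPS survey."""
    return [math.hypot(mx - sx, my - sy)
            for (mx, my), (sx, sy) in zip(model_pts, survey_pts)]

def rmse(errors):
    """Root-mean-square positional error over all check points."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

For example, two check points displaced by 3 m and 4 m give an RMSE of about 3.54 m, squarely in the 2-9 m range reported for consumer-grade GPS.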
Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline
NASA Technical Reports Server (NTRS)
Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore
2017-01-01
We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing the noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1° to 30.3°. This is a reasonable accuracy in the two-detector case, and is comparable to the accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N_net), reaching an 85% and a 95% match between the reconstructed and actual waveforms at S/N_net ≈ 20 and S/N_net ≈ 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is limited only by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain, as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.
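The waveform "match" quoted above can be illustrated with a toy version. The pipeline's actual match is a noise-weighted inner product, typically computed in the frequency domain; the sketch below is a simplified time-domain analogue, for intuition only:

```python
import numpy as np

def match(h1, h2):
    """Normalized inner product between two sampled waveforms:
    1.0 means identical shape (up to overall amplitude), 0.0 means
    orthogonal. Simplified: no noise weighting, no time/phase
    maximization, unlike the pipeline's frequency-domain match."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))
```

A reconstruction that recovers the injected shape exactly, even at a different amplitude, scores 1.0; residual noise in the reconstruction pulls the match below 1.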
Accuracy of vertical radial plume mapping technique in measuring lagoon gas emissions.
Viguria, Maialen; Ro, Kyoung S; Stone, Kenneth C; Johnson, Melvin H
2015-04-01
Recently, the U.S. Environmental Protection Agency (EPA) posted a ground-based optical remote sensing method on its Web site, called Other Test Method (OTM) 10, for measuring fugitive gas emission flux from area sources such as closed landfills. OTM 10 utilizes the vertical radial plume mapping (VRPM) technique to calculate fugitive gas emission mass rates based on measured wind speed profiles and path-integrated gas concentrations (PICs). This study evaluates the accuracy of the VRPM technique in measuring gas emission from animal waste treatment lagoons. A field trial was designed to evaluate the accuracy of the VRPM technique. Controlled releases of methane (CH4) were made from a 45 m × 45 m floating perforated pipe network located on an irrigation pond that resembled typical treatment lagoon environments. The accuracy of the VRPM technique was expressed as the ratio of the calculated emission rate (Q_VRPM) to the actual emission rate (Q). Under the ideal condition of mean wind directions mostly normal to a downwind vertical plane, the average VRPM accuracy was 0.77 ± 0.32. However, when the mean wind direction was mostly not normal to the downwind vertical plane, the emission plume was not adequately captured, resulting in lower accuracies. The accuracy under these nonideal wind conditions could be significantly improved by relaxing the VRPM wind direction criteria and combining the emission rates determined from two adjacent downwind vertical planes surrounding the lagoon. With this modification, the VRPM accuracy improved to 0.97 ± 0.44, and the number of valid data sets also increased from 113 to 186. The need for accurate and feasible measurement techniques for fugitive gas emission from animal waste lagoons is vital for livestock gas inventories and the implementation of mitigation strategies.
This field lagoon gas emission study demonstrated that the EPA's vertical radial plume mapping (VRPM) technique can be used to accurately measure lagoon gas emission with two downwind vertical concentration planes surrounding the lagoon.
Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark
2010-03-01
The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the table entries for the radial dose function g_L(r) and the 2D anisotropy function F(r,θ), for radial distance r and polar angle θ. The objectives of this study were as follows: 1) to evaluate interpolation methods in order to accurately derive g_L(r) and F(r,θ) from the reported data; 2) to determine the minimum number of entries in g_L(r) and F(r,θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: the 60Co model Co0.A86, the 137Cs model CSM-3, the 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps), and 10 cm. Four different angular steps were evaluated for F(r,θ): 1°, 2°, 5° and 10°. Linear-linear and logarithmic-linear interpolation were evaluated for g_L(r). Bilinear interpolation was used to obtain F(r,θ) with a resolution of 0.05 cm and 1°. Results were compared with values obtained from Monte Carlo (MC) calculations for the four sources on the same grid. Linear interpolation of g_L(r) gave differences ≤ 0.5% compared with MC for all four sources. Bilinear interpolation of F(r,θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating g_L(r) for high-energy brachytherapy sources, and was similar to examples commonly found in the published literature. For F(r,θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
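The two interpolation schemes compared for g_L(r) can be sketched directly. The radial mesh below is the one stated in the abstract; the function names and the synthetic table values in the usage note are ours:

```python
import numpy as np

# Radial mesh from the study (cm): 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps), 10
R_TAB = np.array([0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 5, 6, 7, 8, 10], dtype=float)

def gl_linear(r, r_tab, gl_tab):
    """Linear-linear interpolation of the radial dose function g_L(r)."""
    return np.interp(r, r_tab, gl_tab)

def gl_loglinear(r, r_tab, gl_tab):
    """Logarithmic-linear interpolation: linear in log g_L versus r, which
    suits the near-exponential fall-off of g_L for high-energy sources."""
    return np.exp(np.interp(r, r_tab, np.log(gl_tab)))
```

On a table that is exactly exponential in r, the log-linear scheme reproduces intermediate values without error, while the linear scheme incurs a small positive bias between nodes; the study found both adequate at the ≤ 0.5% level on this mesh.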
A trial of reliable estimation of non-double-couple component of microearthquakes
NASA Astrophysics Data System (ADS)
Imanishi, K.; Uchide, T.
2017-12-01
Although most tectonic earthquakes are caused by shear failure, it has been reported that injection-induced seismicity and earthquakes occurring in volcanoes and geothermal areas contain non-double-couple (non-DC) components (e.g., Dreger et al., 2000). Small non-DC components are also beginning to be detected in tectonic earthquakes (e.g., Ross et al., 2015). However, the non-DC component can generally be estimated with sufficient accuracy only for relatively large earthquakes. To gain further understanding of fluid-driven earthquakes and fault zone properties, it is important to estimate the full moment tensor of many microearthquakes with high precision. At the last AGU meeting, we proposed a method that iteratively applies the relative moment tensor inversion (RMTI) (Dahm, 1996) to source clusters, improving each moment tensor as well as their relative accuracy. This new method overcomes the problem of RMTI that errors in the mechanism of reference events lead to biased solutions for other events, while retaining the advantage of RMTI that source mechanisms can be determined without computing Green's functions. The procedure is briefly summarized as follows: (1) sample co-located multiple earthquakes with focal mechanisms, as initial solutions, determined by an ordinary method; (2) apply the RMTI to estimate the source mechanism of each event relative to those of the other events; (3) repeat step 2 for the modified source mechanisms until the reduction of the total residual converges. To confirm whether the method can resolve non-DC components, we conducted numerical tests on synthetic data. Amplitudes were computed assuming non-DC sources, amplified by factors between 0.2 and 4 to represent site effects, with 10% random noise added. As initial solutions in step 1, we gave DC sources with arbitrary strike, dip and rake angles. In a test with eight sources at 12 stations, for example, all solutions were successively improved by iteration. Non-DC components were successfully resolved despite the fact that we gave DC sources as initial solutions. The application of the method to microearthquakes in a geothermal area in Japan will be presented.
Location of acoustic emission sources generated by air flow
Kosel; Grabec; Muzic
2000-03-01
The location of continuous acoustic emission sources is a difficult problem in non-destructive testing. This article describes one-dimensional location of continuous acoustic emission sources using an intelligent locator. The intelligent locator solves the location problem by learning from examples. To verify whether continuous acoustic emission caused by leakage air flow can be located accurately by the intelligent locator, an experiment on a thin aluminum band was performed. Results show that an accurate location can be determined by using a cross-correlation function combined with an appropriate bandpass filter. With this combination, both discrete and continuous acoustic emission sources can be located by using discrete acoustic emission sources for locator learning.
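For intuition, cross-correlation-based one-dimensional location on a band can be sketched as a textbook time-difference-of-arrival estimate. This is not the intelligent locator itself; the sensor placement (one sensor at each end of the band) and all names are assumptions:

```python
import numpy as np

def locate_1d(sig1, sig2, fs, v, length):
    """Estimate a source position on a 1-D band (sensor 1 at x = 0,
    sensor 2 at x = length) from the cross-correlation lag between the
    two band-pass-filtered sensor signals.
    fs: sampling rate (Hz), v: wave speed (m/s), length: band length (m)."""
    sig1 = np.asarray(sig1, dtype=float)
    sig2 = np.asarray(sig2, dtype=float)
    corr = np.correlate(sig1, sig2, mode="full")
    lag = (np.argmax(corr) - (len(sig2) - 1)) / fs   # t1 - t2, seconds
    # t1 - t2 = (x - (length - x)) / v  =>  x = (length + v * lag) / 2
    return (length + v * lag) / 2.0
```

With continuous emission the individual arrivals are not visible, but the cross-correlation peak still encodes the travel-time difference, which is why the bandpass-filtered cross-correlation works for leakage-type sources.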
NASA Astrophysics Data System (ADS)
Kocyigit, Ilker; Liu, Hongyu; Sun, Hongpeng
2013-04-01
In this paper, we consider invisibility cloaking via the transformation optics approach through a ‘blow-up’ construction. An ideal cloak makes use of singular cloaking material. The ‘blow-up-a-small-region’ and ‘truncation-of-singularity’ constructions are introduced to avoid the singular structure, at the cost of giving only near-cloaks. Studies in the literature develop various mechanisms to achieve high-accuracy approximate near-cloaking devices and, from a practical viewpoint, to nearly cloak an arbitrary content. We study the problem from a different viewpoint. It is shown that for those regularized cloaking devices, the corresponding scattered wave fields due to an incident plane wave have regular patterns. The regular patterns are both a curse and a blessing. On the one hand, the regular wave pattern betrays the location of a cloaking device, an intrinsic defect of the ‘blow-up’ construction; this is particularly the case for the construction employing a high-loss layer lining. Indeed, our numerical experiments show robust reconstructions of the location, even when implementing phaseless cross-section data. The construction employing a high-density layer lining shows a certain promising feature. On the other hand, it is shown that one can introduce an internal point source to produce a canceling scattering pattern and achieve a near-cloak of an arbitrary order of accuracy.
NASA Astrophysics Data System (ADS)
Munafo, I.; Malagnini, L.; Tinti, E.; Chiaraluce, L.; Di Stefano, R.; Valoroso, L.
2014-12-01
The Alto Tiberina Fault (ATF) is a 60-km-long east-dipping low-angle normal fault, located in a sector of the Northern Apennines (Italy) undergoing active extension since the Quaternary. The ATF has been imaged by analyzing active-source seismic reflection profiles and the instrumentally recorded persistent background seismicity. The present study is an attempt to separate the contributions of source, site, and crustal attenuation, in order to focus on the mechanics of the seismic sources on the ATF, as well as on the synthetic and antithetic structures within the ATF hanging wall (i.e. the Colfiorito, Gubbio and Umbria Valley faults). In order to compute source spectra, we perform a set of regressions over the seismograms of 2000 small earthquakes (-0.8 < ML < 4) recorded between 2010 and 2014 at 50 permanent seismic stations deployed in the framework of the Alto Tiberina Near Fault Observatory project (TABOO) and equipped with three-component seismometers, three of which are located in shallow boreholes. Because we deal with some very small earthquakes, we maximize the signal-to-noise ratio (SNR) with a technique based on the analysis of peak values of bandpass-filtered time histories, in addition to the same processing performed on Fourier amplitudes. We rely on a tool called Random Vibration Theory (RVT) to switch completely from peak values in the time domain to Fourier spectral amplitudes. The low-frequency spectral plateaus of the source terms are used to compute moment magnitudes (Mw) of all the events, whereas a source spectral ratio technique is used to estimate the corner frequencies (Brune spectral model) of a subset of events chosen through analysis of the noise affecting the spectral ratios. So far, the described approach provides high accuracy for the spectral parameters of localized seismicity, and may be used to gain insights into the underlying mechanics of faulting and earthquake processes.
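The Brune spectral model referred to above has a simple closed form, and the standard Hanks-Kanamori relation converts the low-frequency plateau (seismic moment) into Mw. A sketch follows; the use of exactly this Mw formula in the study is our assumption, though it is the conventional choice:

```python
import math

def brune_spectrum(f, m0, fc):
    """Brune (omega-squared) displacement source spectrum,
    S(f) = M0 / (1 + (f/fc)^2): a flat plateau proportional to the
    seismic moment M0 at low frequency, and an f^-2 fall-off above
    the corner frequency fc."""
    return m0 / (1.0 + (f / fc) ** 2)

def moment_magnitude(m0):
    """Moment magnitude from seismic moment M0 in N·m
    (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

Fitting the plateau of the regression-derived source term gives M0 (hence Mw), while the roll-off of spectral ratios between co-located events isolates the corner frequencies without needing an explicit attenuation model.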
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been considerable research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contains a group of particles that are of the same type, close in energy, and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm.
For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. A passing rate of over 98.5% was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrate the efficacy of our model in accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.
NASA Astrophysics Data System (ADS)
Woeger, Friedrich; Rimmele, Thomas
2009-10-01
We analyze the effect of anisoplanatic atmospheric turbulence on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor (HSWFS). We have numerically simulated an extended-source HSWFS, using a scene of the solar surface that is imaged through anisoplanatic atmospheric turbulence and imaging optics. Solar extended-source HSWFSs often use cross-correlation algorithms in combination with subpixel shift-finding algorithms to estimate the wavefront gradient, two of which were tested for their effect on the measurement accuracy. We find that the measurement error of an extended-source HSWFS is governed mainly by the optical geometry of the HSWFS, the employed subpixel finding algorithm, and phase anisoplanatism. Our results show that effects of scintillation anisoplanatism are negligible when cross-correlation algorithms are used.
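One common subpixel shift-finding algorithm is a three-point parabola fit around the integer cross-correlation peak. The sketch below shows the generic technique, not necessarily the two variants tested in the paper, and assumes the peak lies away from the array ends:

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer cross-correlation peak position by fitting a
    parabola through the peak sample and its two neighbours (Gaussian or
    centroid fits are common alternatives). Assumes the peak is interior."""
    corr = np.asarray(corr, dtype=float)
    i = int(np.argmax(corr))
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    # Vertex of the parabola through (i-1, y0), (i, y1), (i+1, y2):
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
```

Applied to the 2-D correlation of a subaperture image against a reference, the same fit along each axis yields the wavefront-gradient estimate whose accuracy the study characterizes.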
Evaluation of the locations of Kentucky's traffic crash data.
DOT National Transportation Integrated Search
2010-11-01
An evaluation of a random sample of crashes from 2009 was performed to assess the current accuracy of the crash data's location information.The location of the crash was compared to the presumed location using several report data elements such as nea...
The Effect of Contraceptive Knowledge Source upon Knowledge Accuracy and Contraceptive Behavior.
ERIC Educational Resources Information Center
Pope, A. J.; And Others
1985-01-01
The purpose of this investigation was to determine the relationship of the source of contraceptive knowledge to contraceptive knowledge accuracy and contraceptive behavior of college freshmen. Results and implications for health educators are discussed. (MT)
STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F
2015-12-01
Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.
Numerical simulation of seismic wave propagation from land-excited large volume air-gun source
NASA Astrophysics Data System (ADS)
Cao, W.; Zhang, W.
2017-12-01
The land-excited large-volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low-frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs or man-made pools. Numerical simulation of the seismic wave propagation from the air-gun source helps in understanding the energy partitioning and the characteristics of the waveform records at stations. However, the effective energy recorded at a distant station comes from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate the seismic wave propagation from the land-excited large-volume air-gun source by the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling, and propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated by a bubble model. We use a wave injection method combining the bubble wavelet with the elastic wave equation to achieve the source coupling. Then, the solid-fluid boundary condition is implemented along the water bottom. The last part is the seismic wave propagation in the solid medium, which can be readily implemented by the finite difference method. Our method yields accurate waveforms for the land-excited large-volume air-gun source. Based on this forward modeling technology, we analyze the energy of the excited P wave and of the converted S wave for different water-body shapes. We study two land-excited large-volume air-gun fields, one at Binchuan in Yunnan, and the other at Hutubi in Xinjiang. The station in Binchuan, Yunnan is located in a large irregular reservoir, and the waveform records have a clear S wave. In contrast, the station in Hutubi, Xinjiang is located in a small man-made pool, and the waveform records have a very weak S wave. A better understanding of the characteristics of the land-excited large-volume air-gun source can help in making better use of it.
Rifai Chai; Naik, Ganesh R; Tran, Yvonne; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T
2015-08-01
An electroencephalography (EEG)-based countermeasure device could be used for fatigue detection during driving. This paper explores the classification of fatigue and alert states using power spectral density (PSD) as a feature extractor and a fuzzy swarm-based artificial neural network (ANN) as a classifier. Independent component analysis by entropy rate bound minimization (ICA-ERBM) is investigated as a novel source separation technique for fatigue classification using EEG analysis. A comparison of the classification accuracy with and without the source separator is presented. Classification performance based on 43 participants without the source separator resulted in an overall sensitivity of 71.67%, a specificity of 75.63% and an accuracy of 73.65%. However, these results improved after the inclusion of the source separator module, resulting in an overall sensitivity of 78.16%, a specificity of 79.60% and an accuracy of 78.88% (p < 0.05).
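The sensitivity, specificity and accuracy figures reported above follow from standard confusion-matrix arithmetic. As a sketch (the counts in the usage example are illustrative, not the study's data):

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and overall accuracy from confusion-matrix
    counts (here tp = fatigue trials flagged as fatigue, tn = alert trials
    flagged as alert)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

For example, 8 of 10 fatigue trials and 9 of 10 alert trials classified correctly give a sensitivity of 0.80, a specificity of 0.90 and an accuracy of 0.85.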
Mokhtari, Negar; Shirazi, Alireza-Sarraf
2017-01-01
Background: Techniques with adequate accuracy of working length determination, along with shorter treatment duration, seem to be essential in pulpectomy procedures in pediatric dentistry. The aim of the present study was to evaluate the accuracy of root canal length measurement with the Root ZX II apex locator and a rotary system in pulpectomy of primary teeth. Material and Methods: In this randomized controlled clinical trial, complete pulpectomy was performed on 80 mandibular primary molars in 80 children aged 4-6 years. The study population was randomly divided into case and control groups. In the control group conventional pulpectomy was performed, and in the case group working length was determined by the Root ZX II electronic apex locator and canals were instrumented with Mtwo rotary files. Statistical evaluation was performed using Mann-Whitney and Chi-Square tests (P<0.05). Results: There were no significant differences between the Root ZX II electronic apex locator and the conventional method in the accuracy of root canal length determination. However, significantly less time was needed for instrumenting with rotary files (P=0.000). Conclusions: Considering the comparable accuracy of root canal length determination and the considerably shorter instrumentation time with the Root ZX II apex locator and rotary system, they may be suggested for pulpectomy in primary molar teeth. Key words: Rotary technique, conventional technique, pulpectomy, primary teeth. PMID:29302280
Mokhtari, Negar; Shirazi, Alireza-Sarraf; Ebrahimi, Masoumeh
2017-11-01
Techniques with adequate accuracy of working length determination, along with shorter treatment duration, seem to be essential in pulpectomy procedures in pediatric dentistry. The aim of the present study was to evaluate the accuracy of root canal length measurement with the Root ZX II apex locator and a rotary system in pulpectomy of primary teeth. In this randomized controlled clinical trial, complete pulpectomy was performed on 80 mandibular primary molars in 80 children aged 4-6 years. The study population was randomly divided into case and control groups. In the control group conventional pulpectomy was performed, and in the case group working length was determined by the Root ZX II electronic apex locator and canals were instrumented with Mtwo rotary files. Statistical evaluation was performed using Mann-Whitney and Chi-Square tests (P<0.05). There were no significant differences between the Root ZX II electronic apex locator and the conventional method in the accuracy of root canal length determination. However, significantly less time was needed for instrumenting with rotary files (P=0.000). Considering the comparable accuracy of root canal length determination and the considerably shorter instrumentation time with the Root ZX II apex locator and rotary system, they may be suggested for pulpectomy in primary molar teeth. Key words: Rotary technique, conventional technique, pulpectomy, primary teeth.
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron
2017-05-01
This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment
Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih
2015-01-01
In cluster detection of disease, the use of local cluster detection tests (CDTs) is current. These methods aim both at locating likely clusters and testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDTs behaviour and help between-studies comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy but only for one detected cluster. In a simulation study, performance is measured for many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDTs performance for both usual power and location accuracy. We evidence the properties of these two indicators and the superiority of the cumulated TC to assess performance. We tested these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
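The Tanimoto coefficient underlying both proposed indicators is simply set overlap divided by set union. A minimal sketch of the coefficient and of an averaged-TC indicator over simulation runs follows (function names are mine; the cumulated TC, which the authors find superior, aggregates the per-run coefficients differently):

```python
def tanimoto(detected, true):
    """Tanimoto (Jaccard) similarity between two sets of spatial units,
    e.g. the zones in a detected cluster vs. the true simulated cluster."""
    detected, true = set(detected), set(true)
    if not detected and not true:
        return 1.0
    return len(detected & true) / len(detected | true)

def averaged_tc(detected_per_run, true):
    """Mean TC over many simulation runs (one detected cluster per run),
    giving a single indicator that mixes location accuracy and power."""
    return sum(tanimoto(d, true) for d in detected_per_run) / len(detected_per_run)
```

A run that detects nothing contributes a TC of 0, so the average penalises both missed detections and poorly located clusters at once.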
Tremor Hypocenters Form a Narrow Zone at the Plate Interface in Two Areas of SW Japan
NASA Astrophysics Data System (ADS)
Armbruster, J. G.
2015-12-01
The tremor detectors developed for accurately locating tectonic tremor in Cascadia [Armbruster et al., JGR 2014] have been applied to data from the HINET seismic network in Japan. In the overview by Obara [Science 2002] there are three strong sources of tectonic tremor in southwest Japan: Shikoku, Kii Pen. and Tokai. The daily epicentral distributions of tremor on the HINET web site allow the identification of days when tremor in each source is active. The worst results were obtained in Shikoku, in spite of the high level of tremor activity observed there by others. This method requires a clear direct arrival of the S and P waves at the stations for coherence to be seen, so scattering and shear wave splitting are possible reasons for poor results there. Relatively wide station spacing, 19-30 km, is another possible reason. The best results were obtained in Tokai with stations STR, HRY and TYE spacing 18-19 km, and Kii Pen. with stations KRT, HYS and KAW spacing 15-22 km. In both of those areas the three station detectors see strong episodes of tremor. If detections with three stations are located by constraining them to the plate interface, a pattern of persistent sources is seen, with some intense sources. This is similar to what was seen in Cascadia. Detections with four stations give S and P arrival times of high accuracy. In Tokai the hypocenters form a narrow, 2-3 km thick, zone dipping to the north, consistent with the plate interface there. In Kii Pen. the hypocenters dip to the northwest in a thin, 2-3 km thick, zone but approximately 5 km shallower than a plate interface model for this area [Yoshioka and Murakami, GJI 2007]. The overlap of tremor sources in the 12 years analyzed here suggests relative hypocentral location errors as small as 2-3 km. We conclude that the methods developed in Cascadia will work in Japan but the typical spacing of HINET stations, ~20 km, is greater than the optimum distance found in analysis of data from Cascadia, 8 to 15 km.
Algorithms for System Identification and Source Location.
NASA Astrophysics Data System (ADS)
Nehorai, Arye
This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and input-output impulse response by matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOAs) of waves between the receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOAs.
Results from simulations illustrate the performance of the algorithms. The last chapter analyzes the response of exact least squares predictors for enhancement of sinusoids with additive colored noise. Using the matrix inversion lemma and the Christoffel-Darboux formula, the frequency response and amplitude gain of the sinusoids are expressed as functions of the signal and noise characteristics. The results generalize the available white noise case.
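The thesis estimates TDOAs from receiver cross-spectra and source poles; as a point of reference for the quantity being estimated, a textbook cross-correlation TDOA estimator (not the thesis's method) can be sketched as:

```python
import numpy as np

def tdoa_xcorr(x, y):
    """Estimate the integer-sample delay of receiver signal y relative to
    receiver signal x by locating the peak of their full cross-correlation."""
    c = np.correlate(y, x, mode="full")
    # index (len(x) - 1) corresponds to zero lag in numpy's 'full' output
    return int(np.argmax(c)) - (len(x) - 1)
```

This baseline only resolves integer-sample delays, which is exactly the limitation the thesis's non-integer TDOA model addresses.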
NASA Astrophysics Data System (ADS)
Saeedimoghaddam, M.; Kim, C.
2017-10-01
Understanding individual travel behavior is vital in travel demand management as well as in urban and transportation planning. New data sources, including mobile phone data and location-based social media (LBSM) data, allow us to understand mobility behavior at an unprecedented level of detail. Recent studies of trip purpose prediction tend to use machine learning (ML) methods, since they generally produce high levels of predictive accuracy. Few studies have used LBSM as a large data source to explore its potential for predicting individual travel destinations with ML techniques. In the presented research, we created a spatio-temporal probabilistic model based on an ensemble ML framework named "Random Forests", utilizing travels extracted from geotagged Tweets in 419 census tracts of the Greater Cincinnati area, to predict the tract ID of an individual's travel destination at any time from information about its origin. We evaluated the model accuracy using the travels extracted from the Tweets themselves as well as travels from a household travel survey. Tweet- and survey-based travels that start from the same tract in the south-western part of the study area are more likely to share the same destination than travels starting elsewhere. Both Tweet- and survey-based travels were also drawn to the attraction points in downtown Cincinnati and to the tracts in the north-eastern part of the area. Finally, both evaluations show that the model predictions are acceptable, although the model cannot predict destinations from other data sources as precisely as from the Tweet-based data.
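As an illustration of the origin-to-destination prediction task (not the authors' Random Forests implementation), a minimal frequency-based spatio-temporal model with hypothetical names might look like:

```python
from collections import Counter, defaultdict

class OriginDestinationModel:
    """Predict the most likely destination tract from (origin tract, hour bin),
    estimated from observed trips. A simple frequency table stands in here for
    the Random Forests ensemble used in the paper."""

    def __init__(self):
        self.table = defaultdict(Counter)

    def fit(self, trips):
        # trips: iterable of (origin_tract, hour, dest_tract) tuples
        for origin, hour, dest in trips:
            self.table[(origin, hour)][dest] += 1
        return self

    def predict(self, origin, hour):
        counts = self.table.get((origin, hour))
        if not counts:
            return None  # unseen origin/time combination
        return counts.most_common(1)[0][0]
```

A tree ensemble generalises across nearby tracts and hours, which this lookup table cannot; the sketch only shows the shape of the inputs and outputs.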
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
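The Cimmino feasibility algorithm used here averages projections onto the hyperplanes of the linear system linking source strengths to the prescribed fluence. A minimal dense-matrix sketch, with a nonnegativity clip on the strengths (my simplification; the clinical implementation is more elaborate), is:

```python
import numpy as np

def cimmino(A, b, iters=200, relax=1.0):
    """Cimmino iteration for A @ x ~= b: average the projections of the
    current iterate onto each row's hyperplane, then clip to x >= 0
    (source strengths cannot be negative)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    m, _ = A.shape
    row_norm2 = (A * A).sum(axis=1)   # squared norm of each row
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        resid = b - A @ x
        x = x + relax * (A.T @ (resid / row_norm2)) / m
        x = np.clip(x, 0.0, None)
    return x
```

Here each row of `A` would be one dose constraint (kernel values at a detector location) and `b` the prescribed fluence there; the averaging over rows is what makes Cimmino robust for inconsistent constraint sets.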
Inverse modeling of April 2013 radioxenon detections
NASA Astrophysics Data System (ADS)
Hofman, Radek; Seibert, Petra; Philipp, Anne
2014-05-01
Significant concentrations of radioactive xenon isotopes (radioxenon) were detected by the International Monitoring System (IMS) for verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) in April 2013 in Japan. Particularly, three detections of Xe-133 made between 2013-04-07 18:00 UTC and 2013-04-09 06:00 UTC at the station JPX38 are quite notable with respect to the measurement history of the station. Our goal is to analyze the data and perform inverse modeling under different assumptions. This work is useful with respect to nuclear test monitoring as well as for the analysis of and response to nuclear emergencies. Two main scenarios will be pursued: (i) Source location is assumed to be known (DPRK test site). (ii) Source location is considered unknown. We attempt to estimate the source strength and the source strength along with its plausible location compatible with the data in scenario (i) and (ii), respectively. We are considering also the possibility of a vertically distributed source. Calculations of source-receptor sensitivity (SRS) fields and the subsequent inversion are aimed at going beyond routine calculations performed by the CTBTO. For SRS calculations, we employ the Lagrangian particle dispersion model FLEXPART with high resolution ECMWF meteorological data (grid cell sizes of 0.5, 0.25 and ca. 0.125 deg). This is important in situations where receptors or sources are located in complex terrain which is the case of the likely source of detections-the DPRK test site. SRS will be calculated with convection enabled in FLEXPART which will also increase model accuracy. In the variational inversion procedure attention will be paid not only to all significant detections and their uncertainties but also to non-detections which can have a large impact on inversion quality. 
We try to develop and implement an objective algorithm for inclusion of relevant data where samples from temporal and spatial vicinity of significant detections are added in an iterative manner and the inversion is recalculated in each iteration. This procedure should gradually narrow down the set of hypotheses on the source term, where the source term is here understood as an emission in both spatial and temporal domains. Especially in scenario (ii) we expect a strong impact of non-detections for the reduction of possible solutions. For these and also other purposes like statistical quantification of typical background values, measurements from all IMS noble gas stations north of 30 deg S for a period from January to June 2013 were extracted from vDEC platform. We would like to acknowledge the Preparatory Commission for the CTBTO for kindly providing limited access to the IMS data. This work contains only opinions of the authors, which can not in any case establish legal engagement of the Provisional Technical Secretariat of the CTBTO. This work is partially financed through the project "PREPARE: Innovative integrated tools and platforms for radiological emergency preparedness and post-accident response in Europe" (FP7, Grant 323287).
Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations
Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon
2016-01-01
Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364
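The core of the reported optimisation is a least-squares fit that minimises the difference between true and estimated CMC-joint angles. A toy version of that fitting step, assuming a linear sensor-to-angle model and using synthetic data (all names and values are mine), could be:

```python
import numpy as np

# Synthetic example: two bending-sensor outputs per posture (columns of S),
# true flexion/abduction angles per posture (columns of Y).
rng = np.random.default_rng(1)
true_map = np.array([[1.5, -0.3],     # assumed linear sensor-to-angle model
                     [0.2, 2.0]])
S = rng.uniform(-1.0, 1.0, size=(50, 2))   # 50 postures, 2 sensor readings
Y = S @ true_map.T                         # noise-free joint angles

# Least-squares estimate of the sensor-to-angle map, mirroring the paper's
# criterion of minimising the true-vs-estimated joint-angle difference
est_map, *_ = np.linalg.lstsq(S, Y, rcond=None)
angle_rmse = np.sqrt(np.mean((S @ est_map - Y) ** 2))
```

In the paper this fit is repeated for candidate sensor positions and orientations; the positions with the smallest residual angle error are the optimal locations.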
The Effect of Timbre and Vibrato on Vocal Pitch Matching Accuracy
NASA Astrophysics Data System (ADS)
Duvvuru, Sirisha
Research has shown that singers are better able to match pitch when the target stimulus has a timbre close to their own voice. This study seeks to answer the following questions: (1) Do classically trained female singers more accurately match pitch when the target stimulus is more similar to their own timbre? (2) Does the ability to match pitch vary with increasing pitch? (3) Does the ability to match pitch differ depending on whether the target stimulus is produced with or without vibrato? (4) Are mezzo sopranos less accurate than sopranos?
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes that are very high elevation for a tracking station which tracks very close to the time of altimeter track, such as a high elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
NASA Astrophysics Data System (ADS)
Botha, J. D. M.; Shahroki, A.; Rice, H.
2017-12-01
This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data and time dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised and the location and mechanism of the primary sources is determined, inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution as well as showing that, for inflow-turbulence noise sources, blade generated turbulence dominates the atmospheric inflow turbulence.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. Together, these two algorithms constitute the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
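W2D-MUSIC builds on the classical narrowband MUSIC pseudospectrum. A minimal far-field uniform-linear-array sketch of that underlying step (illustrative only; the paper's near-field, broadband, weighted variant is considerably more involved) is:

```python
import numpy as np

def steering(theta_deg, m_sensors, d_over_lambda=0.5):
    """Far-field steering vector of a uniform linear array."""
    m = np.arange(m_sensors)
    phase = -2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def music_spectrum(R, angles, n_sources=1):
    """MUSIC pseudospectrum 1 / ||E_n^H a(theta)||^2, which peaks where the
    steering vector is orthogonal to the noise subspace E_n."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]        # noise-subspace eigenvectors
    p = []
    for th in angles:
        a = steering(th, m)
        p.append(1.0 / (np.linalg.norm(En.conj().T @ a) ** 2 + 1e-12))
    return np.array(p)
```

With an ideal single-source covariance `R = np.outer(a0, a0.conj())` for a source at 20 degrees, the pseudospectrum peaks at 20 degrees; gain/phase perturbations of the kind the paper models would shift and blur this peak, which is what the model-errors estimation step corrects.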
Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank
2017-01-01
Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.
Impact source localisation in aerospace composite structures
NASA Astrophysics Data System (ADS)
De Simone, Mario Emanuele; Ciampa, Francesco; Boccardi, Salvatore; Meo, Michele
2017-12-01
The most commonly encountered type of damage in aircraft composite structures is caused by low-velocity impacts due to foreign objects such as hailstones, tool drops and bird strikes. Often these events can cause severe internal material damage that is difficult to detect and may lead to a significant reduction of the structure's strength and fatigue life. For this reason there is an urgent need to develop structural health monitoring systems able to localise low-velocity impacts in both metallic and composite components as they occur. This article proposes a novel monitoring system for impact localisation in aluminium and composite structures, which is able to determine the impact location in real time without a priori knowledge of the mechanical properties of the material. This method relies on an optimal configuration of receiving sensors, which allows linearisation of the well-known nonlinear systems of equations for the estimation of the impact location. The proposed algorithm is based on time-of-arrival identification of the elastic waves generated by the impact source using the Akaike Information Criterion. The proposed approach was demonstrated successfully on both isotropic and orthotropic materials using a network of closely spaced surface-bonded piezoelectric transducers. The results obtained show the validity of the proposed algorithm, since the impact sources were detected with a high level of accuracy. The proposed impact detection system overcomes current limitations of other methods and can be retrofitted easily on existing aerospace structures, allowing timely detection of an impact event.
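The Akaike Information Criterion onset picker referred to here scores every candidate split of the trace by the log-variance of the two resulting segments; one common form (a sketch under that assumption, not necessarily the authors' exact implementation) is:

```python
import numpy as np

def aic_pick(x):
    """Pick a wave-arrival onset as the sample k minimising
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
    x = np.asarray(x, float)
    n = len(x)
    ks = np.arange(2, n - 2)             # avoid degenerate segments
    aic = np.array([
        k * np.log(x[:k].var() + 1e-12)
        + (n - k - 1) * np.log(x[k:].var() + 1e-12)
        for k in ks
    ])
    return int(ks[np.argmin(aic)])
```

The minimum of the criterion falls where the split best separates low-variance pre-arrival noise from the higher-variance impact-generated wave, giving the time of arrival used in the localisation equations.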
Cross-beam coherence of infrasonic signals at local and regional ranges.
Alberts, W C Kirkpatrick; Tenney, Stephen M
2017-11-01
Signals collected by infrasound arrays require continuous analysis by skilled personnel or by automatic algorithms in order to extract useable information. Typical pieces of information gained by analysis of infrasonic signals collected by multiple sensor arrays are arrival time, line of bearing, amplitude, and duration. These can all be used, often with significant accuracy, to locate sources. A very important part of this chain is associating collected signals across multiple arrays. Here, a pairwise, cross-beam coherence method of signal association is described that allows rapid signal association for high signal-to-noise ratio events captured by multiple infrasound arrays at ranges exceeding 150 km. Methods, test cases, and results are described.
Experimental evaluation of the performance of pulsed two-color laser-ranging systems
NASA Technical Reports Server (NTRS)
Im, Kwaifong E.; Gardner, Chester S.; Abshire, James B.; Mcgarry, Jan F.
1987-01-01
Two-color laser-ranging systems can be used to estimate the atmospheric delay by measuring the difference in propagation times between two optical pulses transmitted at different wavelengths. This paper describes horizontal-path ranging experiments that were conducted using flat diffuse targets and cube-corner reflector arrays. Measurements of the timing accuracy of the cross-correlation estimator, atmospheric delay, received pulse shapes, and signal power spectra are presented. The results are in general agreement with theory and indicate that target speckle can be the dominant noise source when the target is small and is located far from the ranging system or when the target consists of a small number of cube-corner reflectors.
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George
2018-06-01
Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. 
The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
Into the deep: Evaluation of SourceTracker for assessment of faecal contamination of coastal waters.
Henry, Rebekah; Schang, Christelle; Coutts, Scott; Kolotelo, Peter; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O'Brien, Peter; Deletic, Ana; McCarthy, David
2016-04-15
Faecal contamination of recreational waters is an increasing global health concern. Tracing the source of the contaminant is a vital step towards mitigation and disease prevention. Total 16S rRNA amplicon data for a specific environment (faeces, water, soil) and computational tools such as the Markov chain Monte Carlo-based SourceTracker can be applied to microbial source tracking (MST) and attribution studies. The current study applied artificial and in-laboratory derived bacterial communities to define the potential and limitations associated with the use of SourceTracker, prior to its application for faecal source tracking at three recreational beaches near Port Phillip Bay (Victoria, Australia). The results demonstrated that, at a minimum, multiple model runs of the SourceTracker modelling tool (i.e. technical replicates) were required to identify potential false positive predictions. The calculation of relative standard deviations (RSDs) for each attributed source improved overall predictive confidence in the results. In general, default parameter settings provided high sensitivity, specificity, accuracy and precision. Application of SourceTracker to recreational beach samples identified treated effluent as a major source of human-derived faecal contamination, present in 69% of samples. Site-specific sources, such as raw sewage, stormwater and bacterial populations associated with the Yarra River estuary, were also identified. Rainfall and associated sand resuspension at each location correlated with observed human faecal indicators. The results of the optimised SourceTracker analysis suggest that local sources of contamination have the greatest effect on recreational coastal water quality. Copyright © 2016 Elsevier Ltd. All rights reserved.
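The replicate-and-RSD idea can be illustrated with a toy calculation; the source proportions below are fabricated for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical source proportions from five technical replicates of a
# SourceTracker run on one water sample (rows: replicates, cols: sources).
reps = np.array([
    [0.68, 0.20, 0.12],   # treated effluent, stormwater, unknown
    [0.71, 0.18, 0.11],
    [0.66, 0.22, 0.12],
    [0.70, 0.19, 0.11],
    [0.69, 0.21, 0.10],
])

mean = reps.mean(axis=0)
# Relative standard deviation (%) per attributed source across replicates.
rsd = 100.0 * reps.std(axis=0, ddof=1) / mean

# A high RSD flags an unstable (possibly false positive) attribution.
for name, m, r in zip(["effluent", "stormwater", "unknown"], mean, rsd):
    print(f"{name}: mean {m:.2f}, RSD {r:.1f}%")
```

Low RSD across replicates is what builds confidence in an attribution; a source that appears in only some replicates, or with wildly varying proportions, would show a high RSD and be treated with suspicion.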
NASA Astrophysics Data System (ADS)
Khuluse-Makhanya, Sibusisiwe; Stein, Alfred; Breytenbach, André; Gxumisa, Athi; Dudeni-Tlhone, Nontembeko; Debba, Pravesh
2017-10-01
In urban areas the deterioration of air quality as a result of fugitive dust receives less attention than the more prominent traffic and industrial emissions. We assessed whether fugitive dust emission sources in the neighbourhood of an air quality monitor are predictors of ambient PM10 concentrations on days characterized by strong local winds. An ensemble maximum likelihood method is developed for land cover mapping in the vicinity of an air quality station using SPOT 6 multi-spectral images. The ensemble maximum likelihood classifier is developed through multiple training iterations for improved accuracy of the bare soil class. Five primary land cover classes are considered, namely built-up areas, vegetation, bare soil, water and 'mixed bare soil', which denotes areas where soil is mixed with either vegetation or synthetic materials. Preliminary validation of the ensemble classifier for the bare soil class results in an accuracy range of 65-98%. Final validation of all classes results in an overall accuracy of 78%. Next, cluster analysis and a varying-intercepts regression model are used to assess the statistical association between land cover, a fugitive dust emissions proxy and observed PM10. We found that land cover patterns in the neighbourhood of an air quality station are significant predictors of observed average PM10 concentrations on days when wind speeds are conducive to dust emissions. This study concludes that in the absence of an emissions inventory for ambient particulate matter, PM10 emitted from dust reservoirs can be statistically accounted for by land cover characteristics. This supports the use of land cover data for improved prediction of PM10 at locations without air quality monitoring stations.
Performance Assessment and Geometric Calibration of RESOURCESAT-2
NASA Astrophysics Data System (ADS)
Radhadevi, P. V.; Solanki, S. S.; Akilan, A.; Jyothi, M. V.; Nagasubramanian, V.
2016-06-01
Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. The satellite provides multi-resolution and multi-spectral capabilities on a single platform. Continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors, with widely varying view angles and resolutions, was one of the challenges of RS-2 data processing. The on-orbit geometric performance of the RS-2 sensors was extensively assessed and calibrated during initial-phase operations. Since then, as an ongoing activity, various geometric performance data are generated periodically using sites with dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain performance. This paper presents the geometric accuracy assessment, calibration and validation carried out for about 500 RS-2 datasets. The objectives of this study are to ensure the best absolute and relative location accuracy of the different cameras, location performance with payload steering, and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System-level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
Validation of luminescent source reconstruction using spectrally resolved bioluminescence images
NASA Astrophysics Data System (ADS)
Virostko, John M.; Powers, Alvin C.; Jansen, E. D.
2008-02-01
This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.
Iida, M.; Miyatake, T.; Shimazaki, K.
1990-01-01
We develop general rules for strong-motion array layout by applying a prediction analysis to a source inversion scheme. A systematic analysis is performed to obtain a relationship between fault-array parameters and the accuracy of a source inversion. Our study of the effects of various physical waves indicates that surface waves at distant stations contribute significantly to the inversion accuracy for an inclined fault plane, whereas only far-field body waves, at both small and large distances, contribute to the inversion accuracy for a vertical fault, which produces more phase interference. These observations imply the adequacy of the half-space approximation used throughout the present study and suggest rules for actual array designs. -from Authors
Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph
2013-09-01
The importance of perceptual-cognitive expertise in sport has been repeatedly demonstrated. In this study we examined the role of different sources of visual information (i.e., foveal versus peripheral) in anticipating volleyball attack positions. Expert (n = 11), advanced (n = 13) and novice (n = 16) players completed an anticipation task that involved predicting the location of volleyball attacks. Video clips of volleyball attacks (n = 72) were spatially and temporally occluded to provide varying amounts of information to the participant. In addition, participants viewed the attacks under three visual conditions: full vision, foveal vision only, and peripheral vision only. Analysis of variance revealed significant between-group differences in prediction accuracy, with higher skilled players performing better than lower skilled players. Additionally, we found significant differences between temporal and spatial occlusion conditions. Each of these factors interacted with expertise separately, but not in combination. Importantly, for experts the sum of both fields of vision was superior to either source in isolation. Our results suggest different sources of visual information work collectively to facilitate expert anticipation in time-constrained sports and reinforce the complexity of expert perception.
Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon
2013-05-07
The photovoltaic (PV) system is considered an unlimited source of clean energy, whose amount of electricity generation changes according to the monthly average daily solar radiation (MADSR). The MADSR distribution in South Korea shows very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, a hybrid methodology combining CBR with artificial neural networks, multiple regression analysis, and a genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for owners or construction managers in charge of deciding whether to introduce a PV system and where to install it. It would also help contractors in a competitive bidding process to estimate the electricity generation of the PV system accurately in advance and to conduct an economic and environmental feasibility study from the life cycle perspective.
Precise GNSS Positioning Using Smart Devices
Caldera, Stefano; Pertusini, Lisa
2017-01-01
The recent access to GNSS (Global Navigation Satellite System) phase observations on smart devices, enabled by Google through its Android operating system, opens the possibility to apply precise positioning techniques using off-the-shelf, mass-market devices. The target of this work is to evaluate whether this is feasible, and what positioning accuracy can be achieved by relative positioning of the smart device with respect to a base station. Positioning of a Google/HTC Nexus 9 tablet was performed by means of batch least-squares adjustment of L1 phase double-differenced observations, using the open source goGPS software, over baselines ranging from approximately 10 m to 8 km, with respect to both physical (geodetic or low-cost) and virtual base stations. The same positioning procedure was applied also to a co-located u-blox low-cost receiver, to compare the performance between the receiver and antenna embedded in the Nexus 9 and a standard low-cost single-frequency receiver with an external patch antenna. The results demonstrate that with a smart device providing raw GNSS phase observations, like the Nexus 9, it is possible to reach decimeter-level accuracy through rapid-static surveys, without phase ambiguity resolution. It is expected that sub-centimeter accuracy could be achieved, as demonstrated for the u-blox case, if integer phase ambiguities were correctly resolved. PMID:29064417
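The double-differencing at the core of this relative-positioning approach can be sketched as below. The phase values are fabricated; real processing (as in goGPS) would go on to map the double differences to baseline components through the satellite geometry and resolve the integer ambiguities.

```python
import numpy as np

# Hypothetical L1 carrier-phase observations (cycles) at one epoch for
# four satellites, as seen by the base station and the smart device.
phase_base  = np.array([112345.21, 98211.64, 105004.90, 121877.33])
phase_rover = np.array([112349.87, 98217.02, 105009.41, 121882.75])

# Single differences (rover minus base) cancel satellite clock errors.
sd = phase_rover - phase_base

# Double differences against a reference satellite (index 0) additionally
# cancel receiver clock errors, leaving geometry plus integer ambiguities.
ref = 0
dd = np.delete(sd - sd[ref], ref)
print(dd)
```

With n satellites this yields n - 1 double differences per epoch; batch least-squares adjustment then stacks these over many epochs to estimate the baseline vector.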
Standardized assessment of infrared thermographic fever screening system performance
NASA Astrophysics Data System (ADS)
Ghassemi, Pejhman; Pfefer, Joshua; Casamento, Jon; Wang, Quanzeng
2017-03-01
Thermal modalities represent the only currently viable mass fever screening approach for outbreaks of infectious disease pandemics such as Ebola and SARS. Non-contact infrared thermometers (NCITs) and infrared thermographs (IRTs) have previously been used for mass fever screening in transportation hubs such as airports to reduce the spread of disease. While NCITs remain a more popular choice for fever screening in the field and at fixed locations, there has been increasing evidence in the literature that IRTs can provide greater accuracy in estimating core body temperature if appropriate measurement practices are applied - including the use of technically suitable thermographs. Therefore, the purpose of this study was to develop a battery of evaluation test methods for standardized, objective and quantitative assessment of thermograph performance characteristics critical to assessing suitability for clinical use. These factors include stability, drift, uniformity, minimum resolvable temperature difference, and accuracy. Two commercial IRT models were characterized. An external temperature reference source with high temperature accuracy was used as part of the screening thermograph. Results showed that both IRTs are relatively accurate and stable (<1% reading error, with stability of ±0.05 °C). Overall, the results of this study may facilitate development of standardized consensus test methods to enable consistent and accurate use of IRTs for fever screening.
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be checked rigorously. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts on the resultant accuracy of reconstruction of various parameters, such as the number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of the source surface, are examined.
Mundi, Manpreet S; Edakkanambeth Varayil, Jithinraj; McMahon, Megan T; Okano, Akiko; Vallumsetla, Nishanth; Bonnes, Sara L; Andrews, James C; Hurt, Ryan T
2016-04-01
Parenteral nutrition (PN) is a life-saving therapy for patients with intestinal failure. Safe delivery of the hyperosmotic solution requires a central venous catheter (CVC) with its tip in the lower superior vena cava (SVC) or at the SVC-right atrium (RA) junction. To reduce cost and delay in use of the CVC, new techniques such as intravascular electrocardiogram (ECG) are being used for tip confirmation in place of chest x-ray (CXR). The present study assessed the accuracy of ECG confirmation in home PN (HPN). Records for all patients consulted for HPN from December 17, 2014, to June 16, 2015, were reviewed for patient demographics, diagnosis leading to HPN initiation, and ECG and CXR confirmation. CXRs were subsequently reviewed by a radiologist to reassess the location of the CVC tip and identify those that should be adjusted. Seventy-three patients were eligible; after assessment for research authorization and postplacement CXR, 17 patients (30% male) with a mean age of 54 ± 14 years were reviewed. In all patients, the postplacement intravascular ECG reading stated the tip was in the SVC. However, based on CXR, the location of the catheter tip was satisfactory (low SVC or SVC-RA junction) in only 10 of 17 patients (59%). Due to the high osmolality of PN, CVC tip location is of paramount importance. After radiology review of CXR, we noted that 7 of 17 (41%) peripherally inserted central catheter lines were in an unsatisfactory position despite ECG confirmation. With the data currently available, intravenous ECG confirmation should not be used as the sole source of tip confirmation in patients receiving HPN. © 2016 American Society for Parenteral and Enteral Nutrition.
NASA Astrophysics Data System (ADS)
Ortiz-León, Gisela N.; Loinard, Laurent; Kounkel, Marina A.; Dzib, Sergio A.; Mioduszewski, Amy J.; Rodríguez, Luis F.; Torres, Rosa M.; González-Lópezlira, Rosa A.; Pech, Gerardo; Rivera, Juana L.; Hartmann, Lee; Boden, Andrew F.; Evans, Neal J., II; Briceño, Cesar; Tobin, John J.; Galli, Phillip A. B.; Gudehus, Donald
2017-01-01
We present the first results of the Gould’s Belt Distances Survey (GOBELINS), a project aimed at measuring the proper motion and trigonometric parallax of a large sample of young stars in nearby regions using multi-epoch Very Long Baseline Array (VLBA) radio observations. Enough VLBA detections have now been obtained for 16 stellar systems in Ophiuchus to derive their parallax and proper motion. This leads to distance determinations for individual stars with an accuracy of 0.3 to a few percent. In addition, the orbits of six multiple systems were modelled by combining absolute positions with VLBA (and, in some cases, near-infrared) angular separations. Twelve stellar systems are located in the dark cloud Lynds 1688; the individual distances for this sample are highly consistent with one another and yield a mean parallax for Lynds 1688 of ϖ = 7.28 ± 0.06 mas, corresponding to a distance d = 137.3 ± 1.2 pc. This represents an accuracy of better than 1%. Three systems for which astrometric elements could be measured are located in the eastern streamer (Lynds 1689) and yield an estimate of ϖ = 6.79 ± 0.16 mas, corresponding to a distance d = 147.3 ± 3.4 pc. This suggests that the eastern streamer is located about 10 pc farther than the core, but this conclusion needs to be confirmed by observations of additional sources in the eastern streamer (currently being collected). From the measured proper motions, we estimate the one-dimensional velocity dispersion in Lynds 1688 to be 2.8 ± 1.8 and 3.0 ± 2.0 km s-1 in R.A. and decl., respectively; these are larger than, but still consistent within 1σ of, those found in other studies.
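The parallax-to-distance conversion behind these numbers is a one-liner; plugging in the quoted mean parallax reproduces the published distance to within rounding.

```python
# Distance from trigonometric parallax: d [pc] = 1000 / parallax [mas].
# Values are the Lynds 1688 mean parallax quoted in the abstract.
parallax_mas = 7.28
parallax_err = 0.06

d_pc = 1000.0 / parallax_mas
# First-order error propagation: fractional errors of d and parallax match.
d_err = d_pc * parallax_err / parallax_mas

print(f"d = {d_pc:.1f} ± {d_err:.1f} pc")
```

The small difference from the published 137.3 ± 1.2 pc simply reflects rounding of the parallax to two decimals.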
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schinzel, Frank K.; Petrov, Leonid; Taylor, Gregory B.
The third Fermi Large Area Telescope γ-ray source catalog (3FGL) contains over 1000 objects for which there is no known counterpart at other wavelengths. The physical origin of the γ-ray emission from those objects is unknown. Such objects are commonly referred to as unassociated and mostly do not exhibit significant γ-ray flux variability. We performed a survey of all unassociated γ-ray sources found in 3FGL using the Australia Telescope Compact Array and the Very Large Array in the range 4.0–10.0 GHz. We found 2097 radio candidates for association with γ-ray sources. The follow-up with very long baseline interferometry for a subset of those candidates yielded 142 new associations with active galactic nuclei that are γ-ray sources, provided alternative associations for seven objects, and improved positions for another 144 known associations to the milliarcsecond level of accuracy. In addition, for 245 unassociated γ-ray sources we did not find a single compact radio source above 2 mJy within 3σ of their γ-ray localization. A significant fraction of these empty fields, 39%, are located away from the Galactic plane. We also found 36 extended radio sources that are candidates for association with a corresponding γ-ray object, 19 of which are most likely supernova remnants or H II regions, whereas 17 could be radio galaxies.
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla
2014-05-01
Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stresses and deforms the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the wall of the magmatic bodies. Although advances in space-based geodetic and seismic networks have significantly improved the monitoring of a growing number of volcanoes worldwide in recent decades, quantitative models relating deformation and seismicity are not common. The observation of several episodes of volcanic unrest throughout the world, where the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimate of the parameters of a deformation source. In particular, during these episodes of volcanic unrest a radial pattern of P-axes of the focal mechanism solutions, similar to that of ground deformation, has been observed. Therefore, taking into account additional information from focal mechanism data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the 13 May 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. The statistical tests indicate that the source depth is the parameter with the largest gain in accuracy. In addition, a sensitivity analysis confirms that displacement data are more useful for constraining the pressure and the horizontal location of the source than its depth, while the P-axes better constrain the depth estimate.
Infrasound signals from the underground nuclear explosions of North Korea
NASA Astrophysics Data System (ADS)
Che, Il-Young; Park, Junghyun; Kim, Inho; Kim, Tae Sung; Lee, Hee-Il
2014-07-01
We investigated the infrasound signals from seismic ground motions induced by North Korea's underground nuclear explosions, including the recent third explosion on 2013 February 12. For the third explosion, the epicentral infrasound signals were detected not only by three infrasound network stations (KSGAR, ULDAR and YAGAR) in South Korea but also by two nearby International Monitoring System infrasound stations, IS45 and IS30. Detections were largely limited to stations east of the epicentre, with large azimuth deviations, because atmospheric conditions at stratospheric heights in 2013 strongly favoured eastward propagation. The stratospheric wind direction was the reverse of that when the second explosion was conducted in 2009 May. The source location of the epicentral infrasound, determined from wave parameters at the multiple stations, is offset by about 16.6 km from the reference seismic location. It was possible to determine the infrasonic location with moderate accuracy by correcting the azimuth deviation caused by the eastward winds in the stratosphere. In addition to the epicentral infrasonic signals, diffracted infrasound signals were observed from the second underground nuclear explosion in 2009. The exceptional detectability of the diffracted infrasound was a consequence of the temporal formation of a thin atmospheric inversion layer over the ocean surface when the event occurred.
NASA Astrophysics Data System (ADS)
Kubota, T.; Saito, T.; Suzuki, W.; Hino, R.
2017-12-01
When an earthquake occurs in an offshore region, ocean bottom pressure gauges (OBPs) observe the low-frequency (> 400 s) pressure change due to the tsunami and also the high-frequency (< 200 s) pressure change due to seismic waves (e.g. Filloux 1983; Matsumoto et al. 2012). When the period of the seafloor motion is sufficiently long (> 20 s), the relation between the seafloor dynamic pressure change p and the seafloor vertical acceleration az is approximately given as p = ρ0h0az (ρ0: seawater density, h0: sea depth) (e.g., Bolshakova et al. 2011; Matsumoto et al. 2012; Saito and Tsushima 2016, JGR; Saito 2017, GJI). Based on this relation, OBPs can be used as vertical accelerometers. If OBPs deployed offshore are used as seismometers, station coverage improves and so does the accuracy of earthquake locations. In this study, we analyzed seismograms together with seafloor dynamic pressure change records to estimate the CMTs of the interplate earthquakes that occurred off the coast of Tohoku on 9 March 2011 (Mw 7.3 and 6.5) (Kubota et al. 2017, EPSL), and discussed the estimation accuracy of the centroid horizontal location. When the dynamic pressure change recorded by the OBPs was used in addition to the seismograms, the horizontal location of the CMT was reliably constrained. The centroid was located in the center of the rupture area estimated by the tsunami inversion analysis (Kubota et al. 2017). These CMTs had reverse-fault mechanisms consistent with interplate earthquakes and reproduce the dynamic pressure signals in the OBP records well. Meanwhile, when only the inland seismometers were used, the centroids were estimated to lie outside the rupture area. This study shows that the dynamic pressure changes in OBP records can be used as seismic-wave records, which greatly helps in investigating the source process of offshore earthquakes far from the coast.
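The quoted long-period relation p = ρ0 h0 az is simple enough to evaluate directly; the depth and acceleration below are illustrative values, not data from the study.

```python
# Long-period seafloor pressure change from vertical acceleration:
# p = rho0 * h0 * az (valid for periods of roughly 20 s and longer).
rho0 = 1030.0   # seawater density, kg/m^3
h0 = 5000.0     # sea depth, m (illustrative, deep-ocean scale)
az = 1e-4       # seafloor vertical acceleration, m/s^2 (illustrative)

p = rho0 * h0 * az          # pressure change, Pa
print(f"p = {p:.1f} Pa = {p / 100.0:.2f} hPa")
```

Inverting the same relation, az = p / (ρ0 h0), is what lets a pressure record stand in for a vertical accelerometer at long periods.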
Automated classification and quantitative analysis of arterial and venous vessels in fundus images
NASA Astrophysics Data System (ADS)
Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng
2018-02-01
It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment the vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, the whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. Fifty color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared with ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis based on the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).
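The ODR step can be illustrated with a toy calculation. The form used here (OD = −log10(I_vessel / I_background) per channel, ODR = OD_red / OD_green) follows common practice in A-V classification work, but the intensities, background values, and decision direction below are invented for illustration, not taken from this study.

```python
import math

def optical_density(i_vessel, i_background):
    # OD = -log10(I_vessel / I_background) for one color channel
    return -math.log10(i_vessel / i_background)

def odr(red_v, red_bg, green_v, green_bg):
    # optical density ratio: red-channel OD over green-channel OD
    return optical_density(red_v, red_bg) / optical_density(green_v, green_bg)

# Arteries transmit more red light than veins, so their red-channel OD is
# lower; with a similar green-channel OD that yields a lower ODR (toy values).
artery = odr(200, 230, 90, 180)
vein = odr(140, 230, 90, 180)
print(artery < vein)
```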
AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outliers by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
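As a rough sketch of the workflow (not the paper's actual 3XMM feature set or training data), a Random Forest classifier can be trained and scored on synthetic two-class data with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-ins for per-source features (e.g. variability amplitude,
# hardness ratio); two well-separated classes, 400 sources each.
n = 400
X0 = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
X1 = rng.normal(loc=3.0, scale=1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[::2], y[::2])            # train on half the sample
acc = clf.score(X[1::2], y[1::2])  # accuracy on the held-out half
print(round(acc, 2))
```

In the paper's setting, manually classified 2XMMi-DR2 sources play the role of the labeled training half, and the held-out score corresponds to the quoted ∼92% accuracy.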
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, the hierarchical segmentation approach (HSA) with Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. The method is based on an HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The proposed method was evaluated both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, the brain extraction tool (BET) combined with FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). Source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the variants of the HSA. They also show that it leads to more accurate source localization than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation in the EEG source localization problem.
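The Dice index used for the direct evaluation is straightforward to compute from two binary masks; the tiny masks below are made up purely to show the arithmetic.

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice similarity coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True  # 4 voxels
ref = np.zeros((4, 4), dtype=bool);  ref[1:3, 1:4] = True   # 6 voxels, 4 shared
print(dice_index(auto, ref))  # 2*4 / (4+6) = 0.8
```

A Dice value of 1 means perfect overlap with the reference segmentation; 0 means none.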
NASA Astrophysics Data System (ADS)
Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.
2017-12-01
Instantaneous fusion power is a key parameter of ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging task for neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of the 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach makes it possible to control the integration accuracy as a function of the number of coordinate mesh points and thus to minimize the number of irradiations for a given uncertainty of the full monitor response. In previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses allowed formulating basic calibration principles that are optimal for achieving the maximum accuracy with the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and to use correction factors to the DT-mode calibration for the DD mode. It is reasonable to perform full calibration only with the 235U chambers and to calibrate the 238U chambers against the responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the number of measurements.
It is shown that a monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the standard monitor at neutron fluxes in the detectors four orders of magnitude lower than at the first wall, where the standard detectors are located. Owing to the low background, the detectors of the neutron chambers do not need in-reactor calibration, because their calibration reduces to determining the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
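The appeal of a Gauss quadrature scheme for "expensive" experiments can be shown with a minimal numerical example: an n-node Gauss-Legendre rule integrates any polynomial up to degree 2n−1 exactly, so a smooth response profile needs very few sample points (irradiation positions). The profile below is a toy polynomial, not an ITER detector response.

```python
import numpy as np

def gauss_integrate(f, a, b, n):
    """Integrate f over [a, b] with an n-node Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)      # affine map to [a, b]
    return xr * np.sum(w * f(xm + xr * x))

profile = lambda z: z**4 + 1.0                 # toy degree-4 response profile
val = gauss_integrate(profile, -1.0, 1.0, 3)   # 3 nodes: exact up to degree 5
print(val)  # exact integral is 2/5 + 2 = 2.4
```

Three irradiation positions here reproduce the integral exactly; an equally spaced trapezoidal rule would need far more points for the same accuracy.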
STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F
2015-12-01
Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies. © 2015 American Association for Clinical Chemistry.
Paans, Wolter; Sermeus, Walter; Nieweg, Roos Mb; Krijnen, Wim P; van der Schans, Cees P
2012-08-01
This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse's disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. A randomised factorial design was used in 2008-2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. The use of a predefined record structure resulted in a significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse's age and the reasoning skills of 'deduction' and 'analysis'. Improving nurses' dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves accuracy of nursing diagnoses.
Karnowski, Thomas P [Knoxville, TN; Tobin, Jr., Kenneth W.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN; Chaum, Edward [Memphis, TN
2012-07-10
A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value, and a low confidence level can be assigned if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value is selected to represent an acceptable risk of misdiagnosis, by the automated technique, of a disease having retinal manifestations.
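A minimal sketch of the confidence-assignment logic, assuming the accuracy parameter is simply the distance between the two detected disc centres; the coordinates and cut-off value are illustrative, not taken from the patent.

```python
import math

PRIMARY_RISK_CUTOFF = 25.0  # pixels (assumed value, not from the patent)

def confidence_level(coords_a, coords_b, cutoff=PRIMARY_RISK_CUTOFF):
    """Compare two independent optic-disc detections; agreement -> high confidence."""
    accuracy_param = math.dist(coords_a, coords_b)  # Euclidean distance in pixels
    return "high" if accuracy_param < cutoff else "low"

print(confidence_level((412, 300), (418, 308)))  # detections ~10 px apart
print(confidence_level((412, 300), (520, 360)))  # detections ~123 px apart
```

The intuition: when two dissimilar techniques agree on the disc location, the result is likely correct; when they disagree, the image is flagged for manual review.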
High Order Numerical Methods for LES of Turbulent Flows with Shocks
NASA Technical Reports Server (NTRS)
Kotov, D. V.; Yee, H. C.; Hadjadj, A.; Wray, A.; Sjögreen, B.
2014-01-01
Simulation of turbulent flows with shocks employing explicit subgrid-scale (SGS) filtering may encounter a loss of accuracy in the vicinity of a shock. In this work we perform a comparative study of different approaches to reduce this loss of accuracy within the framework of the dynamic Germano SGS model. One of the possible approaches is to apply Harten's subcell resolution procedure to locate and sharpen the shock, and to use a one-sided test filter at the grid points adjacent to the exact shock location. The other considered approach is local disabling of the SGS terms in the vicinity of the shock location. In this study we use a canonical shock-turbulence interaction problem for comparison of the considered modifications of the SGS filtering procedure. For the considered test case both approaches show a similar improvement in the accuracy near the shock.
Investigation of practical and theoretical accuracy of wireless indoor-positioning system UBISENSE
NASA Astrophysics Data System (ADS)
Wozniak, Marek; Odziemczyk, Waldemar; Nagorski, Kamil
2013-04-01
Real-time locating systems have become an important add-on to many existing location-aware systems. While global navigation satellite systems have solved most outdoor positioning problems, they fail to repeat this success indoors. Wireless indoor positioning systems have therefore become very popular in recent years. One of them is the UBISENSE system, which requires the user to carry an identity tag that is detected by sensors; the sensors typically use triangulation to determine location. This paper presents the results of an investigation of the accuracy of tag positioning, using precise geodetic measurements and geometric analysis. Experimental measurements were carried out on a field test network using a precise TCRP 1201+ tacheometer and a complete Ubisense installation. The results of the experimental measurements were analyzed and presented graphically using Surfer 8. The paper presents the theoretical and practical positioning accuracy under various working conditions.
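The triangulation step mentioned above can be sketched as a linearized least-squares trilateration from tag-to-sensor ranges. Subtracting the first range equation from the others eliminates the quadratic unknown terms, leaving a linear system. The sensor layout and error-free ranges below are synthetic, not Ubisense data.

```python
import numpy as np

def trilaterate(sensors, ranges):
    """Estimate a 2D tag position from known sensor positions and ranges."""
    s = np.asarray(sensors, float)
    r = np.asarray(ranges, float)
    # ||x - s_i||^2 = r_i^2 minus the i=0 equation gives 2 x.(s_i - s_0) = b_i
    A = 2.0 * (s[1:] - s[0])
    b = (r[0]**2 - r[1:]**2) + np.sum(s[1:]**2, axis=1) - np.sum(s[0]**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
tag = np.array([3.0, 4.0])
ranges = [np.linalg.norm(tag - np.array(s)) for s in sensors]
est = trilaterate(sensors, ranges)
print(np.round(est, 6))
```

With noisy ranges the overdetermined least-squares fit averages out part of the measurement error, which is why four sensors are better than the geometric minimum of three.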
Analysis of the accuracy and readability of herbal supplement information on Wikipedia.
Phillips, Jennifer; Lam, Connie; Palmisano, Lisa
2014-01-01
To determine the completeness and readability of information found in Wikipedia for leading dietary supplements and assess the accuracy of this information with regard to safety (including use during pregnancy/lactation), contraindications, drug interactions, therapeutic uses, and dosing. Cross-sectional analysis of Wikipedia articles. The contents of Wikipedia articles for the 19 top-selling herbal supplements were retrieved on July 24, 2012, and evaluated for organization, content, accuracy (as compared with information in two leading dietary supplement references) and readability. Accuracy of Wikipedia articles. No consistency was noted in how much information was included in each Wikipedia article, how the information was organized, what major categories were used, and where safety and therapeutic information was located in the article. All articles in Wikipedia contained information on therapeutic uses and adverse effects but several lacked information on drug interactions, pregnancy, and contraindications. Wikipedia articles had 26%-75% of therapeutic uses and 76%-100% of adverse effects listed in the Natural Medicines Comprehensive Database and/or Natural Standard. Overall, articles were written at a 13.5-grade level, and all were at a ninth-grade level or above. Articles in Wikipedia in mid-2012 for the 19 top-selling herbal supplements were frequently incomplete, of variable quality, and sometimes inconsistent with reputable sources of information on these products. Safety information was particularly inconsistent among the articles. Patients and health professionals should not rely solely on Wikipedia for information on these herbal supplements when treatment decisions are being made.
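Grade-level figures like the 13.5 reported above typically come from formulas such as the Flesch-Kincaid grade level. The study does not say which formula was used, so the sketch below is only a plausible stand-in, and its syllable counter is a crude vowel-group heuristic, so outputs are approximate.

```python
import re

def count_syllables(word):
    # crude heuristic: one syllable per run of vowels, minimum one
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

simple = "The cat sat. The dog ran."
print(round(fk_grade(simple), 2))  # very short monosyllabic text scores below grade 0
```

Text written at a ninth-grade level or above, as found for all the articles here, excludes a large share of readers; patient-education guidance commonly targets a sixth- to eighth-grade level.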
Deep Learning to Classify Radiology Free-Text Reports.
Chen, Matthew C; Ball, Robyn L; Yang, Lingyao; Moradzadeh, Nathaniel; Chapman, Brian E; Larson, David B; Langlotz, Curtis P; Amrhein, Timothy J; Lungren, Matthew P
2018-03-01
Purpose To evaluate the performance of a deep learning convolutional neural network (CNN) model compared with a traditional natural language processing (NLP) model in extracting pulmonary embolism (PE) findings from thoracic computed tomography (CT) reports from two institutions. Materials and Methods Contrast material-enhanced CT examinations of the chest performed between January 1, 1998, and January 1, 2016, were selected. Annotations by two human radiologists were made for three categories: the presence, chronicity, and location of PE. The classification performance of a CNN model, which used an unsupervised learning algorithm to obtain vector representations of words, was compared with that of the open-source application PeFinder. Sensitivity, specificity, accuracy, and F1 scores for both the CNN model and PeFinder in the internal and external validation sets were determined. Results The CNN model demonstrated an accuracy of 99% and an area under the curve value of 0.97. For internal validation report data, the CNN model had a statistically significantly larger F1 score (0.938) than did PeFinder (0.867) when classifying findings as either PE positive or PE negative, but no significant difference in sensitivity, specificity, or accuracy was found. For external validation report data, no statistical difference between the performance of the CNN model and PeFinder was found. Conclusion A deep learning CNN model can classify radiology free-text reports with accuracy equivalent to or beyond that of an existing traditional NLP model. © RSNA, 2017 Online supplemental material is available for this article.
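The four metrics compared above all derive from a binary confusion matrix; the counts below are illustrative, not the study's data.

```python
def metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # a.k.a. recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

sens, spec, acc, f1 = metrics(tp=90, fp=10, fn=10, tn=90)
print(f1)  # precision = recall = 0.9, so F1 = 0.9
```

F1 is the harmonic mean of precision and recall, which is why two models can differ significantly in F1 (as the CNN and PeFinder did internally) while showing no significant difference in sensitivity, specificity, or accuracy taken individually.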
Cover estimation and payload location using Markov random fields
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2014-02-01
Payload location is an approach to finding the message bits hidden in steganographic images, though not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random fields to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
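The LSB setting can be made concrete with an idealized demo: embed bits in the LSBs of selected pixels, then locate them by differencing against the cover. A real attacker must estimate the cover (which is exactly what the paper's Markov-random-field model does); here the true cover is used, so location is perfect by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy cover image

payload_idx = rng.choice(64, size=16, replace=False)        # embedding positions
bits = rng.integers(0, 2, size=16, dtype=np.uint8)          # message bits

stego = cover.copy().ravel()
stego[payload_idx] = (stego[payload_idx] & 0xFE) | bits     # overwrite LSBs
stego = stego.reshape(cover.shape)

# With the true cover, changed pixels pinpoint (a subset of) the payload:
# pixels whose existing LSB already matched the message bit are unchanged.
changed = np.flatnonzero(cover.ravel() != stego.ravel())
print(set(changed.tolist()) <= set(payload_idx.tolist()))
```

Replacing `cover` with an imperfect estimate turns the exact difference map into a noisy one, which is why combining multiple cover estimators improves location accuracy.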
Ex vivo accuracy of an apex locator using digital signal processing in primary teeth.
Leonardo, Mário Roberto; da Silva, Lea Assed Bezerra; Nelson-Filho, Paulo; da Silva, Raquel Assed Bezerra; Lucisano, Marília Pacífico
2009-01-01
The purpose of this study was to evaluate ex vivo the accuracy of an electronic apex locator during root canal length determination in primary molars. One calibrated examiner determined the root canal length in 15 primary molars (total = 34 root canals) with different stages of root resorption. Root canal length was measured both visually, with the placement of a K-file 1 mm short of the apical foramen or the apical resorption bevel, and electronically, using an electronic apex locator (Digital Signal Processing). Data were analyzed statistically using the intraclass correlation (ICC) test. Comparison of the actual and electronic root canal length measurements in the primary teeth showed a high correlation (ICC = 0.95). The Digital Signal Processing apex locator is useful and accurate for apical foramen location during root canal length measurement in primary molars.
Schmidt, Robert L; Walker, Brandon S; Cohen, Michael B
2015-03-01
Reliable estimates of accuracy are important for any diagnostic test. Diagnostic accuracy studies are subject to unique sources of bias. Verification bias and classification bias are 2 sources of bias that commonly occur in diagnostic accuracy studies. Statistical methods are available to estimate the impact of these sources of bias when they occur alone. The impact of interactions when these types of bias occur together has not been investigated. We developed mathematical relationships to show the combined effect of verification bias and classification bias. A wide range of case scenarios were generated to assess the impact of bias components and interactions on total bias. Interactions between verification bias and classification bias caused overestimation of sensitivity and underestimation of specificity. Interactions had more effect on sensitivity than specificity. Sensitivity was overestimated by at least 7% in approximately 6% of the tested scenarios. Specificity was underestimated by at least 7% in less than 0.1% of the scenarios. Interactions between verification bias and classification bias create distortions in accuracy estimates that are greater than would be predicted from each source of bias acting independently. © 2014 American Cancer Society.
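The verification-bias half of the story is easy to simulate: when test-positive subjects are verified by the gold standard more often than test-negative ones, the naive sensitivity estimate is inflated. Prevalence, test characteristics, and verification rates below are invented for illustration, and classification bias is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
disease = rng.random(n) < 0.10                 # 10% prevalence (assumed)
true_sens, true_spec = 0.80, 0.90              # true test characteristics (assumed)
test_pos = np.where(disease,
                    rng.random(n) < true_sens,     # diseased: positive w.p. sens
                    rng.random(n) > true_spec)     # healthy: positive w.p. 1-spec

# Partial verification: test-positives verified 95% of the time, negatives 20%.
verified = np.where(test_pos, rng.random(n) < 0.95, rng.random(n) < 0.20)

# Naive sensitivity computed only among verified subjects.
naive_sens = (test_pos & disease & verified).sum() / (disease & verified).sum()
print(naive_sens > true_sens)
```

Analytically the naive estimate here converges to 0.8·0.95 / (0.8·0.95 + 0.2·0.20) ≈ 0.95, far above the true 0.80, matching the direction of bias described in the abstract.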
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to the passive infrared and ultrasonic motion sensors traditionally used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy- and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
An Autonomous Distributed Fault-Tolerant Local Positioning System
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
We describe a fault-tolerant, GPS-independent (Global Positioning System), distributed, autonomous positioning system for static and mobile objects and present solutions for providing highly accurate geo-location data for those objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depend on two factors: its timeliness in broadcasting signals and the knowledge of its geometry, i.e., the locations of and distances between the beacons. Existing distributed positioning systems either synchronize to a common external source such as GPS or establish their own time synchrony with a master-slave scheme, designating a particular beacon as the master to which the other beacons synchronize, resulting in a single point of failure. Another drawback of existing positioning systems is that they do not address various fault manifestations, in particular communication link failures, which, as in wireless networks, increasingly dominate process failures and are typically transient and mobile, in the sense that they affect different messages to/from different processes over time.
A Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization
NASA Astrophysics Data System (ADS)
Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.
2017-07-01
Feature detection and matching are key techniques in computer vision and robotics, and have been successfully applied in many fields. So far, however, there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in planetary surface environments. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, were developed in the research. In order to perform an exhaustive evaluation, stereo images simulated under different baselines, pitch angles, and intervals between adjacent rover locations were used as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
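Of the detectors compared, Harris is compact enough to sketch from scratch. This minimal NumPy version uses a 3x3 box window instead of the usual Gaussian and the common k ≈ 0.05; on a synthetic white square, the response peaks at the square's corners.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    img = img.astype(float)
    iy, ix = np.gradient(img)                      # image gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    # 3x3 box window in place of the usual Gaussian, for brevity
    box = lambda a: sum(np.roll(np.roll(a, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)   # structure tensor sums
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                              # bright square on dark field
r = harris_response(img)
peak = np.unravel_index(np.argmax(r), r.shape)
print(peak)  # lands at one of the square's four corners
```

Edges produce a large trace but near-zero determinant (negative response), flat regions produce neither, so only corners score highly: the property that makes Harris-like detectors useful for matching.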
Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt
2017-11-01
When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute.
A Learning-Based Approach for IP Geolocation
NASA Astrophysics Data System (ADS)
Eriksson, Brian; Barford, Paul; Sommers, Joel; Nowak, Robert
The ability to pinpoint the geographic location of IP hosts is compelling for applications such as on-line advertising and network attack diagnosis. While prior methods can accurately identify the location of hosts in some regions of the Internet, they produce erroneous results when the delay or topology measurement on which they are based is limited. The hypothesis of our work is that the accuracy of IP geolocation can be improved through the creation of a flexible analytic framework that accommodates different types of geolocation information. In this paper, we describe a new framework for IP geolocation that reduces the task to a machine-learning classification problem. Our methodology considers a set of lightweight measurements from a set of known monitors to a target, and then classifies the location of that target based on the most probable geographic region given probability densities learned from a training set. For this study, we employ a Naive Bayes framework that has low computational complexity and enables additional environmental information to be easily added to enhance the classification process. To demonstrate the feasibility and accuracy of our approach, we test IP geolocation on over 16,000 routers given ping measurements from 78 monitors with known geographic placement. Our results show that the simple application of our method improves geolocation accuracy for over 96% of the nodes identified in our data set, with estimates on average 70 miles closer to the true geographic location than prior constraint-based geolocation. These results highlight the promise of our method and indicate how future expansion of the classifier can lead to further improvements in geolocation accuracy.
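The classification framing can be sketched with a hand-rolled Gaussian Naive Bayes: each target is represented by delay measurements from a few monitors, and the classifier picks the region whose learned per-monitor densities make the measurements most probable. The regions, monitor delays, and noise model below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_delays(center, n):
    # mean delay per monitor depends on the region; add measurement noise
    return center + rng.normal(0.0, 2.0, size=(n, len(center)))

# mean ping delays (ms) from three monitors, per region (invented values)
regions = {"east": np.array([10.0, 40.0, 70.0]),
           "west": np.array([70.0, 40.0, 10.0])}
train = {name: make_delays(mu, 200) for name, mu in regions.items()}

# fit per-region Gaussian parameters; "naive" = monitors treated independently
params = {name: (d.mean(axis=0), d.std(axis=0)) for name, d in train.items()}

def classify(x):
    def log_lik(mu, sd):
        return np.sum(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd))
    return max(params, key=lambda name: log_lik(*params[name]))

print(classify(np.array([11.0, 39.0, 71.0])))  # delays match the "east" profile
```

Because the model only multiplies per-feature likelihoods, extra evidence (population density, hop counts) can be folded in as additional features, which is the flexibility the paper emphasizes.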
Modeling Finite Faults Using the Adjoint Wave Field
NASA Astrophysics Data System (ADS)
Hjörleifsdóttir, V.; Liu, Q.; Tromp, J.
2004-12-01
Time-reversal acoustics, a technique in which an acoustic signal is recorded by an array of transducers, time-reversed, and retransmitted, is used, e.g., in medical therapy to locate and destroy gallstones (for a review see Fink, 1997). As discussed by Tromp et al. (2004), time-reversal techniques for locating sources are closely linked to so-called 'adjoint methods' (Talagrand and Courtier, 1987), which may be used to evaluate the gradient of a misfit function. Tromp et al. (2004) illustrate how a (finite) source inversion may be implemented based upon the adjoint wave field by writing the change in the misfit function, δχ, due to a change in the moment-density tensor, δm, as an integral of the adjoint strain field ε†(x, t) over the fault plane Σ: δχ = ∫₀^T ∫_Σ ε†(x, T−t) : δm(x, t) d²x dt. We find that if the real fault plane is located at a distance δh in the direction of the fault normal n̂, then to first order an additional term ∫₀^T ∫_Σ δh(x) ∂ₙε†(x, T−t) : m(x, t) d²x dt is added to the change in the misfit function. The adjoint strain is computed by using the time-reversed differences between data and synthetics recorded at all receivers as simultaneous sources and recording the resulting strain on the fault plane. In accordance with time-reversal acoustics, all the resulting waves constructively interfere at the position of the original source in space and time. The level of convergence is determined by factors such as the source-receiver geometry, the frequency content of the recorded data and synthetics, and the accuracy of the velocity structure used when back-propagating the wave field. The terms ε†(x, T−t) and ∂ₙε†(x, T−t) : m(x, t) can be viewed as sensitivity kernels for the moment density and the fault-plane location, respectively. By examining these quantities we can make an educated choice of fault parametrization given the data at hand. The process can then be repeated to invert for the best source model, as demonstrated by Tromp et al. 
(2004) for the magnitude of a point force. In this presentation we explore the applicability of adjoint methods to estimating finite source parameters. Fink, M. (1997), Time reversed acoustics, Physics Today, 50(3), 34-40. Talagrand, O., and P. Courtier (1987), Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory, Q. J. R. Meteorol. Soc., 113, 1311-1328. Tromp, J., C. Tape, and Q. Liu (2004), Waveform tomography, adjoint methods, time reversal, and banana-doughnut kernels, Geophys. Jour. Int., in press.
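The constructive-interference idea behind time-reversal localization can be sketched numerically. This 1-D toy (invented geometry and wave speed, not the seismic implementation discussed above) back-propagates time-reversed receiver records and finds where they focus:

```python
import numpy as np

# Toy 1-D time-reversal localization (all numbers invented): a source emits
# a pulse, receivers record it, and back-propagating the records makes the
# contributions interfere constructively only at the true source position.
c = 1500.0                          # assumed wave speed, m/s
fs = 10_000.0                       # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
src_x = 730.0                       # true source position, m
recv_x = np.array([0.0, 250.0, 500.0, 1000.0])

def pulse(tau):
    """Gaussian pulse centred at time tau."""
    return np.exp(-(((t - tau) / 0.002) ** 2))

# Each receiver records the pulse delayed by its travel time.
records = [pulse(abs(src_x - xr) / c) for xr in recv_x]

# Back-propagate: evaluate each record at the travel time from every
# candidate position and stack; the stack peaks at the source.
candidates = np.arange(0.0, 1000.0, 1.0)
stack = np.zeros_like(candidates)
for rec, xr in zip(records, recv_x):
    idx = np.clip((np.abs(candidates - xr) / c * fs).astype(int), 0, t.size - 1)
    stack += rec[idx]

estimate = candidates[np.argmax(stack)]   # focuses at src_x
```

A single receiver cannot distinguish the source from its mirror image; stacking several receivers at different positions leaves only the true source as the common focus.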
Elgethun, Kai; Fenske, Richard A; Yost, Michael G; Palcisko, Gary J
2003-01-01
Global positioning system (GPS) technology is used widely for business and leisure activities and offers promise for human time-location studies to evaluate potential exposure to environmental contaminants. In this article we describe the development of a novel GPS instrument suitable for tracking the movements of young children. Eleven children in the Seattle area (2-8 years old) wore custom-designed data-logging GPS units integrated into clothing. Location data were transferred into geographic information systems software for map overlay, visualization, and tabular analysis. Data were grouped into five location categories (in vehicle, inside house, inside school, inside business, and outside) to determine time spent and percentage reception in each location. Additional experiments focused on spatial resolution, reception efficiency in typical environments, and sources of signal interference. Significant signal interference occurred only inside concrete/steel-frame buildings and inside a power substation. The GPS instruments provided adequate spatial resolution (typically about 2-3 m outdoors and 4-5 m indoors) to locate subjects within distinct microenvironments and distinguish a variety of human activities. Reception experiments showed that location could be tracked outside, proximal to buildings, and inside some buildings. Specific location information could identify movement in a single room inside a home, on a playground, or along a fence line. The instrument, worn in a vest or in bib overalls, was accepted by children and parents. Durability of the wiring was improved early in the study to correct breakage problems. The use of GPS technology offers a new level of accuracy for direct quantification of time-location activity patterns in exposure assessment studies. PMID:12515689
Shrem, Talia; Murray, Micah M; Deouell, Leon Y
2017-11-01
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.
Hiding the Source Based on Limited Flooding for Sensor Networks.
Chen, Juan; Lin, Zhengkui; Hu, Ying; Wang, Bailing
2015-11-17
Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of our SLP and SLP-E are validated by both theoretical analysis and simulation results.
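The limited-flooding idea can be sketched on a toy grid network: flood for exactly H hops, then draw the phantom from the flooding frontier, so every phantom is a full H hops from the source and phantoms are spread in all directions. This is a simplified illustration, not the SLP protocol itself:

```python
import random
from collections import deque

# Toy sketch of limited flooding for phantom-source selection: BFS the
# packet outward for exactly `hops` hops over a 4-neighbour sensor grid,
# then pick the phantom uniformly from the resulting frontier.
def flood_frontier(grid_w, grid_h, source, hops):
    """Return the nodes exactly `hops` grid-hops from `source`."""
    dist = {source: 0}
    frontier = []
    q = deque([source])
    while q:
        x, y = q.popleft()
        d = dist[(x, y)]
        if d == hops:
            frontier.append((x, y))     # stop flooding at the hop limit
            continue
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_w and 0 <= ny < grid_h and (nx, ny) not in dist:
                dist[(nx, ny)] = d + 1
                q.append((nx, ny))
    return frontier

random.seed(1)
src = (20, 20)
frontier = flood_frontier(41, 41, src, hops=10)
phantom = random.choice(frontier)       # always 10 hops from the source
```

Because the phantom is drawn from the whole frontier rather than from a short random walk, successive phantoms land in widely separated directions, which is the property SLP exploits.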
Wald, D.J.; Graves, R.W.
2001-01-01
Using numerical tests for a prescribed heterogeneous earthquake slip distribution, we examine the importance of accurate Green's functions (GF) for finite fault source inversions which rely on coseismic GPS displacements and leveling line uplift alone and in combination with near-source strong ground motions. The static displacements, while sensitive to the three-dimensional (3-D) structure, are less so than seismic waveforms and thus are an important contribution, particularly when used in conjunction with waveform inversions. For numerical tests of an earthquake source and data distribution modeled after the 1994 Northridge earthquake, a joint geodetic and seismic inversion allows for reasonable recovery of the heterogeneous slip distribution on the fault. In contrast, inaccurate 3-D GFs or multiple 1-D GFs allow only partial recovery of the slip distribution given strong motion data alone. Likewise, using just the GPS and leveling line data requires significant smoothing for inversion stability, and hence, only a blurred vision of the prescribed slip is recovered. Although the half-space approximation for computing the surface static deformation field is no longer justifiable based on the high level of accuracy for current GPS data acquisition and the computed differences between 3-D and half-space surface displacements, a layered 1-D approximation to 3-D Earth structure provides adequate representation of the surface displacement field. However, even with the half-space approximation, geodetic data can provide additional slip resolution in the joint seismic and geodetic inversion provided a priori fault location and geometry are correct. Nevertheless, the sensitivity of the static displacements to the Earth structure begs caution for interpretation of surface displacements, particularly those recorded at monuments located in or near basin environments. Copyright 2001 by the American Geophysical Union.
Discretizing singular point sources in hyperbolic wave propagation problems
Petersson, N. Anders; O'Reilly, Ossian; Sjogreen, Bjorn; ...
2016-06-01
Here, we develop high order accurate source discretizations for hyperbolic wave propagation problems in first order formulation that are discretized by finite difference schemes. By studying the Fourier series expansions of the source discretization and the finite difference operator, we derive sufficient conditions for achieving design accuracy in the numerical solution. Only half of the conditions in Fourier space can be satisfied through moment conditions on the source discretization, and we develop smoothness conditions for satisfying the remaining accuracy conditions. The resulting source discretization has compact support in physical space, and is spread over as many grid points as the number of moment and smoothness conditions. In numerical experiments we demonstrate high order of accuracy in the numerical solution of the 1-D advection equation (both in the interior and near a boundary), the 3-D elastic wave equation, and the 3-D linearized Euler equations.
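The moment conditions mentioned above can be illustrated for a 1-D point source: choose weights on nearby grid nodes so that the discrete source reproduces the first p moments of the delta function. This simplified sketch omits the paper's smoothness conditions, and the grid spacing and source position are assumed values:

```python
import numpy as np

# Moment conditions for discretizing delta(x - xs) on a grid: pick weights
# w_i on nodes x_i so that sum_i w_i * x_i**k = xs**k for k = 0..p-1.
# (Scaling the weights by 1/h turns them into a grid delta function.)
h = 0.1                              # grid spacing (assumed)
xs = 0.234                           # source location between nodes
p = 4                                # number of moment conditions
nodes = h * np.arange(p)             # nodes 0.0, 0.1, 0.2, 0.3
V = np.vander(nodes, p, increasing=True).T   # V[k, i] = x_i**k
rhs = xs ** np.arange(p)
w = np.linalg.solve(V, rhs)          # weights of the discretized source

moments = V @ w                      # equals rhs up to round-off
```

With p conditions the source is spread over p nodes, matching the abstract's statement that the stencil width equals the number of moment (plus smoothness) conditions.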
On the Use of Human Mobility Proxies for Modeling Epidemics
Tizzoni, Michele; Bajardi, Paolo; Decuyper, Adeline; Kon Kam King, Guillaume; Schneider, Christian M.; Blondel, Vincent; Smoreda, Zbigniew; González, Marta C.; Colizza, Vittoria
2014-01-01
Human mobility is a key component of large-scale spatial-transmission models of infectious diseases. Correctly modeling and quantifying human mobility is critical for improving epidemic control, but may be hindered by data incompleteness or unavailability. Here we explore the opportunity of using proxies for individual mobility to describe commuting flows and predict the diffusion of an influenza-like-illness epidemic. We consider three European countries and the corresponding commuting networks at different resolution scales, obtained from (i) official census surveys, (ii) proxy mobility data extracted from mobile phone call records, and (iii) the radiation model calibrated with census data. Metapopulation models defined on these countries and integrating the different mobility layers are compared in terms of epidemic observables. We show that commuting networks from mobile phone data capture the empirical commuting patterns well, accounting for more than 87% of the total fluxes. The distributions of commuting fluxes per link from mobile phones and census sources are similar and highly correlated; however, a systematic overestimation of commuting traffic in the mobile phone data is observed. Once the mobile phone commuting network is used in the epidemic model, this leads to epidemics that spread faster than on census commuting networks, while preserving to a high degree the order of infection of newly affected locations. Proxies' calibration affects the arrival times' agreement across different models, and the observed topological and traffic discrepancies among mobility sources alter the resulting epidemic invasion patterns. Results also suggest that proxies perform differently in approximating commuting patterns for disease spread at different resolution scales, with the radiation model showing higher accuracy than mobile phone data when the seed is central in the network, the opposite being observed for peripheral locations. 
Proxies should therefore be chosen in light of the desired accuracy for the epidemic situation under study. PMID:25010676
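The radiation model used as mobility proxy (iii) predicts commuting fluxes from populations and distances alone. A minimal sketch of its standard form (Simini et al.), with invented toy data and variable names:

```python
import math

# Radiation model for commuting fluxes:
#   T_ij = O_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
# where m_i, n_j are the populations of origin i and destination j, O_i is
# the number of commuters leaving i, and s_ij is the population inside the
# circle of radius d(i, j) centred on i, excluding i and j themselves.
locations = {                 # name: (x, y, population) -- invented data
    "A": (0.0, 0.0, 10_000),
    "B": (1.0, 0.0, 5_000),
    "C": (0.5, 0.4, 2_000),
    "D": (3.0, 0.0, 8_000),
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def radiation_flux(i, j, commuters_out):
    xi, yi, m = locations[i]
    xj, yj, n = locations[j]
    r = dist((xi, yi), (xj, yj))
    s = sum(pop for name, (x, y, pop) in locations.items()
            if name not in (i, j) and dist((x, y), (xi, yi)) <= r)
    return commuters_out * m * n / ((m + s) * (m + n + s))

flux_ab = radiation_flux("A", "B", commuters_out=1_000)
```

Unlike gravity models, the radiation model has no fitted exponents, which is why it can be "calibrated with census data" using only the outgoing-commuter counts O_i.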
MapSentinel: Can the Knowledge of Space Use Improve Indoor Tracking Further?
Jia, Ruoxi; Jin, Ming; Zou, Han; Yesilata, Yigitcan; Xie, Lihua; Spanos, Costas
2016-01-01
Estimating an occupant’s location is arguably the most fundamental sensing task in smart buildings. The applications for fine-grained, responsive building operations require the location sensing systems to provide location estimates in real time, also known as indoor tracking. Existing indoor tracking systems require occupants to carry specialized devices or install programs on their smartphone to collect inertial sensing data. In this paper, we propose MapSentinel, which performs non-intrusive location sensing based on WiFi access points and ultrasonic sensors. MapSentinel combines the noisy sensor readings with the floormap information to estimate locations. One key observation supporting our work is that occupants exhibit distinctive motion characteristics at different locations on the floormap, e.g., constrained motion along the corridor or in the cubicle zones, and free movement in the open space. While extensive research has been performed on using a floormap as a tool to obtain correct walking trajectories without wall-crossings, there have been few attempts to incorporate the knowledge of space use available from the floormap into the location estimation. This paper argues that the knowledge of space use as an additional information source presents new opportunities for indoor tracking. The fusion of heterogeneous information is theoretically formulated within the Factor Graph framework, and the Context-Augmented Particle Filtering algorithm is developed to efficiently solve real-time walking trajectories. Our evaluation in a large office space shows that the MapSentinel can achieve accuracy improvement of 31.3% compared with the purely WiFi-based tracking system. PMID:27049387
Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.
Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun
2018-05-08
Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.
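The degree-of-freedom advantage of the coprime array cited above comes from its difference coarray: interleaving two sparse subarrays yields many more distinct correlation lags than physical sensors. A small self-contained check, with the coprime pair (3, 5) chosen purely for illustration:

```python
# Coprime array geometry for coprime (M, N): M sensors spaced N units plus
# N sensors spaced M units (sharing the origin). The difference coarray
# supplies the distinct lags usable after covariance vectorization, giving
# O(M*N) degrees of freedom from only M + N - 1 physical sensors.
M, N = 3, 5
positions = sorted({n * N for n in range(M)} | {m * M for m in range(N)})
lags = sorted({a - b for a in positions for b in positions})

n_sensors = len(positions)   # physical elements
n_lags = len(lags)           # distinct coarray lags (virtual aperture)
```

Here 7 physical sensors produce 21 distinct lags, which is why vectorizing the covariance matrix (followed by spatial smoothing to restore its rank) lets subspace methods resolve more sources than sensors.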
NASA Technical Reports Server (NTRS)
Bowen, Howard S.; Cunningham, Douglas M.
2007-01-01
The contents include: 1) Brief history of related events; 2) Overview of original method used to establish absolute radiometric accuracy of remote sensing instruments using stellar sources; and 3) Considerations to improve the stellar calibration approach.
Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M
2005-01-01
The accuracy of ribotyping and antibiotic resistance analysis (ARA) for prediction of sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657) were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct category. Addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most microbial source tracking (MST) studies published have demonstrated library accuracy solely by the internal ARCC measurement. Low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing of bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability. 
The library-based MST methods used in this study may not be suited for determination of the source(s) of faecal pollution in large, urban watersheds.
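The average rate of correct classification (ARCC) used above as the internal-accuracy measure is the mean, over host-source categories, of the fraction of that category's isolates assigned back to it. A minimal sketch with invented counts:

```python
# ARCC from a confusion table of library isolates: for each true source
# category, compute the fraction classified correctly, then average the
# per-category rates (so categories of different sizes weigh equally).
confusion = {                      # true source -> {predicted source: count}
    "human": {"human": 40, "dog": 5,  "gull": 5},
    "dog":   {"human": 10, "dog": 25, "gull": 15},
    "gull":  {"human": 8,  "dog": 2,  "gull": 40},
}

def arcc(confusion):
    rates = []
    for true_src, preds in confusion.items():
        total = sum(preds.values())
        rates.append(preds.get(true_src, 0) / total)
    return sum(rates) / len(confusion)

score = arcc(confusion)            # (40/50 + 25/50 + 40/50) / 3
```

As the abstract stresses, a high ARCC only measures self-consistency of the library; predictive ability must be checked on isolates outside it, such as the blinded proficiency panel.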
Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi
2018-01-01
To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.
Enhancements to the Bayesian Infrasound Source Location Method
2012-09-01
Omar E. Marcillo, Stephen J. Arrowsmith, Rod W. Whitaker, and Dale N. Anderson. We report on R&D that is enabling enhancements to the Bayesian Infrasound Source Location (BISL) method for infrasound event location.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wehrschuetz, M., E-mail: martin.wehrschuetz@klinikum-graz.at; Aschauer, M.; Portugaller, H.
The purpose of this study was to assess interobserver variability and accuracy in the evaluation of renal artery stenosis (RAS) with gadolinium-enhanced MR angiography (MRA) and digital subtraction angiography (DSA) in patients with hypertension. The authors found that source images are more accurate than maximum intensity projection (MIP) for depicting renal artery stenosis. Two independent radiologists reviewed MRA and DSA from 38 patients with hypertension. Studies were postprocessed to display images in MIP and source images. DSA was the standard for comparison in each patient. For each main renal artery, percentage stenosis was estimated for any stenosis detected by the two radiologists. To calculate sensitivity, specificity and accuracy, MRA studies and stenoses were categorized as normal, mild (1-39%), moderate (40-69%), severe (≥70%), or occluded. DSA stenosis estimates of 70% or greater were considered hemodynamically significant. Analysis of variance demonstrated that MIP estimates of stenosis were greater than source image estimates for both readers. Differences in estimates for MIP versus DSA reached significance in one reader. The interobserver agreement for MIP, source images and DSA was excellent (0.80 < κ ≤ 0.90). The specificity of source images was high (97%) but lower for MIP (87%); average accuracy was 92% for MIP and 98% for source images. In this study, source images were significantly more accurate than MIP images for one reader, with a similar trend observed in the second reader. The interobserver agreement was excellent. When renal artery stenosis is a consideration, high accuracy can only be obtained when source images are examined.
Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration
NASA Technical Reports Server (NTRS)
Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali
2007-01-01
We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive of using a random sequence is to solve real world problems, it is more desirable if we compare the quality of the sequences based on their performances for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.
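The use of consecutive digit blocks as a random source can be sketched as follows. The spigot digit generator and the test integral ∫₀¹ x² dx = 1/3 are illustrative choices, not the paper's setup:

```python
def pi_digits(n):
    """First n decimal digits of pi via the unbounded spigot algorithm."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < n:
        if 4 * q + r - t < m * t:
            out.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return out

# Consecutive 4-digit blocks of pi's fractional part -> samples in [0, 1).
digits = pi_digits(1001)[1:]             # drop the leading 3
samples = [int("".join(map(str, digits[i:i + 4]))) / 10_000.0
           for i in range(0, 1000, 4)]   # 250 samples

# Monte Carlo estimate of the integral of x**2 over [0, 1] (exact: 1/3).
estimate = sum(u * u for u in samples) / len(samples)
```

The quality of the digit source is then judged, as in the abstract, by how close such estimates come to the exact value in a statistical sense rather than by formal randomness tests.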
NASA Astrophysics Data System (ADS)
Wu, Binlin
New near-infrared (NIR) diffuse optical tomography (DOT) approaches were developed to detect, locate, and image small targets embedded in highly scattering turbid media. The first approach, referred to as time reversal optical tomography (TROT), is based on time reversal (TR) imaging and multiple signal classification (MUSIC). The second approach uses the decomposition methods of non-negative matrix factorization (NMF) and principal component analysis (PCA) commonly used in blind source separation (BSS) problems, and compares the outcomes with those of optical imaging using independent component analysis (OPTICA). The goal is to develop a safe, affordable, noninvasive imaging modality for detection and characterization of breast tumors in early growth stages when they are more amenable to treatment. The efficacy of the approaches was tested using simulated data and experiments involving model media and absorptive, scattering, and fluorescent targets, as well as a "realistic human breast model" composed of ex vivo breast tissues with embedded tumors. The experimental arrangements realized continuous wave (CW) multi-source probing of samples and multi-detector acquisition of the diffusely transmitted signal in rectangular slab geometry. A data matrix was generated using the perturbation in the transmitted light intensity distribution due to the presence of absorptive or scattering targets. For fluorescent targets the data matrix was generated using the diffusely transmitted fluorescence signal distribution from the targets. The data matrix was analyzed using different approaches to detect and characterize the targets. The salient features of the approaches include the ability to: (a) detect small targets; (b) provide the three-dimensional location of the targets with high accuracy (to within a millimeter or two); and (c) assess the optical strength of the targets. 
The approaches are less computation intensive and consequently are faster than other inverse image reconstruction methods that attempt to reconstruct the optical properties of every voxel of the sample volume. The location of a target was estimated to be the weighted center of the optical property of the target. Consequently, the locations of small targets were better specified than those of the extended targets. It was more difficult to retrieve the size and shape of a target. The fluorescent measurements seemed to provide better accuracy than the transillumination measurements. In the case of ex vivo detection of tumors embedded in human breast tissue, measurements using multiple wavelengths provided more robust results, and helped suppress artifacts (false positives) than that from single wavelength measurements. The ability to detect and locate small targets, speedier reconstruction, combined with fluorophore-specific multi-wavelength probing has the potential to make these approaches suitable for breast cancer detection and diagnosis.
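Non-negative matrix factorization, one of the decomposition methods named above, can be sketched with the standard Lee-Seung multiplicative updates. This generic implementation on an invented data matrix is illustrative only, not the study's code:

```python
import numpy as np

# Generic NMF via multiplicative updates: factor a non-negative data matrix
# V (e.g., detector readings x source configurations) as V ~ W @ H, with W
# and H kept elementwise non-negative. The updates monotonically decrease
# the Frobenius reconstruction error.
rng = np.random.default_rng(0)
V = rng.random((20, 30))            # toy non-negative data matrix
k = 4                               # number of components (assumed)
W = rng.random((20, k)) + 0.1
H = rng.random((k, 30)) + 0.1
eps = 1e-9                          # guards against division by zero

err_before = np.linalg.norm(V - W @ H)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err_after = np.linalg.norm(V - W @ H)
```

In the BSS setting described above, the columns of W and rows of H play the role of spatial patterns and source strengths whose non-negativity matches the physics of light intensity perturbations.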
A frameless stereotaxic operating microscope for neurosurgery.
Friets, E M; Strohbehn, J W; Hatch, J F; Roberts, D W
1989-06-01
A new system, which we call the frameless stereotaxic operating microscope, is discussed. Its purpose is to display CT or other image data in the operating microscope in the correct scale, orientation, and position without the use of a stereotaxic frame. A nonimaging ultrasonic rangefinder allows the position of the operating microscope and the position of the patient to be determined. Discrete fiducial points on the patient's external anatomy are located in both image space and operating room space, linking the image data and the operating room. Physician-selected image information, e.g., tumor contours or guidance to predetermined targets, is projected through the optics of the operating microscope using a miniature cathode ray tube and a beam splitter. Projected images superpose the surgical field, reconstructed from image data to match the focal plane of the operating microscope. The algorithms on which the system is based are described, and the sources and effects of errors are discussed. The system's performance is simulated, providing an estimate of accuracy. Two phantoms are used to measure accuracy experimentally. Clinical results and observations are given.
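Linking image space and operating-room space via fiducial points, as described above, amounts to estimating a rigid transform from matched point pairs. A generic least-squares (Kabsch) sketch with invented fiducials, not the system's actual algorithm:

```python
import numpy as np

# Rigid registration of fiducial points: find rotation R and translation t
# minimizing sum ||R @ p_i + t - q_i||^2 (Kabsch / orthogonal Procrustes).
def rigid_register(P, Q):
    """P, Q: (n, 3) matched fiducials in image space and OR space."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Toy check: recover a known rotation about z plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
P = np.random.default_rng(1).random((4, 3)) * 100   # four fiducials
Q = P @ R_true.T + t_true                           # their OR-space positions
R, t = rigid_register(P, Q)
```

With noisy fiducial measurements the same least-squares fit still applies; the residuals then give an estimate of registration accuracy of the kind the phantom experiments measure.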
Localization of virtual sound at 4 Gz.
Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L
2005-02-01
Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.
Neutron-Star Radius from a Population of Binary Neutron Star Mergers.
Bose, Sukanta; Chakravarti, Kabir; Rezzolla, Luciano; Sathyaprakash, B S; Takami, Kentaro
2018-01-19
We show how gravitational-wave observations with advanced detectors of tens to several tens of neutron-star binaries can measure the neutron-star radius with an accuracy of several to a few percent, for mass and spatial distributions that are realistic, and with none of the sources located within 100 Mpc. We achieve such an accuracy by combining measurements of the total mass from the inspiral phase with those of the compactness from the postmerger oscillation frequencies. For estimating the measurement errors of these frequencies, we utilize analytical fits to postmerger numerical relativity waveforms in the time domain, obtained here for the first time, for four nuclear-physics equations of state and a couple of values for the mass. We further exploit quasiuniversal relations to derive errors in compactness from those frequencies. Measuring the average radius to well within 10% is possible for a sample of 100 binaries distributed uniformly in volume between 100 and 300 Mpc, so long as the equation of state is not too soft or the binaries are not too heavy. We also give error estimates for the Einstein Telescope.
Recent developments in guided wave travel time tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zon, Tim van; Volker, Arno
The concept of predictive maintenance using permanent sensors that monitor the integrity of an installation is an interesting addition to the current method of periodic inspections. Guided wave tomography has been developed to create a map of the wall thickness using the travel times of guided waves. It can be used both for monitoring and for inspection of pipe segments that are difficult to access, for instance at the location of pipe supports. An important outcome of the tomography is the minimum remaining wall thickness, as this is critical in scheduling the replacement of a pipe segment. In order to improve the sizing accuracy we have improved the tomography scheme. A number of major improvements have been realized, extending the application envelope to pipes with a larger wall thickness and to larger distances between the transducer rings. Simulation results indicate that the sizing accuracy has improved and that it is now possible to have a spacing of 8 meters between the source ring and the receiver ring. A reduction in the number of sensors required may also be possible.
The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures
Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian
2015-01-01
A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921
Visually-guided attention enhances target identification in a complex auditory scene.
Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G
2007-06-01
In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.
Direct Aerosol Forcing Uncertainty
Mccomiskey, Allison
2008-01-15
Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface, as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however, decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests that the contribution of modeling error is small compared to the total uncertainty, although comparable to uncertainty arising from some individual properties.
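The uncertainty bookkeeping described in this abstract (each property's contribution computed as the product of its sensitivity and its typical measurement uncertainty) can be sketched as follows. All numerical values here are illustrative placeholders, and the root-sum-square combination assumes independent errors; the paper's actual sensitivities vary by location and aerosol type.

```python
import math

# Illustrative (hypothetical) sensitivities dDRF/dx, in W m^-2 per unit of each
# property, and typical 1-sigma measurement uncertainties in the same units.
sensitivities = {
    "aerosol_optical_depth": 30.0,
    "single_scattering_albedo": 20.0,
    "asymmetry_parameter": 8.0,
    "surface_albedo": 10.0,
}
meas_uncertainty = {
    "aerosol_optical_depth": 0.01,
    "single_scattering_albedo": 0.03,
    "asymmetry_parameter": 0.02,
    "surface_albedo": 0.02,
}

# Per-property contribution: sensitivity times measurement uncertainty.
contrib = {k: abs(sensitivities[k]) * meas_uncertainty[k] for k in sensitivities}

# Total uncertainty, assuming independent errors (root-sum-square combination).
total = math.sqrt(sum(c ** 2 for c in contrib.values()))

for prop, c in sorted(contrib.items(), key=lambda kv: -kv[1]):
    print(f"{prop:26s} {c:5.2f} W m^-2")
print(f"{'total (quadrature)':26s} {total:5.2f} W m^-2")
```

With these invented numbers, single scattering albedo dominates the budget, mirroring the abstract's qualitative conclusion.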
Effect of Target Location on Dynamic Visual Acuity During Passive Horizontal Rotation
NASA Technical Reports Server (NTRS)
Appelbaum, Meghan; DeDios, Yiri; Kulecz, Walter; Peters, Brian; Wood, Scott
2010-01-01
The vestibulo-ocular reflex (VOR) generates eye rotation to compensate for potential retinal slip in the specific plane of head movement. Dynamic visual acuity (DVA) has been utilized as a functional measure of the VOR. The purpose of this study was to examine changes in accuracy and reaction time when performing a DVA task with targets offset from the plane of rotation, e.g. offset vertically during horizontal rotation. Visual acuity was measured in 12 healthy subjects as they moved a hand-held joystick to indicate the orientation of a computer-generated Landolt C "as quickly and accurately as possible." Acuity thresholds were established with optotypes presented centrally on a wall-mounted LCD screen at 1.3 m distance, first without motion (static condition) and then while oscillating at 0.8 Hz (DVA, peak velocity 60 deg/s). The effect of target location was then measured during horizontal rotation with the optotypes randomly presented in one of nine different locations on the screen (offset up to 10 deg). The optotype size (logMar 0, 0.2 or 0.4, corresponding to Snellen range 20/20 to 20/50) and presentation duration (150, 300 and 450 ms) were counter-balanced across five trials, each utilizing horizontal rotation at 0.8 Hz. Dynamic acuity was reduced relative to static acuity in 7 of 12 subjects by one step size. During the random target trials, both accuracy and reaction time improved proportional to optotype size. Accuracy and reaction time also improved between 150 ms and 300 ms presentation durations. The main finding was that both accuracy and reaction time varied as a function of target location, with greater performance decrements when acquiring vertical targets. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of motion. Both reaction time and accuracy are functionally relevant DVA parameters of VOR function.
Accuracy in planar cutting of bones: an ISO-based evaluation.
Cartiaux, Olivier; Paul, Laurent; Docquier, Pierre-Louis; Francq, Bernard G; Raucent, Benoît; Dombre, Etienne; Banse, Xavier
2009-03-01
Computer- and robot-assisted technologies are capable of improving the accuracy of planar cutting in orthopaedic surgery. This study is a first step toward formulating and validating a new evaluation methodology for planar bone cutting, based on the standards from the International Organization for Standardization. Our experimental test bed consisted of a purely geometrical model of the cutting process around a simulated bone. Cuts were performed at three levels of surgical assistance: unassisted, computer-assisted and robot-assisted. We measured three parameters of the standard ISO1101:2004: flatness, parallelism and location of the cut plane. The location was the most relevant parameter for assessing cutting errors. The three levels of assistance were easily distinguished using the location parameter. Our ISO methodology employs the location to obtain all information about translational and rotational cutting errors. Location may be used on any osseous structure to compare the performance of existing assistance technologies.
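As a rough illustration of evaluating a cut plane against ISO 1101-style parameters, the sketch below fits a least-squares plane to simulated surface points and derives flatness (peak-to-valley residual), a small-angle parallelism error (tilt relative to the nominal plane), and location (mean offset from the nominal plane). The point data, noise level, and simplified parameter definitions are assumptions for illustration, not the paper's test-bed protocol.

```python
import random

random.seed(4)

# Simulated probe points on a cut surface; the nominal (target) plane is z = 0.
# True surface: offset 0.4, slight tilt, plus measurement noise (all assumed).
pts = [(x, y, 0.4 + 0.01 * x - 0.005 * y + random.gauss(0, 0.02))
       for x in range(0, 11, 2) for y in range(0, 11, 2)]

# Least-squares fit of z = a*x + b*y + c via the 3x3 normal equations.
n = len(pts)
sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
sxx = sum(p[0] ** 2 for p in pts); syy = sum(p[1] ** 2 for p in pts)
sxy = sum(p[0] * p[1] for p in pts)
sz = sum(p[2] for p in pts)
sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
rhs = [sxz, syz, sz]
d = det3(A)
# Cramer's rule: replace column k of A with the right-hand side.
a, b, c = (det3([[rhs[i] if j == k else A[i][j] for j in range(3)]
                 for i in range(3)]) / d for k in range(3))

residuals = [z - (a * x + b * y + c) for x, y, z in pts]
flatness = max(residuals) - min(residuals)   # peak-to-valley about the fit
tilt = (a * a + b * b) ** 0.5                # small-angle parallelism error
location = sz / n                            # mean offset from the nominal z = 0

print(f"flatness={flatness:.3f}, tilt={tilt:.4f}, location={location:.3f}")
```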
Location estimation in wireless sensor networks using spring-relaxation technique.
Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M
2010-01-01
Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Because of the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a small number of nodes at known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its localization accuracy.
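The spring-relaxation idea can be sketched in a few lines: each estimated range acts as a spring whose rest length is the measurement, and the unknown node is moved iteratively along the net spring force. This is a toy 2-D, single-node version with invented noise and step-size values, not the paper's distributed RSS-based implementation.

```python
import math
import random

random.seed(1)

# Anchor nodes with known locations, plus one node to localize.
anchors = {"a": (0.0, 0.0), "b": (10.0, 0.0), "c": (0.0, 10.0)}
true_pos = (4.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Ranging: in a real WSN these distances would come from RSS; here we
# use true distances plus small noise as a stand-in.
measured = {k: dist(true_pos, p) + random.gauss(0, 0.05) for k, p in anchors.items()}

# Spring relaxation: each measured range defines a spring with rest length
# measured[k]; move the unknown node along the net spring force.
x, y = 5.0, 5.0            # arbitrary initial guess
step = 0.1                 # relaxation step size (assumed)
for _ in range(500):
    fx = fy = 0.0
    for k, (ax, ay) in anchors.items():
        d = dist((x, y), (ax, ay))
        if d == 0:
            continue
        stretch = d - measured[k]          # positive: spring too long, pull in
        fx += -stretch * (x - ax) / d
        fy += -stretch * (y - ay) / d
    x += step * fx
    y += step * fy

print(f"estimated ({x:.2f}, {y:.2f}), true {true_pos}")
```

The force update is gradient descent on the squared range residuals, which is why the node settles near the position most consistent with all measured ranges.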
Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.
2015-01-01
We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals, and that produces hypocentral locations similar to methods that locate events by stacking hundreds of LFEs, without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source-station distance and signal-to-noise ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Anthony; Ravi, Ananth
2014-08-15
High dose rate (HDR) remote afterloading brachytherapy involves sending a small, high-activity radioactive source attached to a cable to different positions within a hollow applicator implanted in the patient. It is critical that the source position within the applicator and the dwell time of the source are accurate. Daily quality assurance (QA) tests of the positional and dwell time accuracy are essential to ensure that the accuracy of the remote afterloader is not compromised prior to patient treatment. Our centre has developed an automated, video-based QA system for HDR brachytherapy that is dramatically superior to existing diode or film QA solutions in terms of cost, objectivity, and positional accuracy, with additional functionality such as the ability to determine the dwell time and transit time of the source. In our system, a video is taken of the brachytherapy source as it is sent out through a position check ruler, with the source visible through a clear window. Using a proprietary image analysis algorithm, the source position is determined with respect to time as it moves to different positions along the check ruler. The total material cost of the video-based system was under $20, consisting of a commercial webcam and adjustable stand. The accuracy of the position measurement is ±0.2 mm, and the time resolution is 30 msec. Additionally, our system is capable of robustly verifying the source transit time and velocity (a test required by the AAPM and CPQR recommendations), which is currently difficult to perform accurately.
Mollison, Matthew V; Curran, Tim
2012-09-01
Familiarity and recollection are thought to be separate processes underlying recognition memory. Event-related potentials (ERPs) dissociate these processes, with an early (approximately 300-500ms) frontal effect relating to familiarity (the FN400) and a later (500-800ms) parietal old/new effect relating to recollection. It has been debated whether source information for a studied item (i.e., contextual associations from when the item was previously encountered) is only accessible through recollection, or whether familiarity can contribute to successful source recognition. It has been shown that familiarity can assist in perceptual source monitoring when the source attribute is an intrinsic property of the item (e.g., an object's surface color), but few studies have examined its contribution to recognizing extrinsic source associations. Extrinsic source associations were examined in three experiments involving memory judgments for pictures of common objects. In Experiment 1, source information was spatial and results suggested that familiarity contributed to accurate source recognition: the FN400 ERP component showed a source accuracy effect, and source accuracy was above chance for items judged to only feel familiar. Source information in Experiment 2 was an extrinsic color association; source accuracy was at chance for familiar items and the FN400 did not differ between correct and incorrect source judgments. Experiment 3 replicated the results using a within-subjects manipulation of spatial vs. color source. Overall, the results suggest that familiarity's contribution to extrinsic source monitoring depends on the type of source information being remembered.
NASA Astrophysics Data System (ADS)
Ilieva, Tamara; Gekov, Svetoslav
2017-04-01
The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections developed and maintained by the International GNSS Service (IGS). The aim of our current research is an accuracy assessment of the PPP method applied to surveys and to tracking moving objects in a GIS environment. The PPP data are collected using a software application we developed beforehand, which allows different sets of attribute data for the measurements and their accuracy to be used. The results of the PPP measurements are compared directly, within the geospatial database, with other sets of terrestrial data: measurements obtained by total stations, real-time kinematic and static GNSS.
Fidan, Barış; Umay, Ilknur
2015-01-01
Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which are permittivity and path loss coefficients for the electromagnetic signal case. Thus, accurate estimation of these coefficients has significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated to a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law, to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in aforementioned environmental signal propagation coefficients. The efficiency of the proposed adaptive localization and tracking techniques are both mathematically analysed and verified via simulation experiments. PMID:26690441
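The abstract does not give the estimator's details, but the flavour of adaptively estimating a signal-propagation coefficient can be illustrated with a scalar recursive least squares (RLS) fit of the path-loss exponent in the standard log-distance RSS model, rss = p0 - 10·n·log10(d). All constants here (p0, the true exponent, noise levels, initial covariance) are invented for the sketch.

```python
import math
import random

random.seed(0)

# Log-distance path-loss model: rss = p0 - 10*n*log10(d).
# p0 (power at 1 m) is assumed known; the path-loss exponent n is estimated.
P0, N_TRUE = -40.0, 2.7

def rss_measurement(d):
    return P0 - 10.0 * N_TRUE * math.log10(d) + random.gauss(0, 0.5)

# Scalar RLS for n: regression y = phi * n + noise,
# with y = p0 - rss and phi = 10*log10(d).
n_hat, p_cov = 1.0, 100.0      # initial estimate and covariance (assumed)
for _ in range(200):
    d = random.uniform(2.0, 50.0)     # range to a cooperating node
    phi = 10.0 * math.log10(d)
    y = P0 - rss_measurement(d)
    gain = p_cov * phi / (1.0 + phi * phi * p_cov)
    n_hat += gain * (y - phi * n_hat)
    p_cov = (1.0 - gain * phi) * p_cov

print(f"estimated path-loss exponent: {n_hat:.2f} (true {N_TRUE})")
```

Once n is tracked online in this way, range estimates derived from RSS remain consistent even as propagation conditions drift, which is the robustness property the paper targets.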
NASA Technical Reports Server (NTRS)
Stoll, John C.
1995-01-01
The performance of an unaided attitude determination system based on GPS interferometry is examined using linear covariance analysis. The modelled system includes four GPS antennae onboard a gravity gradient stabilized spacecraft, specifically the Air Force's RADCAL satellite. The principal error sources are identified and modelled. The optimal system's sensitivities to these error sources are examined through an error budget and by varying system parameters. The effects of two satellite selection algorithms, Geometric and Attitude Dilution of Precision (GDOP and ADOP, respectively) are examined. The attitude performance of two optimal-suboptimal filters is also presented. Based on this analysis, the limiting factors in attitude accuracy are the knowledge of the relative antenna locations, the electrical path lengths from the antennae to the receiver, and the multipath environment. The performance of the system is found to be fairly insensitive to torque errors, orbital inclination, and the two satellite geometry figures-of-merit tested.
Automated strip-mine and reclamation mapping from ERTS
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Reed, L. E.; Pettyjohn, W. A.
1974-01-01
The author has identified the following significant results. Computer processing techniques were applied to ERTS-1 computer-compatible tape (CCT) data acquired in August 1972 on the Ohio Power Company's coal mining operation in Muskingum County, Ohio. Processing results succeeded in automatically classifying, with an accuracy greater than 90%: (1) stripped earth and major sources of erosion; (2) partially reclaimed areas and minor sources of erosion; (3) water with sedimentation; (4) water without sedimentation; and (5) vegetation. Computer-generated tables listing the area in acres and square kilometers were produced for each target category. Processing results also included geometrically corrected map overlays, one for each target category, drawn on a transparent material by a pen under computer control. Each target category is assigned a distinctive color on the overlay to facilitate interpretation. The overlays, drawn at a scale of 1:250,000 when placed over an AMS map of the same area, immediately provided map locations for each target. These mapping products were generated at a tenth of the cost of conventional mapping techniques.
Taking a look at the calibration of a CCD detector with a fiber-optic taper
Alkire, R. W.; Rotella, F. J.; Duke, N. E. C.; Otwinowski, Zbyszek; Borek, Dominika
2016-01-01
At the Structural Biology Center beamline 19BM, located at the Advanced Photon Source, the operational characteristics of the equipment are routinely checked to ensure they are in proper working order. After performing a partial flat-field calibration for the ADSC Quantum 210r CCD detector, it was confirmed that the detector operates within specifications. However, as a secondary check it was decided to scan a single reflection across one-half of a detector module to validate the accuracy of the calibration. The intensities from this single reflection varied by more than 30% from the module center to the corner of the module. Redistribution of light within bent fibers of the fiber-optic taper was identified to be a source of this variation. The degree to which the diffraction intensities are corrected to account for characteristics of the fiber-optic tapers depends primarily upon the experimental strategy of data collection, approximations made by the data processing software during scaling, and crystal symmetry. PMID:27047303
Estimation of PV energy production based on satellite data
NASA Astrophysics Data System (ADS)
Mazurek, G.
2015-09-01
Photovoltaic (PV) technology is an attractive source of power for systems without a connection to the power grid. Because of seasonal variations in solar radiation, the design of such a power system requires careful analysis in order to provide the required reliability. In this paper we present the results of three years of measurements of an experimental PV system located in Poland and based on a polycrystalline silicon module. Irradiation values calculated from ground measurements have been compared with data from solar radiation databases derived from satellite observations. A good level of agreement between the two data sources has been shown, especially during summer. When satellite data from the same time period are available, yearly and monthly PV energy production can be calculated with 2% and 5% accuracy, respectively. However, monthly production during winter seems to be overestimated, especially in January. The results of this work may be helpful in forecasting the performance of similar PV systems in Central Europe and allow more precise forecasts of PV system performance than those based only on tables of long-term averaged values.
Based on the CSI regional segmentation indoor localization algorithm
NASA Astrophysics Data System (ADS)
Zeng, Xi; Lin, Wei; Lan, Jingwei
2017-08-01
To address the problems of high cost and low accuracy in indoor positioning, a localization method based on Channel State Information (CSI) region segmentation is proposed. Because CSI is stable and robust against multipath effects, we use it to segment the localization area into regions. The method acquires the CSI of different links to pinpoint the region in which the target is located. In this way it can improve positioning accuracy while reducing the cost of the fingerprint localization algorithm.
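For context, fingerprint localization of the kind whose cost is at issue here typically matches an online CSI measurement against an offline database of location-tagged fingerprints. A minimal weighted k-nearest-neighbour sketch, with an invented four-entry fingerprint map and illustrative CSI amplitude vectors, might look like this:

```python
import math

# Hypothetical offline fingerprint database: location -> CSI amplitude vector
# (one entry per subcarrier or link; all values are illustrative).
fingerprints = {
    (0, 0): [0.9, 0.4, 0.7, 0.2],
    (0, 5): [0.3, 0.8, 0.5, 0.6],
    (5, 0): [0.6, 0.1, 0.9, 0.4],
    (5, 5): [0.2, 0.7, 0.3, 0.9],
}

def knn_locate(sample, k=2):
    """Weighted k-nearest-neighbour match against the fingerprint map."""
    ranked = sorted(fingerprints.items(),
                    key=lambda kv: math.dist(kv[1], sample))[:k]
    # Weight each neighbour by inverse fingerprint distance.
    w = [1.0 / (1e-9 + math.dist(fp, sample)) for _, fp in ranked]
    x = sum(wi * loc[0] for wi, (loc, _) in zip(w, ranked)) / sum(w)
    y = sum(wi * loc[1] for wi, (loc, _) in zip(w, ranked)) / sum(w)
    return (x, y)

# Online phase: a measured CSI vector close to the (0, 5) fingerprint.
print(knn_locate([0.32, 0.78, 0.52, 0.58]))
```

Region segmentation reduces cost by restricting this search to the fingerprints of one pre-selected region instead of the whole map.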
NASA Technical Reports Server (NTRS)
Sellers, B.; Hunerwadel, J. L.; Hanser, F. A.
1972-01-01
An alpha particle densitometer was developed for possible application to measurement of the atmospheric density-altitude profile on Martian entry. The device uses an Am-241 radioactive-foil source, which emits a distributed energy spectrum, located about 25 to 75 cm from a semiconductor detector. System response, defined as the number of alphas per second reaching the detector with energy above a fixed threshold, is given for Ar and CO2. The altitude profile of density measurement accuracy is given for a pure CO2 atmosphere with 5 mb surface pressure. The entire unit, including dc-dc converters, requires less than 350 milliwatts of power from +28 volts, weighs about 0.85 lb, and occupies less than 15 cubic inches of volume.
Enhanced orbit determination filter: Inclusion of ground system errors as filter parameters
NASA Technical Reports Server (NTRS)
Masters, W. C.; Scheeres, D. J.; Thurman, S. W.
1994-01-01
The theoretical aspects of an orbit determination filter that incorporates ground-system error sources as model parameters for use in interplanetary navigation are presented in this article. This filter, which is derived from sequential filtering theory, allows a systematic treatment of errors in calibrations of transmission media, station locations, and earth orientation models associated with ground-based radio metric data, in addition to the modeling of the spacecraft dynamics. The discussion includes a mathematical description of the filter and an analytical comparison of its characteristics with more traditional filtering techniques used in this application. The analysis in this article shows that this filter has the potential to generate navigation products of substantially greater accuracy than more traditional filtering procedures.
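The idea of carrying ground-system errors as filter parameters can be illustrated with a toy linear Kalman filter: a 1-D constant-velocity target is tracked from two stations, one of whose measurements carries a constant calibration bias, and that bias is simply appended to the state vector so the filter estimates it alongside the dynamics. The setup, noise levels, and two-station geometry are invented for illustration and are far simpler than the interplanetary-navigation models the article treats.

```python
import random

random.seed(3)

DT, R = 1.0, 0.2                          # time step, measurement noise std (assumed)
true_x, true_v, true_bias = 0.0, 1.0, 2.5   # bias plays the role of a station error

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

F = [[1, DT, 0], [0, 1, 0], [0, 0, 1]]          # constant velocity + constant bias
Q = [[1e-6, 0, 0], [0, 1e-6, 0], [0, 0, 1e-8]]  # small process noise (assumed)

s = [0.0, 0.0, 0.0]                             # state estimate [x, v, bias]
P = [[10.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

for k in range(100):
    # Truth propagation; alternate measurements from two stations.
    # Station A's measurements carry the calibration bias, station B's do not,
    # which is what makes the bias observable.
    true_x += true_v * DT
    if k % 2 == 0:
        H, z = [1.0, 0.0, 1.0], true_x + true_bias + random.gauss(0, R)
    else:
        H, z = [1.0, 0.0, 0.0], true_x + random.gauss(0, R)

    # Predict step.
    s = [s[0] + s[1] * DT, s[1], s[2]]
    P = mat_add(mat_mul(mat_mul(F, P), [list(r) for r in zip(*F)]), Q)

    # Scalar-measurement Kalman update (no matrix inversion needed).
    PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
    S = sum(H[i] * PHt[i] for i in range(3)) + R * R
    K = [p / S for p in PHt]
    innov = z - sum(H[i] * s[i] for i in range(3))
    s = [s[i] + K[i] * innov for i in range(3)]
    P = [[P[i][j] - K[i] * PHt[j] for j in range(3)] for i in range(3)]

print(f"v={s[1]:.2f} (true {true_v}), bias={s[2]:.2f} (true {true_bias})")
```

The systematic treatment the article describes works the same way in spirit: media calibrations, station locations, and Earth-orientation errors become extra filter states rather than being ignored or handled ad hoc.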
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Khalil J.; Rim, Jung Ho; Porterfield, Donivan R.
2015-06-29
In this study, we re-analyzed late-1940s, Manhattan Project era plutonium-rich sludge samples recovered from the 'General's Tanks' located within the nation's oldest plutonium processing facility, Technical Area 21. These samples were initially characterized by lower-accuracy, lower-precision mass spectrometric techniques. We report here information that was previously not discernible: the two tanks contain isotopically distinct Pu not only in the major (240Pu, 239Pu) but also the trace (238Pu, 241Pu, 242Pu) isotopes. The revised isotopics slightly changed the calculated 241Am-241Pu model ages and interpretations.
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...
Locating the source of diffusion in complex networks by time-reversal backward spreading.
Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2016-03-01
Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
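The core of the time-reversal idea can be sketched on a toy graph: for each candidate source, subtract the (shortest-path) propagation delay to every observer from that observer's arrival time; at the true source the inferred origin times agree, so the candidate minimizing their variance is selected. Uniform unit edge delays and noise-free arrivals are simplifying assumptions of this sketch.

```python
from collections import deque
from statistics import pvariance

# Toy undirected network (adjacency list); each edge takes 1 time unit.
graph = {
    0: [1, 2], 1: [0, 3], 2: [0, 3, 4],
    3: [1, 2, 5], 4: [2, 5], 5: [3, 4],
}

def hop_distances(src):
    """BFS shortest-path (hop) distances from src to every node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate(observations):
    """observations: {observer_node: arrival_time}. For each candidate source,
    time-reverse the arrivals (t_i minus the candidate-to-observer delay); the
    candidate whose inferred origin times have the lowest variance wins."""
    best, best_var = None, float("inf")
    for cand in graph:
        d = hop_distances(cand)
        origins = [t - d[obs] for obs, t in observations.items()]
        v = pvariance(origins)
        if v < best_var:
            best, best_var = cand, v
    return best

# Simulate a spread from node 2 starting at time 10, observed at three nodes.
d_true = hop_distances(2)
obs = {0: 10 + d_true[0], 4: 10 + d_true[4], 5: 10 + d_true[5]}
print(locate(obs))  # recovers the true source, node 2
```

The locatability condition of the paper asks, in effect, when a given observer set makes this variance minimum unique for every possible source.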
Medial prefrontal cortex supports source memory accuracy for self-referenced items.
Leshikar, Eric D; Duarte, Audrey
2012-01-01
Previous behavioral work suggests that processing information in relation to the self enhances subsequent item recognition. Neuroimaging evidence further suggests that regions along the cortical midline, particularly those of the medial prefrontal cortex (PFC), underlie this benefit. There has been little work to date, however, on the effects of self-referential encoding on source memory accuracy or whether the medial PFC might contribute to source memory for self-referenced materials. In the current study, we used fMRI to measure neural activity while participants studied and subsequently retrieved pictures of common objects superimposed on one of two background scenes (sources) under either self-reference or self-external encoding instructions. Both item recognition and source recognition were better for objects encoded self-referentially than self-externally. Neural activity predictive of source accuracy was observed in the medial PFC (Brodmann area 10) at the time of study for self-referentially but not self-externally encoded objects. The results of this experiment suggest that processing information in relation to the self leads to a mnemonic benefit for source level features, and that activity in the medial PFC contributes to this source memory benefit. This evidence expands the purported role that the medial PFC plays in self-referencing.
Propagation of the velocity model uncertainties to the seismic event location
NASA Astrophysics Data System (ADS)
Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.
2015-01-01
Earthquake hypocentre locations are crucial in many domains of application (academic and industrial), as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location and uncertainty errors is that velocity model errors are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This enables us to obtain more reliable hypocentre locations, as well as their associated uncertainties, accounting for both picking and velocity model uncertainties. We illustrate the tomography results and the gain in earthquake location accuracy for two synthetic examples and one real-data case study in the context of induced microseismicity.
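The two-step idea, sampling velocity models from a posterior and then relocating the event under each sample, can be sketched as follows. This is a toy 2D homogeneous-velocity version with hypothetical names, not the authors' Bayesian tomography code: velocities are drawn from a Gaussian, and each draw is located by a grid search on demeaned travel-time residuals (demeaning removes the unknown origin time). The spread of the resulting locations reflects the propagated velocity uncertainty.

```python
import math
import random

def locate(picks, stations, v, grid):
    """Grid search for the epicenter minimizing the RMS of demeaned
    travel-time residuals (demeaning removes the unknown origin time)."""
    best, best_rms = None, float("inf")
    for xy in grid:
        res = [t - math.hypot(xy[0] - s[0], xy[1] - s[1]) / v
               for s, t in zip(stations, picks)]
        mean = sum(res) / len(res)
        rms = math.sqrt(sum((r - mean) ** 2 for r in res) / len(res))
        if rms < best_rms:
            best, best_rms = xy, rms
    return best

def propagate_velocity_uncertainty(picks, stations, v_mean, v_sigma,
                                   grid, n=100, seed=0):
    """Relocate the event once per velocity sample; the scatter of the
    returned locations reflects the propagated velocity uncertainty."""
    rng = random.Random(seed)
    return [locate(picks, stations, max(rng.gauss(v_mean, v_sigma), 0.1), grid)
            for _ in range(n)]
```

In practice the returned location cloud would be summarized by its mean and covariance (or by percentile contours) to report a hypocentre with uncertainty.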
Predictions of Experimentally Observed Stochastic Ground Vibrations Induced by Blasting
Kostić, Srđan; Perc, Matjaž; Vasović, Nebojša; Trajković, Slobodan
2013-01-01
In the present paper, we investigate the blast-induced ground motion recorded at the limestone quarry “Suva Vrela” near Kosjerić, which is located in the western part of Serbia. We examine the recorded signals by means of surrogate data methods and a determinism test in order to determine whether the recorded ground velocity is stochastic or deterministic in nature. The longitudinal, transversal and vertical ground motion components are analyzed at three monitoring points located at different distances from the blasting source. The analysis reveals that the recordings belong to a class of stationary linear stochastic processes with Gaussian inputs, which could be distorted by a monotonic, instantaneous, time-independent nonlinear function. Low determinism factors obtained with the determinism test further confirm the stochastic nature of the recordings. Guided by the outcome of the time series analysis, we propose an improved prediction model for the peak particle velocity based on a neural network. We show that, while conventional predictors fail to provide acceptable prediction accuracy, the neural network model with four main blast parameters as input, namely total charge, maximum charge per delay, distance from the blasting source to the measuring point, and hole depth, delivers significantly more accurate predictions that may be applicable on site. We also perform a sensitivity analysis, which reveals that the distance from the blasting source has the strongest influence on the final value of the peak particle velocity. This is in full agreement with previous observations and theory, thus additionally validating our methodology and main conclusions. PMID:24358140
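The "conventional predictors" such studies compare against are typically scaled-distance attenuation laws. A minimal sketch of one such predictor, the classic PPV = K·(D/√Q)^(−β) form fitted by least squares in log-log space, is given below; the function names and data are illustrative, and this is not the paper's neural-network model.

```python
import math

def fit_scaled_distance(ppv, dist, charge):
    """Least-squares fit of log(PPV) = log(K) - beta * log(D / sqrt(Q)),
    where D is distance and Q is the (maximum) charge per delay."""
    x = [math.log(d / math.sqrt(q)) for d, q in zip(dist, charge)]
    y = [math.log(v) for v in ppv]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    beta = -slope
    k = math.exp(ym + beta * xm)
    return k, beta

def predict_ppv(k, beta, d, q):
    """Predicted peak particle velocity at distance d for charge q."""
    return k * (d / math.sqrt(q)) ** (-beta)
```

Such two-parameter laws capture only distance and charge, which is why a model that also ingests total charge and hole depth can outperform them.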
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Barberopoulou, A.; Miller, K. M.; Goltz, J. D.; Synolakis, C. E.
2008-12-01
A consortium of tsunami hydrodynamic modelers, geologic hazard mapping specialists, and emergency planning managers is producing maximum tsunami inundation maps for California, covering most residential and transient populated areas along the state's coastline. The new tsunami inundation maps will be an upgrade from the existing maps for the state, improving on the resolution, accuracy, and coverage of the maximum anticipated tsunami inundation line. Thirty-five separate map areas covering nearly one-half of California's coastline were selected for tsunami modeling using the MOST (Method of Splitting Tsunami) model. From preliminary evaluations of nearly fifty local and distant tsunami source scenarios, those with the maximum expected hazard for a particular area were input to MOST. The MOST model was run with a near-shore bathymetric grid resolution varying from three arc-seconds (90m) to one arc-second (30m), depending on availability. Maximum tsunami "flow depth" and inundation layers were created by combining all modeled scenarios for each area. A method was developed to better define the location of the maximum inland penetration line using higher resolution digital onshore topographic data from interferometric radar sources. The final inundation line for each map area was validated using a combination of digital stereo photography and fieldwork. Further verification of the final inundation line will include ongoing evaluation of tsunami sources (seismic and submarine landslide) as well as comparison to the location of recorded paleotsunami deposits. Local governmental agencies can use these new maximum tsunami inundation lines to assist in the development of their evacuation routes and emergency response plans.
Toward regional corrections of long period CMT inversions using InSAR
NASA Astrophysics Data System (ADS)
Shakibay Senobari, N.; Funning, G.; Ferreira, A. M.
2017-12-01
One of InSAR's main strengths, with respect to other methods of studying earthquakes, is finding the accurate location of the best point source (or `centroid') for an earthquake. While InSAR data have great advantages for study of shallow earthquakes, the number of earthquakes for which we have InSAR data is low, compared with the number of earthquakes recorded seismically. And though improvements to SAR satellite constellations have enhanced the use of InSAR data during earthquake response, post-event data still have a latency on the order of days. On the other hand, earthquake centroid inversion methods using long period seismic data (e.g. the Global CMT method) are fast but include errors caused by inaccuracies in both the Earth velocity model and in wave propagation assumptions (e.g. Hjörleifsdóttir and Ekström, 2010; Ferreira and Woodhouse, 2006). Here we demonstrate a method that combines the strengths of both methods, calculating regional travel-time corrections for long-period waveforms using accurate centroid locations from InSAR, then applying these to other events that occur in the same region. Our method is based on the observation that synthetic seismograms produced from InSAR source models and locations match the data very well except for some phase shifts (travel time biases) between the two waveforms, likely corresponding to inaccuracies in Earth velocity models (Weston et al., 2014). Our previous work shows that adding such phase shifts to the Green's functions can improve the accuracy of long period seismic CMT inversions by reducing tradeoffs between the moment tensor components and centroid location (e.g. Shakibay Senobari et al., AGU Fall Meeting 2015). Preliminary work on several pairs of neighboring events (e.g. Landers-Hector Mine, the 2000 South Iceland earthquake sequences) shows consistent azimuthal patterns of these phase shifts for nearby events at common stations. 
These phase shift patterns strongly suggest that it is possible to determine regional corrections for the source regions of these events. The aim of this project is to perform a full CMT inversion using the phase shift corrections, calculated for nearby events, to observe improvement in CMT locations and solutions. We will demonstrate our method on the five M 6 events that occurred in central Italy between 1997 and 2016.
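The phase shifts (travel-time biases) between synthetic and observed waveforms described above are commonly measured by cross-correlation. A minimal sketch, assuming uniformly sampled and roughly aligned traces; the function name is hypothetical and this is not the authors' code:

```python
import numpy as np

def phase_shift(obs, syn, dt):
    """Time shift (seconds) of `obs` relative to `syn`, estimated as the
    lag maximizing their cross-correlation; positive means `obs` arrives
    later than `syn`."""
    cc = np.correlate(obs, syn, mode="full")
    lag = int(np.argmax(cc)) - (len(syn) - 1)
    return lag * dt
```

Applying this station by station for a well-located reference event yields the azimuthal pattern of corrections that could then be added to the Green's functions for neighboring events.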
Baxter, Suzanne Domel; Guinn, Caroline H.; Smith, Albert F.; Hitchcock, David B.; Royer, Julie A.; Puryear, Megan P.; Collins, Kathleen L.; Smith, Alyssa L.
2017-01-01
Validation-study data were analyzed to investigate retention interval (RI) and prompt effects on accuracy of fourth-grade children’s reports of school-breakfast and school-lunch (in 24-hour recalls), and accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly-selected fourth-grade children at 10 schools in four districts were observed eating school-provided breakfast and lunch, and interviewed under one of eight conditions (two RIs [short (prior-24-hour recall obtained in afternoon); long (previous-day recall obtained in morning)] crossed with four prompts [forward (distant-to-recent), meal-name (breakfast, etc.), open (no instructions), reverse (recent-to-distant)]). Each condition had 60 children (half girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure—report rate—and reporting-error-sensitive measures—correspondence rate and inflation ratio—were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio—but not report rate—showed better accuracy for school-breakfast and school-lunch reports with the short than long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select short RIs to maximize accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID:26865356
Emergency positioning system accuracy with infrared LEDs in high-security facilities
NASA Astrophysics Data System (ADS)
Knoch, Sierra N.; Nelson, Charles; Walker, Owens
2017-05-01
Instantaneous personnel location presents a challenge in Department of Defense applications where high levels of security restrict real-time tracking of crew members. During emergency situations, command and control requires immediate accountability of all personnel. Current radio frequency (RF) based indoor positioning systems can be unsuitable due to RF leakage and electromagnetic interference with sensitively calibrated machinery on platforms such as ships, submarines, and high-security facilities. Infrared light provides a possible solution to this problem. This paper proposes and evaluates an indoor line-of-sight positioning system comprising IR LEDs and high-sensitivity CMOS camera receivers. In this system, the movement of the LEDs is captured by the camera, uploaded, and analyzed; the point of highest received power is located and plotted to create a blueprint of crew-member locations. The results evaluate accuracy as a function of both wavelength and environmental conditions. Further research will evaluate the accuracy of the LED transmitter and CMOS camera receiver system. Transmissions in both the 780 and 850 nm IR bands are analyzed.
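Locating the "point of highest power" in a camera frame, as described above, amounts to a peak search over pixel intensities. A minimal sketch follows; the 3x3 smoothing step is our assumption (to reject isolated hot pixels), not a detail from the paper.

```python
import numpy as np

def led_position(frame):
    """Return (row, col) of the peak received optical power in a camera
    frame, taken as the brightest pixel after a 3x3 box blur; the blur
    (our assumption) suppresses isolated hot pixels."""
    padded = np.pad(frame.astype(float), 1, mode="edge")
    h, w = frame.shape
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return np.unravel_index(int(np.argmax(smooth)), smooth.shape)
```

A sequence of such per-frame peaks, mapped through the camera calibration, would trace the LED (and hence the crew member) across the floor plan.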
Memory Operations That Support Language Comprehension: Evidence From Verb-Phrase Ellipsis
Martin, Andrea E.; McElree, Brian
2010-01-01
Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation. PMID:19686017
Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko
2018-03-21
We investigated the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue-equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both the projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield units (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy-level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Evangeliou, Nikolaos; Shevchenko, Vladimir P.; Espen Yttri, Karl; Eckhardt, Sabine; Sollum, Espen; Pokrovsky, Oleg S.; Kobelev, Vasily O.; Korobov, Vladimir B.; Lobanov, Andrey A.; Starodymova, Dina P.; Vorobiev, Sergey N.; Thompson, Rona L.; Stohl, Andreas
2018-01-01
Short-lived climate forcers have been proven important both for the climate and human health. In particular, black carbon (BC) is an important climate forcer both as an aerosol and when deposited on snow and ice surface because of its strong light absorption. This paper presents measurements of elemental carbon (EC; a measurement-based definition of BC) in snow collected from western Siberia and northwestern European Russia during 2014, 2015 and 2016. The Russian Arctic is of great interest to the scientific community due to the large uncertainty of emission sources there. We have determined the major contributing sources of BC in snow in western Siberia and northwestern European Russia using a Lagrangian atmospheric transport model. For the first time, we use a recently developed feature that calculates deposition in backward (so-called retroplume) simulations allowing estimation of the specific locations of sources that contribute to the deposited mass. EC concentrations in snow from western Siberia and northwestern European Russia were highly variable depending on the sampling location. Modelled BC and measured EC were moderately correlated (R = 0.53-0.83) and a systematic region-specific model underestimation was found. The model underestimated observations by 42 % (RMSE = 49 ng g-1) in 2014, 48 % (RMSE = 37 ng g-1) in 2015 and 27 % (RMSE = 43 ng g-1) in 2016. For EC sampled in northwestern European Russia the underestimation by the model was smaller (fractional bias, FB > -100 %). In this region, the major sources were transportation activities and domestic combustion in Finland. When sampling shifted to western Siberia, the model underestimation was more significant (FB < -100 %). There, the sources included emissions from gas flaring as a major contributor to snow BC. The accuracy of the model calculations was also evaluated using two independent datasets of BC measurements in snow covering the entire Arctic. 
The model underestimated BC concentrations in snow especially for samples collected in springtime.
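The evaluation metrics quoted above (RMSE and fractional bias) can be computed as follows. The FB formula shown, FB = (mean(M) − mean(O)) / (0.5·(mean(M) + mean(O))) expressed in percent, is the standard atmospheric-model convention and is consistent with the negative values quoted for model underestimation; the abstract itself does not spell out its exact definition, so treat this as an assumption.

```python
import math

def rmse(model, obs):
    """Root-mean-square error between paired model and observed values."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def fractional_bias_pct(model, obs):
    """FB = (mean(M) - mean(O)) / (0.5 * (mean(M) + mean(O))), in percent.
    Negative values indicate model underestimation; the range is
    -200 % to +200 %."""
    mm = sum(model) / len(model)
    mo = sum(obs) / len(obs)
    return 100.0 * (mm - mo) / (0.5 * (mm + mo))
```

Because FB is bounded, it is well suited to comparing regions (northwestern European Russia vs. western Siberia) where the absolute concentrations differ widely.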
NASA Astrophysics Data System (ADS)
Mao, D.; Revil, A.; Hort, R. D.; Munakata-Marr, J.; Atekwana, E. A.; Kulessa, B.
2015-11-01
Geophysical methods can be used to remotely characterize contaminated sites and monitor in situ enhanced remediation processes. We have conducted one sandbox experiment and one contaminated field investigation to show the robustness of electrical resistivity tomography and self-potential (SP) tomography for these applications. In the sandbox experiment, we injected permanganate in a trichloroethylene (TCE)-contaminated environment under a constant hydraulic gradient. Inverted resistivity tomograms are able to track the evolution of the permanganate plume in agreement with visual observations made on the side of the tank. Self-potential measurements were also performed at the surface of the sandbox using non-polarizing Ag-AgCl electrodes. These data were inverted to obtain the source density distribution with and without the resistivity information. A compact horizontal dipole source located at the front of the plume was obtained from the inversion of these self-potential data. This current dipole may be related to the redox reaction occurring between TCE and permanganate and the strong concentration gradient at the front of the plume. We demonstrate that time-lapse self-potential signals can be used to track the kinetics of an advecting oxidizer plume with acceptable accuracy and, if needed, in real time, but are unable to completely resolve the shape of the plume. In the field investigation, a 3D resistivity tomography is used to characterize an organic contaminant plume (resistive domain) and an overlying zone of solid waste materials (conductive domain). After removing the influence of the streaming potential, the identified source current density had a magnitude of 0.5 A m-2. The strong source current density may be attributed to charge movement between the neighboring zones that encourage abiotic and microbially enhanced reduction and oxidation reactions. 
In both cases, the self-potential source current density is located in the area of strong resistivity gradient.
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Woods, B. B.; Thio, H. K.
Regional crustal waveguide calibration is essential to the retrieval of source parameters and the location of smaller (M < 4.8) seismic events. This path calibration of regional seismic phases is strongly dependent on the accuracy of the hypocentral locations of calibration (or master) events. This information can be difficult to obtain, especially for smaller events. Generally, explosion- or quarry-blast-generated travel-time data with known locations and origin times are useful for developing the path calibration parameters, but in many regions such data sets are scanty or do not exist. We present a method that is useful for regional path calibration independent of such data, i.e. with earthquakes, which is applicable for events down to Mw = 4 and which has successfully been applied in India, central Asia, the western Mediterranean, North Africa, Tibet and the former Soviet Union. These studies suggest that reliably determining depth is essential to establishing accurate epicentral locations and origin times for events. We find that the error in source depth does not necessarily trade off only with the origin time for events with poor azimuthal coverage, but with the horizontal location as well, thus resulting in poor epicentral locations. For example, hypocenters for some events in central Asia were found to move from their fixed-depth locations by about 20 km. Such errors in location and depth will propagate into path calibration parameters, particularly with respect to travel times. The modeling of teleseismic depth phases (pP, sP) yields accurate depths for earthquakes down to magnitude Mw = 4.7. This Mw threshold can be lowered to four if regional seismograms are used in conjunction with a calibrated velocity structure model to determine depth, with the relative amplitude of the Pnl waves to the surface waves and the interaction of regional sPmP and pPmP phases being good indicators of event depths.
We also found that for deep events a seismic phase which follows an S-wave path to the surface and becomes critical, developing a head wave by S-to-P conversion, is also indicative of depth. The detailed character of this phase is controlled by the crustal waveguide. The key to calibrating regionalized crustal velocity structure is to determine depths for a set of master events by applying the above methods and then modeling characteristic features recorded on the regional waveforms. The regionalization scheme can also incorporate mixed-path crustal waveguide models for cases in which seismic waves traverse two or more distinctly different crustal structures. We also demonstrate that once depths are established, we need only two-station travel-time data to obtain reliable epicentral locations using a new adaptive grid-search technique, which yields locations similar to those determined using travel-time data from local seismic networks with better azimuthal coverage.
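The differential travel time between two stations cancels the unknown origin time, which is the key to two-station location once depth is fixed. A toy 2D grid-search sketch is below (hypothetical names, not the authors' adaptive algorithm); note that a single station pair constrains the epicenter only to a hyperbola, so a unique solution generally needs the additional depth and waveform constraints described above.

```python
import math

def grid_search_epicenter(stations, arrivals, v, grid):
    """Find the grid point best matching the differential arrival time
    between two stations; differencing cancels the unknown origin time."""
    (s1, s2), (t1, t2) = stations, arrivals

    def tt(src, sta):
        return math.hypot(src[0] - sta[0], src[1] - sta[1]) / v

    dt_obs = t2 - t1
    best, best_misfit = None, float("inf")
    for xy in grid:
        misfit = abs((tt(xy, s2) - tt(xy, s1)) - dt_obs)
        if misfit < best_misfit:
            best, best_misfit = xy, misfit
    return best
```

An adaptive version would refine the grid around the current minimum rather than sweep a fixed lattice.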
Acoustical evaluation of the NASA Lewis 9 by 15 foot low speed wind tunnel
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Woodward, Richard P.
1992-01-01
The test section of the NASA Lewis 9- by 15-Foot Low Speed Wind Tunnel was acoustically treated to allow the measurement of acoustic sources located within the tunnel test section under simulated free field conditions. The treatment was designed for high sound absorption at frequencies above 250 Hz and to withstand tunnel airflow velocities up to 0.2 Mach. Evaluation tests with no tunnel airflow were conducted in the test section to assess the performance of the installed treatment. This performance would not be significantly affected by low speed airflow. Time delay spectrometry tests showed that interference ripples in the incident signal resulting from reflections occurring within the test section average from 1.7 dB to 3.2 dB wide over a 500 to 5150 Hz frequency range. Late reflections, from upstream and downstream of the test section, were found to be insignificant at the microphone measuring points. For acoustic sources with low directivity characteristics, decay with distance measurements in the test section showed that incident free field behavior can be measured on average with an accuracy of +/- 1.5 dB or better at source frequencies from 400 Hz to 10 kHz. The free field variations are typically much smaller with an omnidirectional source.
The Position/Structure Stability of Four ICRF2 Sources
NASA Technical Reports Server (NTRS)
Fomalont, Ed; Johnston, Kenneth; Fey, Alan; Boboltz, Dave; Oyama, Tomoaki; Honma, Mareki
2010-01-01
Four compact radio sources in the International Celestial Reference Frame (ICRF2) catalog were observed using phase referencing with the VLBA at 43, 23, and 8.6-GHz, and with VERA at 23-GHz over a one-year period. The goal was to determine the stability of the radio cores and to assess structure effects associated with positions in the ICRF2. Conclusions are: (1) 43-GHz VLBI high-resolution observations are often needed to determine the location of the radio core. (2) Over the observing period, the relative positions among the four radio cores were constant to 0.02 mas, suggesting that once the true radio core is identified, it remains stationary in the sky to this accuracy. (3) The emission in 0556+238, one of the four sources investigated and one of the 295 ICRF2 defining sources, was dominated by a strong component near the core and moved 0.1 mas during the year. (4) Comparison of the VLBA images at 43, 23, and 8.6-GHz with the ICRF2 positions suggests that the 8-GHz structure is often dominated by a bright non-core component. The measured ICRF2 position can be displaced more than 0.5 mas from the radio core and partake in the motion of the bright jet component.
AASG Wells Data for the EGS Test Site Planning and Analysis Task
Augustine, Chad
2013-10-09
Temperature measurement data obtained from boreholes for the Association of American State Geologists (AASG) geothermal data project. Typically, bottomhole temperatures are recorded from log headers, and this information is provided through a borehole temperature observation service for each state. The service includes header records, well logs, temperature measurements, and other information for each borehole. Information presented in Geothermal Prospector was derived from data aggregated from the borehole temperature observations for all states. For each observation, the given well location was recorded and the best available well identifier (name), temperature and depth were chosen. The “Well Name Source,” “Temp. Type” and “Depth Type” attributes indicate the field used from the original service. The data were then cleaned and converted to consistent units. The accuracy of each observation’s location, name, temperature or depth was not assessed beyond that originally provided by the service.
- AASG bottom hole temperature datasets were downloaded from repository.usgin.org between May 16th and May 24th, 2013.
- Datasets were cleaned to remove “null” and non-real entries, and data were converted into consistent units across all datasets.
- Methodology for selecting the “best” temperature, depth and name attributes from column headers in the AASG BHT datasets:
  • Temperature: CorrectedTemperature (best); MeasuredTemperature (next best)
  • Depth: DepthOfMeasurement (best); TrueVerticalDepth (next best); DrillerTotalDepth (last option)
  • Well Name/Identifier: APINo (best); WellName (next best); ObservationURI (last option)
The column headers are as follows:
- gid = internal unique ID
- src_state = the state from which the well was downloaded (note: the low-temperature wells in Idaho are coded as “ID_LowTemp”, while all other wells are simply the two-character state abbreviation)
- source_url = the URL for the source WFS service or Excel file
- temp_c = “best” temperature in Celsius
- temp_type = indicates whether temp_c comes from the corrected or measured temperature header column in the source document
- depth_m = “best” depth in meters
- depth_type = indicates whether depth_m comes from the measured, true vertical, or driller total depth header column in the source document
- well_name = “best” well name or ID
- name_src = indicates whether well_name came from the apino, wellname, or observationuri header column in the source document
- lat_wgs84 = latitude in WGS84
- lon_wgs84 = longitude in WGS84
- state = state in which the point is located
- county = county in which the point is located
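The best-attribute precedence rules above can be expressed as a small selection helper. This is a sketch under the assumption that records arrive as dictionaries keyed by the original column names, with missing values appearing as None, empty strings, or the literal string "null" (as in the cleaning step described above).

```python
def pick_best(record, precedence):
    """Return (value, column) for the first usable value in precedence
    order; missing values may be None, an empty string, or the literal
    string "null"."""
    for col in precedence:
        val = record.get(col)
        if val not in (None, "", "null"):
            return val, col
    return None, None

# Precedence lists taken from the methodology described above.
TEMP_PRECEDENCE = ["CorrectedTemperature", "MeasuredTemperature"]
DEPTH_PRECEDENCE = ["DepthOfMeasurement", "TrueVerticalDepth", "DrillerTotalDepth"]
NAME_PRECEDENCE = ["APINo", "WellName", "ObservationURI"]
```

The column the value came from would populate the "Temp. Type" / "Depth Type" / "Well Name Source" attributes.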
Effect of atmospherics on beamforming accuracy
NASA Technical Reports Server (NTRS)
Alexander, Richard M.
1990-01-01
Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. 
The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as significantly as expected.
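The Bartlett Estimate referenced above computes, for each steering angle, the power P(θ) = a(θ)ᴴ R a(θ), where R is the spatial correlation matrix and a(θ) the steering vector. A minimal sketch for a uniform linear array follows; the array geometry is an assumption for illustration, not the report's layout.

```python
import numpy as np

def bartlett(R, m, d_over_lambda, angles_deg):
    """Bartlett (conventional) beamformer power P(theta) = a(theta)^H R a(theta)
    for a uniform linear array of m sensors spaced d_over_lambda wavelengths
    apart; R is the spatial correlation matrix."""
    powers = []
    for theta in np.deg2rad(np.asarray(angles_deg, dtype=float)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(m) * np.sin(theta))
        powers.append(np.real(a.conj() @ R @ a))
    return np.array(powers)
```

The mean-wind phase-shift correction described in the first noise model would be applied element-wise to R before this scan.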
A proposed benchmark problem for cargo nuclear threat monitoring
NASA Astrophysics Data System (ADS)
Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.
2011-10-01
There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy, in both forward and inverse calculational codes and approaches, for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form, while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged such that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. × 4 in. × 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. × 16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
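For intuition about the layered shielding, the uncollided (narrow-beam) transmission through the three materials follows I/I₀ = exp(−Σ μᵢxᵢ). This sketch deliberately ignores scatter buildup, which is precisely what the full MCNP5 transport calculation captures; the attenuation coefficients in the example are illustrative placeholders, not the benchmark's values.

```python
import math

def transmitted_fraction(layers):
    """Uncollided (narrow-beam) transmission I/I0 = exp(-sum(mu_i * x_i))
    through layered shielding; `layers` is a list of (mu, x) pairs with
    mu in 1/cm and x in cm."""
    return math.exp(-sum(mu * x for mu, x in layers))
```

Comparing this analytic uncollided fraction against the Monte Carlo tally is a quick sanity check on a benchmark implementation.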
Jamjoom, Faris Z; Kim, Do-Gyoon; Lee, Damian J; McGlumphy, Edwin A; Yilmaz, Burak
2018-02-05
The effects of the length and location of the edentulous area on the accuracy of prosthetic treatment plan incorporation into cone-beam computed tomography (CBCT) scans have not been investigated. To evaluate the effect of length and location of the edentulous area on the accuracy of prosthetic treatment plan incorporation into CBCT scans using different methods. Direct digital scans of a completely dentate master model with removable radiopaque teeth were made using an intraoral scanner, and digital scans of stone duplicates of the master model were made using a laboratory scanner. Specific teeth were removed to simulate different clinical situations and their CBCT scans were made. Surface scans were registered onto the CBCT scans. Radiographic templates for each clinical situation were also fabricated and used during CBCT scans of the master models. Using metrology software, three-dimensional (3D) deviation was measured on standard tessellation language (STL) files created from the CBCT scans against an STL file of the master model created from a CBCT scan. Statistical analysis was done using the MIXED procedure in statistical software and the Tukey HSD test (α = .05). The interaction between location and method was significant (P = .009). Location had no significant effect on the registration methods (P > .05), but it did have a significant effect on the radiographic templates (P = .011). Length of the edentulous area did not have any significant effect (P > .05). Accuracy of digital image registration methods was similar and higher than that of radiographic templates in all clinical situations. Tooth-bound radiographic templates were significantly more accurate than the free-end templates. The results of this study suggest using image registration instead of radiographic templates when planning dental implants, particularly in free-end situations. © 2018 Wiley Periodicals, Inc.
Comparison of recycling outcomes in three types of recycling collection units.
Andrews, Ashley; Gregoire, Mary; Rasmussen, Heather; Witowich, Gretchen
2013-03-01
Commercial institutions have many factors to consider when implementing an effective recycling program. This study examined the effectiveness of three different types of recycling bins on recycling accuracy by determining the percent weight of recyclable material placed in the recycling bins, comparing the percent weight of recyclable material by type of container used, and examining whether a change in signage increased recycling accuracy. Data were collected over 6 weeks, totaling 30 days, from 3 different recycling bin types at a Midwestern university medical center. Five bin locations for each bin type were used. Bags from these bins were collected, sorted into recyclable and non-recyclable material, and weighed. The percent recyclable material was calculated using these weights. Common contaminants found in the bins were napkins and paper towels, plastic food wrapping, plastic bags, and coffee cups. The results showed a significant difference in percent recyclable material between bin types and bin locations. One bin location of bin type 2 was found to be statistically different (p=0.048), which may have been due to lack of a trash bin next to the recycling bin in that location. Bin type 3 had significantly lower percent recyclable material (p<0.001), which may have been due to lack of a trash bin next to the recycling bin and increased contamination due to the combination of commingled recyclables and paper into one bag. There was no significant change in percent recyclable material in recycling bins post signage change. These results suggest a signage change, when used alone, may not be an effective way to increase recycling compliance and accuracy. This study showed two- or three-compartment bins located next to a trash bin may be the best bin type for recycling accuracy. Copyright © 2012 Elsevier Ltd. All rights reserved.
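The headline metric in the study above, percent weight of recyclable material, is a simple ratio of sorted-bag weights; a minimal sketch with hypothetical weights:

```python
def percent_recyclable(recyclable_kg, non_recyclable_kg):
    """Percent (by weight) of a bin's contents that was actually recyclable."""
    total = recyclable_kg + non_recyclable_kg
    if total == 0:
        return 0.0  # empty bag: no basis for a percentage
    return 100.0 * recyclable_kg / total

# Hypothetical sorted weights from one collected bag:
# 8 kg recyclable material, 2 kg contaminants.
pct = percent_recyclable(8.0, 2.0)  # 80.0
```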
Attention to multiple locations is limited by spatial working memory capacity.
Close, Alex; Sapir, Ayelet; Burnett, Katherine; d'Avossa, Giovanni
2014-08-21
What limits the ability to attend several locations simultaneously? There are two possibilities: Either attention cannot be divided without incurring a cost, or spatial memory is limited and observers forget which locations to monitor. We compared motion discrimination when attention was directed to one or multiple locations by briefly presented central cues. The cues were matched for the amount of spatial information they provided. Several random dot kinematograms (RDKs) followed the spatial cues; one of them contained task-relevant, coherent motion. When four RDKs were presented, discrimination accuracy was identical when one and two locations were indicated by equally informative cues. However, when six RDKs were presented, discrimination accuracy was higher following one rather than multiple location cues. We examined whether memory of the cued locations was diminished under these conditions. Recall of the cued locations was tested when participants attended the cued locations and when they did not attend the cued locations. Recall was inaccurate only when the cued locations were attended. Finally, visually marking the cued locations, following one and multiple location cues, equalized discrimination performance, suggesting that participants could attend multiple locations when they did not have to remember which ones to attend. We conclude that endogenously dividing attention between multiple locations is limited by inaccurate recall of the attended locations and that attention poses separate demands on the same central processes used to remember spatial information, even when the locations attended and those held in memory are the same. © 2014 ARVO.
Noise pollution mapping approach and accuracy on landscape scales.
Iglesias Merchan, Carlos; Diaz-Balteiro, Luis
2013-04-01
Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome. This would then optimize the mapping time and the cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of the noise mapping of the road traffic noise at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
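The Kappa index used above to score agreement between maps can be computed from a confusion matrix of reference versus mapped classes; a minimal sketch with hypothetical two-class counts:

```python
def cohens_kappa(confusion):
    """Kappa index of agreement from a square confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of cells on the diagonal.
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Expected chance agreement from the row and column marginals.
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion)))
    return (observed - expected) / (1 - expected)

# Hypothetical agreement between a fine-grid and a coarse-grid noise map
# over 100 sample cells, two noise classes.
k = cohens_kappa([[45, 5], [5, 45]])
```

With these illustrative counts the observed agreement is 0.90 against a chance expectation of 0.50, giving a Kappa of 0.80, in the range the study reports.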
Northern Hemisphere observations of ICRF sources on the USNO stellar catalogue frame
NASA Astrophysics Data System (ADS)
Fienga, A.; Andrei, A. H.
2004-06-01
The most recent USNO stellar catalogue, the USNO B1.0 (Monet et al. \cite{Monet03}), provides positions for 1 042 618 261 objects, with a published astrometric accuracy of 200 mas and five-band magnitudes with a 0.3 mag accuracy. It is believed to be complete up to the 21st magnitude in the V band. Such a catalogue would be a very good tool for astrometric reduction. This work investigates the accuracy of the USNO B1.0 link to the ICRF and gives an estimate of its internal and external accuracies by comparison with different catalogues, and by computation of ICRF source positions using USNO B1.0 star positions.
NASA Astrophysics Data System (ADS)
Toschi, I.; Capra, A.; De Luca, L.; Beraldin, J.-A.; Cournoyer, L.
2014-05-01
This paper discusses a methodology to evaluate the accuracy of recently developed image-based 3D modelling techniques. So far, the emergence of these novel methods has not been supported by the definition of an internationally recognized standard which is fundamental for user confidence and market growth. In order to provide an element of reflection and solution to the different communities involved in 3D imaging, a promising approach is presented in this paper for the assessment of both metric quality and limitations of an open-source suite of tools (Apero/MicMac), developed for the extraction of dense 3D point clouds from a set of unordered 2D images. The proposed procedural workflow is performed within a metrological context, through inter-comparisons with "reference" data acquired with two hemispherical laser scanners, one total station, and one laser tracker. The methodology is applied to two case studies, designed in order to analyse the software performances in dealing with both outdoor and environmentally controlled conditions, i.e. the main entrance of Cathédrale de la Major (Marseille, France) and a custom-made scene located at National Research Council of Canada 3D imaging Metrology Laboratory (Ottawa). Comparative data and accuracy evidence produced for both tests allow the study of some key factors affecting 3D model accuracy.
Object localization using a biosonar beam: how opening your mouth improves localization.
Arditi, G; Weiss, A J; Yovel, Y
2015-08-01
Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
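The interaural-time-difference cue mentioned above maps to source azimuth under a simple free-field two-receiver model; this sketch uses hypothetical bat-scale numbers and is not the paper's beam-based mathematical analysis:

```python
import math

def azimuth_from_itd(itd_s, ear_distance_m, speed_of_sound=343.0):
    """Source azimuth (degrees from the midline) from an interaural time
    difference, under the plane-wave, free-field two-receiver model."""
    s = itd_s * speed_of_sound / ear_distance_m
    s = max(-1.0, min(1.0, s))  # clamp numerical overshoot at +/-90 degrees
    return math.degrees(math.asin(s))

# Hypothetical geometry: ears 1.4 cm apart, a 20-microsecond delay
# between the two ears.
angle = azimuth_from_itd(20e-6, 0.014)
```

With these illustrative numbers the source lies roughly 29 degrees off the midline; note how a fixed timing error translates into a larger angular error as the emitter/receiver baseline shrinks.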
Lightning Mapping With an Array of Fast Antennas
NASA Astrophysics Data System (ADS)
Wu, Ting; Wang, Daohong; Takagi, Nobuyuki
2018-04-01
Fast Antenna Lightning Mapping Array (FALMA), a low-frequency lightning mapping system comprising an array of fast antennas, was developed and established in Gifu, Japan, during the summer of 2017. Location results of two hybrid flashes and a cloud-to-ground flash comprising 11 return strokes (RSs) are described in detail in this paper. Results show that concurrent branches of stepped leaders can be readily resolved, and K changes and dart leaders with speeds up to 2.4 × 10⁷ m/s are also well imaged. These results demonstrate that FALMA can reconstruct three-dimensional structures of lightning flashes in great detail. Location accuracy of FALMA is estimated by comparing the located striking points of successive RSs in cloud-to-ground flashes. Results show that distances between successive RSs are mainly below 25 m, indicating exceptionally high location accuracy of FALMA.
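The location-accuracy check described above, comparing located striking points of successive return strokes, reduces to computing distances between latitude/longitude pairs; a haversine sketch with hypothetical coordinates near Gifu:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6371000.0):
    """Great-circle distance in metres between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(a))

# Hypothetical located striking points of two successive return strokes;
# a separation of tens of metres or less indicates high relative accuracy.
d = haversine_m(35.4000, 136.7000, 35.4001, 136.7001)
```

For these illustrative points the separation is about 14 m, consistent with the "mainly below 25 m" scale reported above.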
Zhang, Xiao-Bo; Li, Meng; Wang, Hui; Guo, Lan-Ping; Huang, Lu-Qi
2017-11-01
The literature contains a great deal of information on the distribution of Chinese herbal medicine. Limited by the technical methods available at the time, the origins and distributions of Chinese herbal medicines were described only roughly in the ancient literature. Establishing background information on the types and distribution of Chinese medicine resources in each region is one of the main objectives of the national census of Chinese medicine resources. Following the national census technical specifications and the experience gained in pilot work, census teams can effectively collect location information on traditional Chinese medicine resources using "3S" technology, computer networking, digital photography, and other modern methods. Detailed and specific location information, covering regional differences and similarities in resource endowment, biological characteristics, and spatial distribution, provides technical and data support for evaluating the accuracy and objectivity of the census data. With the support of spatial information technology, statistical summarization and sharing of multi-source census data can be realized on the basis of location information. Integrating traditional Chinese medicine resource data with related basic data enables the spatial integration, aggregation, and management of massive data sets, which supports mining the scientific rules governing traditional Chinese medicine resources at the overall level and fully revealing their scientific significance. Copyright© by the Chinese Pharmaceutical Association.
Combined analysis of modeled and monitored SO2 concentrations at a complex smelting facility.
Rehbein, Peter J G; Kennedy, Michael G; Cotsman, David J; Campeau, Madonna A; Greenfield, Monika M; Annett, Melissa A; Lepage, Mike F
2014-03-01
Vale Canada Limited owns and operates a large nickel smelting facility located in Sudbury, Ontario. This is a complex facility with many sources of SO2 emissions, including a mix of source types ranging from passive building roof vents to North America's tallest stack. In addition, as this facility performs batch operations, there is significant variability in the emission rates depending on the operations that are occurring. Although SO2 emission rates for many of the sources have been measured by source testing, the reliability of these emission rates has not been tested from a dispersion modeling perspective. This facility is a significant source of SO2 in the local region, making it critical that, when modeling the emissions from this facility for regulatory or other purposes, the resulting concentrations are representative of what would actually be measured or otherwise observed. To assess the accuracy of the modeling, a detailed analysis of modeled and monitored data for SO2 at the facility was performed. A mobile SO2 monitor sampled at five locations downwind of different source groups for different wind directions, resulting in a total of 168 hr of valid data that could be used for the modeled-to-monitored results comparison. The facility was modeled in AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model) using site-specific meteorological data such that the modeled periods coincided with the same times as the monitored events. In addition, great effort was invested in estimating the actual SO2 emission rates that would likely be occurring during each of the monitoring events. SO2 concentrations were modeled for receptors around each monitoring location so that the modeled data could be directly compared with the monitored data. The modeled and monitored concentrations were compared and showed that there were no systematic biases in the modeled concentrations.
This paper is a case study of a Combined Analysis of Modelled and Monitored Data (CAMM), which is an approach promulgated within air quality regulations in the Province of Ontario, Canada. Although combining dispersion models and monitoring data to estimate or refine estimates of source emission rates is not a new technique, this study shows how, with a high degree of rigor in the design of the monitoring and filtering of the data, it can be applied to a large industrial facility, with a variety of emission sources. The comparison of modeled and monitored SO2 concentrations in this case study also provides an illustration of the AERMOD model performance for a large industrial complex with many sources, at short time scales in comparison with monitored data. Overall, this analysis demonstrated that the AERMOD model performed well.
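A standard first check in a modeled-versus-monitored comparison like the one above is the fractional bias, which is zero when there is no systematic over- or under-prediction; a sketch with hypothetical paired SO2 concentrations:

```python
def fractional_bias(modeled, monitored):
    """Fractional bias between paired modeled and monitored mean
    concentrations; 0 indicates no systematic bias, +/-2 are the extremes."""
    cm = sum(modeled) / len(modeled)
    co = sum(monitored) / len(monitored)
    return 2.0 * (cm - co) / (cm + co)

# Hypothetical hourly SO2 pairs (ppb) from one downwind monitoring location.
fb = fractional_bias([40.0, 55.0, 30.0], [42.0, 50.0, 33.0])
```

In this illustrative case the modeled and monitored means happen to coincide, so the fractional bias is zero; the published criterion of no systematic bias corresponds to values near zero across all monitor locations.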
Self-Calibration of CMB Polarimeters
NASA Astrophysics Data System (ADS)
Keating, Brian
2013-01-01
Precision measurements of the polarization of the cosmic microwave background (CMB) radiation, especially experiments seeking to detect the odd-parity "B-modes", have far-reaching implications for cosmology. To detect the B-modes generated during inflation the flux response and polarization angle of these experiments must be calibrated to exquisite precision. While suitable flux calibration sources abound, polarization angle calibrators are deficient in many respects. Man-made polarized sources are often not located in the antenna's far-field, have spectral properties that are radically different from the CMB's, are cumbersome to implement and may be inherently unstable over the (long) duration these searches require to detect the faint signature of the inflationary epoch. Astrophysical sources suffer from time, frequency and spatial variability, are not visible from all CMB observatories, and none are understood with sufficient accuracy to calibrate future CMB polarimeters seeking to probe inflationary energy scales of ~1000 TeV. CMB TB and EB modes, expected to identically vanish in the standard cosmological model, can be used to calibrate CMB polarimeters. By enforcing the observed EB and TB power spectra to be consistent with zero, CMB polarimeters can be calibrated to levels not possible with man-made or astrophysical sources. All of this can be accomplished without any loss of observing time using a calibration source which is spectrally identical to the CMB B-modes. The calibration procedure outlined here can be used for any CMB polarimeter.
Social sensing of urban land use based on analysis of Twitter users’ mobility patterns
Soliman, Aiman; Soltani, Kiumars; Yin, Junjun; Padmanabhan, Anand; Wang, Shaowen
2017-01-01
A number of recent studies showed that digital footprints around built environments, such as geo-located tweets, are promising data sources for characterizing urban land use. However, challenges for achieving this purpose exist due to the volume and unstructured nature of geo-located social media. Previous studies focused on analyzing Twitter data collectively resulting in coarse resolution maps of urban land use. We argue that the complex spatial structure of a large collection of tweets, when viewed through the lens of individual-level human mobility patterns, can be simplified to a series of key locations for each user, which could be used to characterize urban land use at a higher spatial resolution. Contingent issues that could affect our approach, such as Twitter users’ biases and tendencies at locations where they tweet the most, were systematically investigated using 39 million geo-located Tweets and two independent datasets of the City of Chicago: 1) travel survey and 2) parcel-level land use map. Our results support that the majority of Twitter users show a preferential return, where their digital traces are clustered around a few key locations. However, we did not find a general relation among users between the ranks of locations for an individual—based on the density of tweets—and their land use types. On the contrary, temporal patterns of tweeting at key locations were found to be coherent among the majority of users and significantly associated with land use types of these locations. Furthermore, we used these temporal patterns to classify key locations into generic land use types with an overall classification accuracy of 0.78. The contribution of our research is twofold: a novel approach to resolving land use types at a higher resolution, and in-depth understanding of Twitter users’ location-related and temporal biases, promising to benefit human mobility and urban studies in general. PMID:28723936
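The temporal-pattern classification step described above can be sketched as a nearest-profile match on normalized hourly tweeting histograms; the class profiles, bin layout, and counts below are hypothetical illustrations, not the study's actual features or classifier:

```python
def normalize(hist):
    """Scale a histogram to sum to 1 (empty histograms stay all-zero)."""
    total = sum(hist) or 1
    return [h / total for h in hist]

def classify_location(hourly_counts, class_profiles):
    """Assign a land-use label by comparing a key location's normalized
    tweeting histogram against per-class reference profiles (L1 distance)."""
    hist = normalize(hourly_counts)

    def dist(profile):
        return sum(abs(a - b) for a, b in zip(hist, normalize(profile)))

    return min(class_profiles, key=lambda label: dist(class_profiles[label]))

# Hypothetical 4-bin day profiles (night/morning/afternoon/evening).
profiles = {"residential": [30, 10, 10, 50], "workplace": [5, 45, 45, 5]}
label = classify_location([2, 18, 17, 3], profiles)  # daytime-heavy location
```

Here the daytime-heavy histogram matches the "workplace" profile; the study's reported 0.78 overall accuracy comes from applying this kind of temporal matching across many key locations and validating against the parcel-level land use map.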
Brain-behavior relationships in source memory: Effects of age and memory ability.
Meusel, Liesel-Ann; Grady, Cheryl L; Ebert, Patricia E; Anderson, Nicole D
2017-06-01
There is considerable evidence for age-related decrements in source memory retrieval, but the literature on the neural correlates of these impairments is mixed. In this study, we used functional magnetic resonance imaging to examine source memory retrieval-related brain activity, and the monotonic relationship between retrieval-related brain activity and source memory accuracy, as a function of both healthy aging (younger vs older) and memory ability within the older adult group (Hi-Old vs Lo-Old). Participants studied lists of word pairs, half visually, half aurally; these were re-presented visually in a scanned test phase and participants indicated if the pair was 'seen' or 'heard' in the study phase. The Lo-Old, but not the Hi-Old, showed source memory performance decrements compared to the Young. During retrieval of source memories, younger and older adults engaged lateral and medial prefrontal cortex (PFC) and medial posterior parietal (and occipital) cortices. The groups differed in how brain activity related to source memory accuracy in dorsal anterior cingulate cortex, precuneus/cuneus, and the inferior parietal cortex; in each of these areas, greater activity was associated with poorer accuracy in the Young, but with higher accuracy in the Hi-Old (anterior cingulate and precuneus/cuneus) and Lo-Old (inferior parietal lobe). Follow-up pairwise group interaction analyses revealed that greater activity in right parahippocampal gyrus was associated with better source memory in the Hi-Old, but not in the Lo-Old. We conclude that older adults recruit additional brain regions to compensate for age-related decline in source memory, but the specific regions involved differ depending on their episodic memory ability. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kellogg, James A.; Bankert, David A.; Chaturvedi, Vishnu
1999-01-01
The accuracy of the Microbial Identification System (MIS; MIDI, Inc.) for identification of yeasts to the species level was compared by using 438 isolates grown on prepoured BBL Sabouraud dextrose agar (SDA) and prepoured Remel SDA. Correct identification was observed for 326 (74%) of the yeasts cultured on BBL SDA versus only 214 (49%) of yeasts grown on Remel SDA (P < 0.001). The commercial source of the SDA used in the MIS procedure significantly influences the system’s accuracy. PMID:10325387
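The reported comparison (326/438 vs. 214/438 correct, P < 0.001) is consistent with a standard two-proportion test; a sketch using the abstract's own counts (the test choice here is an assumption, as the abstract does not name the statistical method):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing two independent proportions
    using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Correct identifications: 326/438 on BBL SDA vs. 214/438 on Remel SDA.
z, p = two_proportion_z(326, 438, 214, 438)
```

With these counts the z statistic exceeds 7, so the p-value is far below 0.001, matching the significance level reported above.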
An Optimal Design for Placements of Tsunami Observing Systems Around the Nankai Trough, Japan
NASA Astrophysics Data System (ADS)
Mulia, I. E.; Gusman, A. R.; Satake, K.
2017-12-01
Presently, there are numerous tsunami observing systems deployed in several major tsunamigenic regions throughout the world. However, documentations on how and where to optimally place such measurement devices are limited. This study presents a methodological approach to select the best and fewest observation points for the purpose of tsunami source characterizations, particularly in the form of fault slip distributions. We apply the method to design a new tsunami observation network around the Nankai Trough, Japan. In brief, our method can be divided into two stages: initialization and optimization. The initialization stage aims to identify favorable locations of observation points, as well as to determine the initial number of observations. These points are generated based on extrema of empirical orthogonal function (EOF) spatial modes derived from 11 hypothetical tsunami events in the region. In order to further improve the accuracy, we apply an optimization algorithm called a mesh adaptive direct search (MADS) to remove redundant measurements from the points initially generated by the first stage. A combinatorial search by the MADS will improve the accuracy and reduce the number of observations simultaneously. The EOF analysis of the hypothetical tsunamis, using the first two leading modes with 4 extrema on each mode, results in 30 observation points spread along the trench. This is obtained after replacing some clustered points within a radius of 30 km with only one representative. Furthermore, the MADS optimization can improve the accuracy of the EOF-generated points by approximately 10-20% with fewer observations (23 points). Finally, we compare our result with the existing observation points (68 stations) in the region. The result shows that the optimized design with a fewer number of observations can produce better source characterizations, with approximately 20-60% improvement of accuracy in all 11 hypothetical cases.
It should be noted, however, that our design is a tsunami-based approach; some of the existing observing systems are equipped with additional devices to measure other parameters of interest, e.g., for monitoring seismic activity.
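The EOF initialization stage described above can be sketched as follows. This simplified version selects the grid points with the largest absolute loadings on the leading modes (a stand-in for the paper's extrema criterion, without the 30-km clustering step or the MADS refinement) and runs on synthetic snapshot data:

```python
import numpy as np

def eof_candidate_points(snapshots, n_modes=2, n_extrema=4):
    """Candidate observation points from the leading EOF spatial modes.
    `snapshots` is (n_events, n_grid_points): one row per hypothetical
    tsunami scenario sampled on a common offshore grid."""
    anomalies = snapshots - snapshots.mean(axis=0)
    # SVD of the anomaly matrix: rows of vt are the EOF spatial modes.
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    points = set()
    for mode in vt[:n_modes]:
        # Take the grid points where this mode's loading is strongest.
        ranked = np.argsort(np.abs(mode))[::-1]
        points.update(int(i) for i in ranked[:n_extrema])
    return sorted(points)

# Synthetic stand-in: 11 scenarios sampled at 50 offshore grid points.
rng = np.random.default_rng(0)
pts = eof_candidate_points(rng.normal(size=(11, 50)))
```

With two modes and four extrema per mode this yields at most eight candidate points; the paper's configuration, applied to real simulated waveforms and followed by clustering, produced 30.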
Locations and attributes of utility-scale solar power facilities in Colorado and New Mexico, 2011
Ignizio, Drew A.; Carr, Natasha B.
2012-01-01
The data series consists of polygonal boundaries for utility-scale solar power facilities (both photovoltaic and concentrating solar power) located within Colorado and New Mexico as of December 2011. Attributes captured for each facility include the following: facility name, size/production capacity (in MW), type of solar technology employed, location, state, operational status, year the facility came online, and source identification information. Facility locations and perimeters were derived from 1-meter true-color aerial photographs (2011) produced by the National Agriculture Imagery Program (NAIP); the photographs have a positional accuracy of about ±5 meters (accessed from the NAIP GIS service: http://gis.apfo.usda.gov/arcgis/services). Solar facility perimeters represent the full extent of each solar facility site, unless otherwise noted. When visible, linear features such as fences or road lines were used to delineate the full extent of the solar facility. All related equipment including buildings, power substations, and other associated infrastructure were included within the solar facility. If solar infrastructure was indistinguishable from adjacent infrastructure, or if solar panels were installed on existing building tops, only the solar collecting equipment was digitized. The "Polygon" field indicates whether the "equipment footprint" or the full "site outline" was digitized. The spatial accuracy of features that represent site perimeters or an equipment footprint is estimated at +/- 10 meters. Facilities under construction or not fully visible in the NAIP imagery at the time of digitization (December 2011) are represented by an approximate site outline based on the best available information and supporting documentation. The spatial accuracy of these facilities cannot be estimated without more up-to-date imagery – users are advised to consult more recent imagery as it becomes available.
The "Status" field provides information about the operational status of each facility as of December 2011. This data series contributes to an Online Interactive Energy Atlas currently in development by the U.S. Geological Survey. The Energy Atlas will synthesize data on existing and potential energy development in Colorado and New Mexico and will include additional natural resource data layers. This information may be used by decision makers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, metadata, and decision support tools will be included in the Energy Atlas. The format of the Energy Atlas will facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development activities.
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones.
Source localization of rhythmic ictal activity using a distributed source model (LAURA) for ictal EEG signals selected with a standardized method is feasible in clinical practice and has good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
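The agreement statistics reported above (sensitivity, specificity, and Cohen's kappa) can be reproduced from a 2x2 confusion matrix. A minimal sketch, using hypothetical counts rather than the study's actual data:

```python
# Diagnostic-accuracy metrics from a hypothetical 2x2 confusion matrix:
# tp/fp/fn/tn compare the localization result against the reference standard.
def diagnostic_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = (tp + tn) / n
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return sensitivity, specificity, kappa

# Illustrative counts only (n=33 mirrors the cohort size, not its actual table)
sens, spec, kappa = diagnostic_metrics(tp=14, fp=4, fn=6, tn=9)
print(round(sens, 2), round(spec, 2), round(kappa, 2))
```

The same helper works for any binary concordance table, which is how STARD-style studies typically report localization accuracy.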
Frizzelle, Brian G; Evenson, Kelly R; Rodriguez, Daniel A; Laraia, Barbara A
2009-01-01
Background: Health researchers have increasingly adopted geographic information systems (GIS) for analyzing the environments in which people live and how those environments affect health. One aspect of this research that is often overlooked is the quality and detail of the road data and whether it is appropriate for the scale of analysis. Many readily available road datasets, both public domain and commercial, contain positional errors or generalizations that may not be compatible with highly accurate geospatial locations. This study examined the accuracy, completeness, and currency of four readily available public and commercial road datasets (North Carolina Department of Transportation, StreetMap Pro, TIGER/Line 2000, and TIGER/Line 2007) relative to a custom road dataset which we developed and used for comparison. Methods and Results: A custom road network dataset was developed to examine associations between health behaviors and the environment among pregnant and postpartum women living in central North Carolina in the United States. Three analytical measures were developed to assess the comparative accuracy and utility of the four publicly and commercially available road datasets and the custom dataset in relation to participants' residential locations over three time periods. The exclusion of road segments and positional errors in the four comparison road datasets resulted in between 5.9% and 64.4% of respondents lying farther than 15.24 meters from their nearest road, the threshold distance set by the project to facilitate spatial analysis. Agreement between the customized road dataset and the four comparison road datasets, measured with Pearson's correlation coefficient, ranged from 0.01 to 0.82. Conclusion: This study demonstrates the importance of examining available road datasets and assessing their completeness, accuracy, and currency for the particular study area.
This paper serves as an example for assessing the feasibility of readily available commercial or public road datasets, and outlines the steps by which an improved custom dataset for a study area can be developed. PMID:19409088
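The dataset-agreement measure used in this study, Pearson's correlation between per-residence nearest-road distances, can be sketched as follows. The distance values below are hypothetical; only the 15.24 m threshold comes from the abstract:

```python
import numpy as np

# Hypothetical nearest-road distances (meters) from six residences, measured
# against a custom reference network and one comparison road dataset.
custom = np.array([4.2, 8.1, 12.5, 3.0, 20.4, 6.7])
comparison = np.array([5.0, 9.3, 14.1, 2.5, 25.0, 7.2])

# Agreement between datasets via Pearson's correlation coefficient
r = np.corrcoef(custom, comparison)[0, 1]

# Flag residences farther than the project's 15.24 m threshold in the
# comparison dataset (these would fail the spatial-analysis snapping rule)
beyond = comparison > 15.24
print(round(r, 2), int(beyond.sum()))
```

In practice the distances would come from a GIS nearest-feature query per road dataset, with one correlation computed per comparison dataset.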
Sky and Elemental Planetary Mapping Via Gamma Ray Emissions
NASA Technical Reports Server (NTRS)
Roland, John M.
2011-01-01
Low-energy gamma ray emissions (~30 keV to ~30 MeV) are significant to astrophysics because many interesting objects emit their primary energy in this regime. As such, there has been increasing demand for a complete map of the gamma ray sky, but many experiments to do so have encountered obstacles. Using an innovative method of applying the Radon Transform to data from BATSE (the Burst And Transient Source Experiment) on NASA's CGRO (Compton Gamma-Ray Observatory) mission, we have circumvented many of these issues and successfully localized many known sources to 0.5-1 deg accuracy. Our method, which is based on a simple 2-dimensional planar back-projection approximation of the inverse Radon transform (familiar from medical CAT-scan technology), can thus be used to image the entire sky and locate new gamma ray sources, specifically in energy bands between 200 keV and 2 MeV which have not been well surveyed to date. Samples of these results will be presented. This same technique can also be applied to elemental planetary surface mapping via gamma ray spectroscopy. Due to our method's simplicity and power, it could potentially improve a current map's resolution by a significant factor.
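The planar back-projection idea can be illustrated on a synthetic grid: project a 2-D intensity map onto 1-D bins at a set of angles, then smear each projection back along the same lines and sum. This is a toy sketch, not BATSE data or geometry:

```python
import numpy as np

# Minimal unfiltered 2-D planar back-projection, the approximation to the
# inverse Radon transform described above. Grid size and the point-source
# position are illustrative.
N = 64
y, x = np.mgrid[:N, :N]
sky = np.zeros((N, N))
sky[40, 24] = 1.0  # a single hypothetical point source (row 40, column 24)

angles = np.deg2rad(np.arange(0, 180, 5))
recon = np.zeros_like(sky)
for theta in angles:
    # Project: bin intensity along lines x*cos(theta) + y*sin(theta) = s
    s = (x - N / 2) * np.cos(theta) + (y - N / 2) * np.sin(theta)
    bins = np.round(s).astype(int) + N  # shift to non-negative bin indices
    proj = np.bincount(bins.ravel(), weights=sky.ravel(), minlength=2 * N + 1)
    # Back-project: smear the projection back along the same lines and sum
    recon += proj[bins]

# The reconstruction peaks at the true source pixel
peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)
```

Real sky imaging adds detector response, occultation geometry, and noise handling on top of this core operation.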
First detection of precursory ground inflation of a small phreatic eruption by InSAR
NASA Astrophysics Data System (ADS)
Kobayashi, Tomokazu; Morishita, Yu; Munekane, Hiroshi
2018-06-01
Phreatic eruptions are caused by pressurization of geothermal fluid sources at shallow levels. They are relatively small compared to typical magmatic eruptions, but can be very hazardous. However, owing to their small magnitudes, their occurrences are difficult to predict. Here we show the detection of locally distributed ground inflation preceding a small phreatic eruption at the Hakone volcano, Japan, through the application of interferometric synthetic aperture radar analysis. The ground inflation preceded the eruption at a slow speed of ∼5 mm/month with a spatial extent of ∼200 m in the early stage, and then accelerated 2 months before the eruption, which occurred for the first time in 800-900 years. The ground uplift reached ∼30 cm, and the eruption occurred near the most deformed part. The deformation speed correlated well with inflation of a spherical source located 4.8 km below sea level, suggesting that heat and/or volcanic fluid supply from the spherical source, possibly a magma reservoir, directly drove the subsurface hydrothermal activity. Our results demonstrate that high-spatial-resolution deformation data can be a good indicator of subsurface pressure conditions with pinpoint spatial accuracy during the preparatory process of phreatic eruptions.
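The spherical point-pressure ("Mogi") source is the standard model for interpreting such inflation signals. A sketch of the predicted surface uplift, with illustrative depth and volume-change values rather than the paper's inversion results, assuming a Poisson's ratio of 0.25:

```python
import numpy as np

def mogi_uplift(r, depth, dV):
    """Surface vertical displacement (m) at radial distance r (m) from a
    spherical pressure source at the given depth (m) with volume change
    dV (m^3), for a Poisson's ratio of 0.25."""
    return (3.0 * dV / (4.0 * np.pi)) * depth / (depth**2 + r**2) ** 1.5

# Hypothetical source: 4800 m deep (echoing the abstract's 4.8 km), with an
# illustrative volume change of 1e6 m^3
r = np.linspace(0.0, 10000.0, 5)        # radial distances from the source axis
uz = mogi_uplift(r, depth=4800.0, dV=1.0e6)
print(np.round(uz * 1000, 2))           # uplift in millimeters
```

Uplift is maximal directly above the source and decays with distance, which is why the eruption vent opening near the most deformed part is consistent with a pressurized source beneath it.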
Enhance the Value of a Research Paper: Choosing the Right References and Writing them Accurately.
Bavdekar, Sandeep B
2016-03-01
References help readers identify and locate the sources used to justify the need for the research study, to verify the methods employed, and to discuss the interpretation of results and the implications of the study. It is essential that references are accurate and complete. This article provides suggestions for choosing references and writing the reference list. References are a list of sources selected by authors to represent the best documents concerning the research study [1]. They constitute the foundation of any research paper. Although generally written towards the end of the article-writing process, they are nevertheless extremely important. They provide the context for the hypothesis and help justify the need for conducting the research study. Authors use references to inform readers about the techniques used for conducting the study and to convince them of the appropriateness of the methodology used. References help provide the appropriate perspective in which the research findings should be seen and interpreted. This communication will discuss the purpose of citations, how to select quality sources for citing, and the importance of accuracy when writing the reference list. © Journal of the Association of Physicians of India 2011.
Cortical reinstatement and the confidence and accuracy of source memory.
Thakral, Preston P; Wang, Tracy H; Rugg, Michael D
2015-04-01
Cortical reinstatement refers to the overlap between neural activity elicited during the encoding and the subsequent retrieval of an episode, and is held to reflect retrieved mnemonic content. Previous findings have demonstrated that reinstatement effects reflect the quality of retrieved episodic information as this is operationalized by the accuracy of source memory judgments. The present functional magnetic resonance imaging (fMRI) study investigated whether reinstatement-related activity also co-varies with the confidence of accurate source judgments. Participants studied pictures of objects along with their visual or spoken names. At test, they first discriminated between studied and unstudied pictures and then, for each picture judged as studied, they also judged whether it had been paired with a visual or auditory name, using a three-point confidence scale. Accuracy of source memory judgments, and hence the quality of the source-specifying information, was greater for high than for low confidence judgments. Modality-selective retrieval-related activity (reinstatement effects) also co-varied with the confidence of the corresponding source memory judgment. The findings indicate that the quality of the information supporting accurate judgments of source memory is indexed by the relative magnitude of content-selective, retrieval-related neural activity. Copyright © 2015 Elsevier Inc. All rights reserved.
Martin, Shelby; Wagner, Jesse; Lupulescu-Mann, Nicoleta; Ramsey, Katrina; Cohen, Aaron; Graven, Peter; Weiskopf, Nicole G; Dorr, David A
2017-08-02
To measure variation among four different Electronic Health Record (EHR) system documentation locations versus 'gold standard' manual chart review for risk stratification in patients with multiple chronic illnesses. Adults seen in primary care with EHR evidence of at least one of 13 conditions were included. EHRs were manually reviewed to determine the presence of active diagnoses, and risk scores were calculated using three different methodologies and five EHR documentation locations. Claims data were used to assess cost and utilization for the following year. Descriptive and diagnostic statistics were calculated for each EHR location. Criterion validity testing compared the gold standard verified diagnoses versus other EHR locations and risk scores in predicting future cost and utilization. Nine hundred patients had 2,179 probable diagnoses. About 70% of the diagnoses from the EHR were verified by the gold standard. For the subset of patients having baseline and prediction year data (n=750), modeling showed that the gold standard was, on average, the best predictor of outcomes. However, combining all data sources together had predictive performance nearly equivalent to the gold standard. EHR data locations were inaccurate 30% of the time, so the chart-review gold standard improved overall modeling for individual diagnoses. However, the impact on identification of the highest risk patients was minor, and combining data from different EHR locations was equivalent to gold standard performance. The reviewer's ability to identify a diagnosis as correct was influenced by a variety of factors, including completeness, temporality, and perceived accuracy of chart data.
Mennill, Daniel J.; Burt, John M.; Fristrup, Kurt M.; Vehrencamp, Sandra L.
2008-01-01
A field test was conducted on the accuracy of an eight-microphone acoustic location system designed to triangulate the position of duetting rufous-and-white wrens (Thryothorus rufalbus) in Costa Rica’s humid evergreen forest. Eight microphones were set up in the breeding territories of twenty pairs of wrens, with an average inter-microphone distance of 75.2±2.6 m. The array of microphones was used to record antiphonal duets broadcast through stereo loudspeakers. The positions of the loudspeakers were then estimated by evaluating the delay with which the eight microphones recorded the broadcast sounds. Position estimates were compared to coordinates surveyed with a global-positioning system (GPS). The acoustic location system estimated the position of loudspeakers with an error of 2.82±0.26 m and calculated the distance between the “male” and “female” loudspeakers with an error of 2.12±0.42 m. Given the large range of distances between duetting birds, this relatively low level of error demonstrates that the acoustic location system is a useful tool for studying avian duets. Location error was influenced partly by the difficulties inherent in collecting high accuracy GPS coordinates of microphone positions underneath a lush tropical canopy, and partly by the complicating influence of irregular topography and thick vegetation on sound transmission. PMID:16708941
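Position estimation from inter-microphone delays of this kind is commonly posed as a least-squares fit to time differences of arrival. A 2-D sketch under stated assumptions: the microphone layout, sound speed, and loudspeaker position below are hypothetical (the real system used eight microphones spaced roughly 75 m apart):

```python
import numpy as np

C = 340.0  # assumed speed of sound, m/s
# Hypothetical eight-microphone array on a 150 m square
mics = np.array([[0, 0], [75, 0], [150, 0], [0, 75],
                 [150, 75], [0, 150], [75, 150], [150, 150]], float)
true_src = np.array([60.0, 80.0])          # hypothetical loudspeaker position
arrivals = np.linalg.norm(mics - true_src, axis=1) / C  # noise-free arrivals

def locate(mics, arrivals, c=C, iters=25):
    """Gauss-Newton fit of a source position to time differences of arrival
    (all delays referenced to microphone 0)."""
    pos = mics.mean(axis=0)                # start from the array centroid
    for _ in range(iters):
        d = np.linalg.norm(mics - pos, axis=1)
        pred = (d - d[0]) / c              # predicted delay vs. microphone 0
        res = (arrivals - arrivals[0]) - pred
        units = (pos - mics) / d[:, None]  # unit vectors mic -> source
        J = (units - units[0]) / c         # Jacobian of pred w.r.t. position
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        pos = pos + step
    return pos

est = locate(mics, arrivals)
print(np.round(est, 2))
```

With field recordings, the delays would come from cross-correlating the eight channels, and residual errors of the few-meter scale reported above arise from GPS surveying, topography, and vegetation effects on sound transmission.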
Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.
2014-01-01
Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors and the effects of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only.
Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved on landmark-only registration, provided that the noise in the surface points was not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict registration error and to assess which inputs have the largest effect on registration accuracy. PMID:24506630
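The Monte Carlo approach described above can be sketched as: perturb the landmark fiducials with Gaussian noise, run a rigid least-squares (Kabsch) registration, and measure how far the fitted transform displaces a held-out target point. Geometry and noise levels here are illustrative only, not the study's phantom parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical landmark fiducials (mm) and a target point not used in the fit
landmarks = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50],
                      [50, 50, 0], [50, 0, 50]], float)
target = np.array([25.0, 25.0, 25.0])

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, dc - R @ sc

def mean_tre(sigma, trials=200):
    """Mean target registration error for a given fiducial noise std dev."""
    errs = []
    for _ in range(trials):
        noisy = landmarks + rng.normal(0, sigma, landmarks.shape)
        R, t = kabsch(noisy, landmarks)   # register noisy picks to truth
        # Deviation of the fitted transform from identity at the target
        errs.append(np.linalg.norm((R @ target + t) - target))
    return float(np.mean(errs))

# TRE grows with fiducial localization noise
low, high = mean_tre(0.5), mean_tre(2.0)
print(round(low, 2), round(high, 2))
```

Extending the sketch with noisy surface points and an ICP-style refinement step would mirror the combined landmark-plus-surface condition evaluated in the paper.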
Development of Vertical Cable Seismic System (3)
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Tsukahara, H.; Mizohata, S.; Ishikawa, K.
2013-12-01
The VCS (Vertical Cable Seismic) method is a reflection seismic technique. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed, or ocean-bottom sources. By analyzing the reflections from the sub-seabed, we can image the subsurface structure. Because VCS is an efficient high-resolution 3D seismic survey method for a spatially bounded area, we proposed it for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. We have carried out several VCS surveys using surface-towed, deep-towed, and ocean-bottom sources, in water depths from 100 m to 2100 m. The survey targets include not only hydrothermal deposits but also oil and gas exploration. Through these experiments, our VCS data acquisition system has been completed, but the data processing techniques are still under development. One of the most critical issues is positioning in the water: uncertainty in the positions of the source and of the hydrophones degrades the quality of the subsurface image. GPS navigation is available at the sea surface, but for a deep-towed or ocean-bottom source, the accuracy of shot positioning with SSBL/USBL is not sufficient for very high-resolution imaging. We have therefore developed another approach that determines positions in the water using travel-time data from the source to the VCS hydrophones. In the data acquisition stage, we estimate the VCS array position by slant ranging from the sea surface, the position of the deep-towed or ocean-bottom source by SSBL/USBL, and the water velocity profile by XCTD. After data acquisition, we pick the first-break times of the VCS recorded data.
The field estimates of shot and receiver positions contain errors. Using them as initial guesses, we iteratively invert the shot and receiver positions to match the travel-time data; after several iterations, the most probable positions are obtained. Incorporating constraints on the VCS hydrophone positions, such as the fixed 10 m spacing along the cable, can accelerate the convergence of the iterative inversion and improve the results. The accuracy of the positions estimated from the travel-time data is sufficient for VCS data processing.
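The core of such a travel-time inversion can be sketched as a Gauss-Newton fit of one hydrophone position to first-break picks from known shot positions, assuming a constant water velocity. The shot geometry, velocity, and receiver depth below are illustrative, not survey parameters:

```python
import numpy as np

V = 1500.0  # assumed constant water velocity, m/s (from XCTD in practice)
# Hypothetical 3x3 grid of surface shots (x, y, z=0), coordinates in meters
shots = np.array([[x, y, 0.0]
                  for x in (-300.0, 0.0, 300.0)
                  for y in (-300.0, 0.0, 300.0)])
true_rx = np.array([40.0, -25.0, 1200.0])  # hydrophone on the vertical cable
picks = np.linalg.norm(shots - true_rx, axis=1) / V  # noise-free first breaks

def invert_position(shots, picks, init, iters=15, v=V):
    """Gauss-Newton inversion of a receiver position from travel-time picks."""
    pos = np.asarray(init, float)
    for _ in range(iters):
        d = np.linalg.norm(shots - pos, axis=1)
        res = picks - d / v                    # travel-time residuals
        J = (pos - shots) / (d[:, None] * v)   # d(predicted time)/d(position)
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        pos = pos + step
    return pos

# Start from a rough slant-range/SSBL-style initial guess
est = invert_position(shots, picks, init=[0.0, 0.0, 1000.0])
print(np.round(est, 2))
```

The full processing described above alternates this kind of update over all shots and receivers, with the 10 m inter-hydrophone spacing added as a constraint on the receiver unknowns.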