Sample records for source localization accuracy

  1. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
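    The modulated stimuli described in this abstract follow the standard sinusoidal amplitude modulation (SAM) recipe. As a hedged illustration only (not the authors' actual stimulus-generation code), a SAM tone at the study's 4 kHz carrier with a 64 Hz modulator might be synthesized as follows; the sampling rate and modulation depth are assumed values:

```python
import numpy as np

def sam_tone(fc=4000.0, fm=64.0, depth=1.0, dur=0.5, fs=44100):
    """Sinusoidally amplitude-modulated (SAM) tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t), peak-normalized."""
    t = np.arange(int(dur * fs)) / fs
    env = 1.0 + depth * np.sin(2 * np.pi * fm * t)   # modulation envelope
    s = env * np.sin(2 * np.pi * fc * t)             # modulated carrier
    return s / np.max(np.abs(s))

stim = sam_tone()  # 500 ms, matching the stimulus duration in the study
```

The "transposed" stimuli in the study use a different (half-wave-rectified, low-pass-filtered) modulator; only the plain SAM case is sketched here.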

  2. Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.

    PubMed

    Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael

    2015-08-01

    In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on an HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly in terms of segmentation accuracy and indirectly in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, brain extraction tool (BET)-FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data includes multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multi-modal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and variants of the HSA. They also show that it yields more accurate source localization than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
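    The two segmentation-accuracy measures named above are easy to state concretely. A minimal sketch (binary masks and point sets only, not the paper's evaluation pipeline; the example masks are invented):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, 2)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 4x4 squares on an 8x8 grid, shifted by one voxel.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
d_overlap = dice(a, b)
d_haus = hausdorff(np.argwhere(a), np.argwhere(b))
```

Dice rewards volume overlap, while the Hausdorff distance penalizes the worst boundary disagreement, which is why segmentation studies commonly report both.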

  3. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. 
Source localization of rhythmic ictal activity using a distributed source model (LAURA) for ictal EEG signals selected with a standardized method is feasible in clinical practice and has good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
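    The STARD-style accuracy measures reported above all derive from a 2x2 table of test results against the reference standard. A sketch with invented counts (not the study's actual table; the kappa shown is Cohen's kappa for a 2x2 table, a simplification of the agreement measure reported):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Basic diagnostic-accuracy measures from a 2x2 table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                 # sensitivity
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    lr_pos = sens / (1 - spec)            # positive likelihood ratio
    po = (tp + tn) / n                    # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)          # Cohen's kappa (chance-corrected)
    return dict(sens=sens, spec=spec, ppv=ppv, npv=npv, lr_pos=lr_pos, kappa=kappa)

m = diagnostic_metrics(tp=9, fp=2, fn=3, tn=6)  # illustrative counts only
```

With these invented counts, sensitivity and specificity are both 0.75 and the positive likelihood ratio is 3; the study's actual values (70% sensitivity, 76% specificity, kappa 0.61) come from its own 33-patient table.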

  4. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly assumed head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
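    The "linear inverse weight techniques" referred to above share one template; the best-known member is the Tikhonov-regularized minimum-norm estimate. A toy sketch with a random stand-in for the lead-field matrix (in practice the lead field comes from the head model, and the regularization parameter is tuned, not fixed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))   # toy lead-field (gain) matrix

def minimum_norm(L, v, lam=1e-2):
    """Tikhonov-regularized minimum-norm estimate:
    s_hat = L.T @ inv(L @ L.T + lam * I) @ v."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, v)

# Simulate one focal source and check where the estimate peaks.
s_true = np.zeros(n_sources)
s_true[57] = 1.0
v = L @ s_true                                    # noiseless sensor data
s_hat = minimum_norm(L, v)
```

The same machinery applies to any linear inverse; the methods compared in such studies differ mainly in how the weight matrix is built and normalized (e.g. depth weighting), not in this basic algebra.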

  5. 3D source localization of interictal spikes in epilepsy patients with MRI lesions

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin

    2006-08-01

    The present study aims to accurately localize epileptogenic regions that are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and MRI lesions was made through the coregistration between the EEG recordings and MRI scans. The accuracy of estimations made by FINE and MUSIC was also evaluated and compared by the R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. The three-concentric-spheres head volume conductor model was built for each patient with three spheres of different radii, taking the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement of source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced or deep, or when the signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. 
The higher R2 values achieved by FINE than MUSIC indicate that FINE provides a more satisfactory fitting of the scalp potential measurements than MUSIC in all patients. The present results suggest that FINE provides a useful brain source imaging technique, from clinical EEG recordings, for identifying and localizing epileptogenic regions in epilepsy patients with focal partial seizures. The present study may lead to the establishment of a high-resolution source localization technique from scalp-recorded EEGs for aiding presurgical planning in epilepsy patients.
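    FINE's internals are beyond the scope of an abstract, but the classic MUSIC scan it is compared against is compact: project candidate steering vectors onto the noise subspace of the data covariance and take the reciprocal as a pseudospectrum that peaks at source locations. A far-field uniform-linear-array sketch (EEG applications substitute head-model lead-field vectors for the steering vectors; all numbers here are invented):

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex data; d: element spacing in wavelengths."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    _, vecs = np.linalg.eigh(R)                   # eigenvalues ascending
    En = vecs[:, : n - n_sources]                 # noise-subspace basis
    k = np.arange(n)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(angles_deg))[None, :])
    num = np.sum(np.abs(A) ** 2, axis=0)
    den = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return num / den                              # peaks at source angles

# One source at +20 degrees, 8 sensors, light noise.
rng = np.random.default_rng(1)
n, snaps, theta = 8, 200, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(theta)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.05 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
X = a[:, None] * s[None, :] + noise
grid = np.arange(-90.0, 90.5, 0.5)
est = float(grid[int(np.argmax(music_spectrum(X, 1, grid)))])
```

Subspace methods like FINE refine exactly this step: they replace the plain noise-subspace projection with a better-conditioned subspace criterion, which is where the reported gains for closely spaced or deep sources come from.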

  6. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least-squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of estimating the source height. The further application of the Levenberg-Marquardt method, with the results from MUSIC as the initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
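    The hybrid idea, a coarse MUSIC fix refined by non-linear least squares, can be sketched with a hand-rolled damped Gauss-Newton loop (fixed damping here for brevity; a full Levenberg-Marquardt implementation adapts the damping factor, and this toy fits only source position from range data, not the paper's joint position-plus-impedance model). The geometry and values are invented:

```python
import numpy as np

def lm_localize(mics, dists, x0, lam=1e-3, n_iter=50):
    """Damped Gauss-Newton (fixed-damping Levenberg-Marquardt) fit of a 2D
    source position from microphone-to-source distances.
    mics: (M, 2) microphone positions; dists: (M,) measured distances."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        diff = x[None, :] - mics                  # (M, 2)
        r = np.linalg.norm(diff, axis=1)          # predicted distances
        res = r - dists                           # residuals
        J = diff / r[:, None]                     # Jacobian d r_i / d x
        H = J.T @ J + lam * np.eye(2)             # damped normal equations
        x = x - np.linalg.solve(H, J.T @ res)
    return x

mics = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
src = np.array([1.2, 2.7])
d = np.linalg.norm(mics - src, axis=1)            # noiseless "measurements"
est = lm_localize(mics, d, x0=[2.0, 2.0])         # x0 stands in for the MUSIC fix
```

The coarse initial guess plays the role of the MUSIC output in the paper: the non-linear refinement converges quickly and reliably only because it starts near the true solution.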

  7. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
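    The greedy pursuit family that the SPIGH solution builds on includes Subspace Pursuit. A generic, non-MEG sketch of the basic Subspace Pursuit iteration, under the assumption of a K-sparse signal and a well-conditioned random sensing matrix (the SPIGH algorithm itself adds the hierarchical structure described in the abstract, which is not reproduced here):

```python
import numpy as np

def subspace_pursuit(A, y, K, n_iter=15):
    """Subspace Pursuit sketch (after Dai & Milenkovic): keep a size-K support,
    expand it with the K columns most correlated with the residual,
    then prune back to the K largest least-squares coefficients."""
    m, n = A.shape
    support = np.argsort(np.abs(A.T @ y))[-K:]
    for _ in range(n_iter):
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ x_s
        cand = np.union1d(support, np.argsort(np.abs(A.T @ resid))[-K:])
        x_c, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        support = cand[np.argsort(np.abs(x_c))[-K:]]
    x = np.zeros(n)
    x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 120)) / np.sqrt(40)
x_true = np.zeros(120)
x_true[[7, 33, 90]] = [1.5, -2.0, 1.0]            # 3-sparse ground truth
y = A @ x_true
x_hat = subspace_pursuit(A, y, K=3)
```

The low per-iteration cost, one correlation and two small least-squares solves, is what makes this family attractive for the high-dimensional MEG source space.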

  8. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    PubMed Central

    Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.

    2016-01-01

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with radii from 0.5 to 3 mm at depths of 3–12 mm. The same configuration was also applied to the double-source simulation, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: In the simulation study, approximately 1 mm accuracy was achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For a source of 1.5 mm radius, a common tumor size in preclinical studies, their simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. 
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy was attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources could be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy was also achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could potentially be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models. PMID:27147371
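    The center-of-mass (CoM) localization metric used above is simple to state: an intensity-weighted mean of the reconstructed voxel coordinates, with a "grouped" CoM pooling the voxels of both sources before averaging. A minimal sketch with invented values:

```python
import numpy as np

def center_of_mass(weights, coords):
    """Intensity-weighted mean of voxel coordinates.
    weights: (N,) reconstructed intensities; coords: (N, 3) voxel positions."""
    w = np.asarray(weights, dtype=float)
    return (np.asarray(coords) * w[:, None]).sum(axis=0) / w.sum()

# Three reconstructed voxels along one axis (illustrative values only).
coords = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
w = np.array([1.0, 1.0, 2.0])
grouped_com = center_of_mass(w, coords)
```

The localization error reported in such studies is then simply the Euclidean distance between this CoM and the known source position from CBCT.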

  9. Trust index based fault tolerant multiple event localization algorithm for WSNs.

    PubMed

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. To reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that nodes report and use in the event detection process, and we propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, thereby improving both localization accuracy and fault tolerance in multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.
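    The trust-index bookkeeping in the final phase can be illustrated with a deliberately simplified update rule. This is a hypothetical sketch, not the TISNAP update itself (the paper's exact increments, decrements, and the "subtract on negative, add on positive" event-counting rules differ); the function name, rates, and bounds below are all invented:

```python
def update_trust(trust, reported, expected, inc=0.05, dec=0.2, lo=0.0, hi=1.0):
    """Hypothetical trust-index update in the spirit of TISNAP's final phase:
    nodes whose binary reports match the sink's consensus gain a little trust;
    mismatching (possibly faulty) nodes are penalized more heavily."""
    out = {}
    for node, r in reported.items():
        t = trust.get(node, 0.5)                  # new nodes start at neutral trust
        t = t + inc if r == expected[node] else t - dec
        out[node] = min(hi, max(lo, t))           # clamp to [lo, hi]
    return out

trust = {"n1": 0.5, "n2": 0.5, "n3": 0.9}
reported = {"n1": 1, "n2": 0, "n3": 1}            # binary alarm reports
expected = {"n1": 1, "n2": 1, "n3": 1}            # sink's consensus decision
trust = update_trust(trust, reported, expected)
```

The asymmetry (small reward, large penalty) is one common way to make repeatedly faulty nodes lose influence on the likelihood-matrix localization quickly while tolerating occasional errors from healthy nodes.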

  10. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    PubMed Central

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. To reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that nodes report and use in the event detection process, and we propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, thereby improving both localization accuracy and fault tolerance in multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms. PMID:22163972

  11. A microwave imaging-based 3D localization algorithm for an in-body RF source as in wireless capsule endoscopes.

    PubMed

    Chandra, Rohit; Balasingham, Ilangko

    2015-01-01

    A microwave imaging-based technique for 3D localization of an in-body RF source is presented. Such a technique can be useful for localizing an RF source, as in wireless capsule endoscopes, for positioning of any abnormality in the gastrointestinal tract. Microwave imaging is used to determine the dielectric properties (relative permittivity and conductivity) of the tissues, which are required for precise localization. A 2D microwave imaging algorithm is used to determine the dielectric properties. A calibration method is developed to remove errors introduced by applying the 2D imaging algorithm to imaging data from a 3D body. The developed method is tested on a simple 3D heterogeneous phantom through finite-difference time-domain simulations. Additive white Gaussian noise at a signal-to-noise ratio of 30 dB is added to the simulated data to make them more realistic. The developed calibration method improves the imaging and the localization accuracy. Statistics on the localization accuracy are generated by randomly placing the RF source at various positions inside the small intestine of the phantom. The cumulative distribution function of the localization error is plotted. In 90% of the cases, the localization error was within 1.67 cm, showing the capability of the developed method for 3D localization.
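    The "90% of cases within 1.67 cm" style of statement reads directly off the empirical cumulative distribution function of the localization errors. A minimal sketch with invented error values (centimetres):

```python
import numpy as np

def error_cdf(errors):
    """Empirical CDF: sorted errors and the cumulative fraction at each one."""
    e = np.sort(np.asarray(errors, dtype=float))
    frac = np.arange(1, e.size + 1) / e.size
    return e, frac

def error_at_fraction(errors, p=0.9):
    """Smallest error bound covering at least a fraction p of the trials."""
    e, frac = error_cdf(errors)
    return float(e[np.searchsorted(frac, p)])

errs = [0.4, 1.1, 0.7, 2.0, 0.9, 1.5, 0.6, 1.2, 0.8, 1.0]  # illustrative errors
bound90 = error_at_fraction(errs, 0.9)
```

Reporting a high quantile of the error distribution rather than the mean is the usual choice here, because a localization system's worst-case behavior matters more than its average.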

  12. Comparison of Phase-Based 3D Near-Field Source Localization Techniques for UHF RFID.

    PubMed

    Parr, Andreas; Miesen, Robert; Vossiek, Martin

    2016-06-25

    In this paper, we present multiple techniques for phase-based narrowband backscatter tag localization in three-dimensional space with planar antenna arrays or synthetic apertures. Beamformer and MUSIC localization algorithms, known from near-field source localization and direction-of-arrival estimation, are applied to the 3D backscatter scenario and their performance in terms of localization accuracy is evaluated. We discuss the impact of different transceiver modes known from the literature, which evaluate different send and receive antenna path combinations for a single localization, as in multiple input multiple output (MIMO) systems. Furthermore, we propose a new Singledimensional-MIMO (S-MIMO) transceiver mode, which is especially suited for use with mobile robot systems. Monte-Carlo simulations based on a realistic multipath error model ensure spatial correlation of the simulated signals, and serve to critically appraise the accuracies of the different localization approaches. A synthetic uniform rectangular array created by a robotic arm is used to evaluate selected localization techniques. We use an Ultra High Frequency (UHF) Radiofrequency Identification (RFID) setup to compare measurements with the theory and simulation. The results show how a mean localization accuracy of less than 30 cm can be reached in an indoor environment. Further simulations demonstrate how the distance between aperture and tag affects the localization accuracy and how the size and grid spacing of the rectangular array need to be adapted to improve the localization accuracy down to the centimeter range, and to maximize array efficiency in terms of localization accuracy per number of elements.
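    The near-field beamformer named above differs from far-field direction finding in one respect: the steering vector carries a spherical-wavefront phase, exp(-j2πd/λ) for the exact distance d to each candidate 3D point, so the scan is over positions rather than angles. A monostatic single-path sketch with invented geometry (the paper's MIMO path combinations and multipath model are omitted):

```python
import numpy as np

def nearfield_power(ant_pos, X, grid, wavelength):
    """Narrowband near-field beamformer power map: steer to each candidate
    point with spherical-wavefront phases and evaluate the output power.
    ant_pos: (M, 3) antenna positions; X: (M, T) complex baseband snapshots;
    grid: (G, 3) candidate tag positions."""
    d = np.linalg.norm(grid[:, None, :] - ant_pos[None, :, :], axis=-1)  # (G, M)
    A = np.exp(-2j * np.pi * d / wavelength)       # steering per candidate point
    R = X @ X.conj().T / X.shape[1]                # spatial covariance
    return np.einsum("gm,mn,gn->g", A.conj(), R, A).real

rng = np.random.default_rng(3)
wl = 0.345                                         # ~868 MHz UHF RFID, metres
xs = np.linspace(0.0, 1.0, 4)
ant = np.array([[x, y, 0.0] for x in xs for y in xs])   # 4x4 planar array
tag = np.array([0.4, 0.6, 0.8])
phase = np.exp(-2j * np.pi * np.linalg.norm(ant - tag, axis=1) / wl)
s = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 100))     # unit-power tag signal
X = phase[:, None] * s[None, :]                    # noise-free snapshots
g = np.linspace(0.0, 1.0, 11)
grid = np.array([[x, y, 0.8] for x in g for y in g])    # search plane at tag depth
best = grid[int(np.argmax(nearfield_power(ant, X, grid, wl)))]
```

The abstract's observation about grid spacing shows up directly here: the power map is only as fine as the candidate grid, so centimeter-level accuracy requires a correspondingly fine (or iteratively refined) search grid.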

  13. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jingjing; Zhang, Bin; Reyes, Juvenal

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with radii from 0.5 to 3 mm at depths of 3–12 mm. The same configuration was also applied to the double-source simulation, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: In the simulation study, approximately 1 mm accuracy was achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For a source of 1.5 mm radius, a common tumor size in preclinical studies, their simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. 
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy was attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources could be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy was also achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could potentially be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models.

  14. MR-based source localization for MR-guided HDR brachytherapy

    NASA Astrophysics Data System (ADS)

    Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.

    2018-04-01

    For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
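    The phase-correlation step, matching simulated artifact templates against acquired MR images, is a standard FFT technique: the whitened cross-power spectrum of two images inverse-transforms to a sharp peak at their relative shift. An integer-shift 2D sketch on an invented test image (the paper's subpixel refinement and MR artifact simulation are omitted):

```python
import numpy as np

def phase_correlate(a, b):
    """Integer translation of image b relative to image a via phase correlation:
    the inverse FFT of the whitened cross-power spectrum peaks at the shift."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cps = B * A.conj()
    cps /= np.maximum(np.abs(cps), 1e-12)          # keep phase only ("whitening")
    corr = np.fft.ifft2(cps).real
    peak = np.unravel_index(int(np.argmax(corr)), corr.shape)
    # Wrap peaks in the upper half of each axis to negative shifts.
    return tuple(int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(4)
img = rng.standard_normal((64, 64))
moved = np.roll(np.roll(img, 5, axis=0), -3, axis=1)   # shift by (+5, -3) pixels
dy, dx = phase_correlate(img, moved)
```

Because the correlation peak is computed on a pixel grid, accuracy beyond one pixel requires the kind of subpixel peak interpolation the paper adds; that is what lets a low-resolution (hence fast) acquisition still yield sub-millimetre source positions.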

  15. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  16. [The underwater and airborne horizontal localization of sound by the northern fur seal].

    PubMed

    Babushina, E S; Poliakov, M A

    2004-01-01

    The accuracy of the underwater and airborne horizontal localization of different acoustic signals by the northern fur seal was investigated by the method of instrumental conditioned reflexes with food reinforcement. For pure-tone pulsed signals in the frequency range of 0.5-25 kHz, the minimum angles of sound localization at 75% correct responses corresponded to a sound transducer azimuth of 6.5-7.5 degrees +/- 0.1-0.4 degrees underwater (at impulse durations of 3-90 ms) and 3.5-5.5 degrees +/- 0.05-0.5 degrees in air (at impulse durations of 3-160 ms). The source of pulsed noise signals (3-ms duration) was localized with an accuracy of 3.0 degrees +/- 0.2 degrees underwater. The source of continuous (1-s duration) narrow-band (10% of the center frequency) noise signals was localized in air with an accuracy of 2-5 degrees +/- 0.02-0.4 degrees, and that of continuous broadband (1-20 kHz) noise with an accuracy of 4.5 degrees +/- 0.2 degrees.

  17. Cross-coherent vector sensor processing for spatially distributed glider networks.

    PubMed

    Nichols, Brendan; Sabra, Karim G

    2015-09-01

    Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces locally additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.

  18. Bioluminescence Tomography–Guided Radiation Therapy for Preclinical Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin; Wang, Ken Kang-Hsin, E-mail: kwang27@jhmi.edu; Yu, Jingjing

Purpose: In preclinical radiation research, it is challenging to localize soft tissue targets based on cone beam computed tomography (CBCT) guidance. As a more effective method to localize soft tissue targets, we developed an online bioluminescence tomography (BLT) system for the small-animal radiation research platform (SARRP). We demonstrated BLT-guided radiation therapy and validated targeting accuracy based on a newly developed reconstruction algorithm. Methods and Materials: The BLT system was designed to dock with the SARRP for image acquisition and to be detached before radiation delivery. A 3-mirror system was devised to reflect the bioluminescence emitted from the subject to a stationary charge-coupled device (CCD) camera. Multispectral BLT and the incomplete variables truncated conjugate gradient method with a permissible region shrinking strategy were used as the optimization scheme to reconstruct bioluminescent source distributions. To validate BLT targeting accuracy, a small cylindrical light source with high CBCT contrast was placed in a phantom and also in the abdomen of a mouse carcass. The center of mass (CoM) of the source was recovered from BLT and used to guide radiation delivery. The accuracy of the BLT-guided targeting was validated with films and compared with the CBCT-guided delivery. In vivo experiments were conducted to demonstrate BLT localization capability for various source geometries. Results: Online BLT was able to recover the CoM of the embedded light source with an average accuracy of 1 mm compared to that with CBCT localization. Differences between BLT- and CBCT-guided irradiation shown on the films were consistent with the source localization revealed in the BLT and CBCT images. In vivo results demonstrated that our BLT system could potentially be applied for multiple targets and tumors. 
Conclusions: The online BLT/CBCT/SARRP system provides an effective solution for soft tissue targeting, particularly for small, nonpalpable, or orthotopic tumor models.

  19. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (NH; 5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29°, was significantly better. Within the BICI group, RMS errors in 11 of 21 children were smaller in the bilateral than in the unilateral listening condition, indicating bilateral benefit. 
There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
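The RMS error metric used in this and several later records can be sketched in a few lines (an illustrative computation with invented trial data, not the authors' analysis code):

```python
import numpy as np

def rms_localization_error(target_deg, response_deg):
    """Root-mean-square error between presented and reported azimuths."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return float(np.sqrt(np.mean((response - target) ** 2)))

# Hypothetical trial data: presented loudspeaker azimuths vs. responses.
targets   = [-60, -40, -20, 0, 20, 40, 60]
responses = [-40, -40,   0, 0,  0, 20, 40]
err = rms_localization_error(targets, responses)   # ~16.9 degrees
```

A systematic response bias and random scatter both inflate this single number, which is why studies such as this one pair it with a separate acuity measure (the minimum audible angle).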

  20. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.

  1. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affect sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flash of the light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. Mixed experimental designs with repeated measures were used to determine the effects of age and stimulus type, and of stimulus order (light first/last) and varying versus fixed sound intensity, on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for auditory stimuli. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year-olds. Adults performed better on the sound localization task when the light localization task occurred first. 
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults.

  2. WE-FG-BRA-06: Systematic Study of Target Localization for Bioluminescence Tomography Guided Radiation Therapy for Preclinical Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Reyes, J; Wong, J

Purpose: To overcome the limitation of CT/CBCT in guiding radiation for soft tissue targets, we developed a bioluminescence tomography (BLT) system for preclinical radiation research. We systematically assessed the system performance in target localization and the ability of resolving two sources in simulations, phantom and in vivo environments. Methods: Multispectral images acquired in single projection were used for the BLT reconstruction. Simulation studies were conducted for single spherical source radius from 0.5 to 3 mm at depth of 3 to 12 mm. The same configuration was also applied for the double sources simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, 2 sources with 3 and 5 mm separation at depth of 5 mm, or 3 sources in the abdomen were also used to illustrate the in vivo localization capability of the BLT system. Results: Simulation and phantom results illustrate that our BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging in that 1 and 1.7 mm accuracy can be attained for the single source case at 6 and 9 mm depth, respectively. For the 2 sources study, both sources can be distinguished at 3 and 5 mm separations at approximately 1 mm accuracy using 3D BLT but not 2D bioluminescence imaging. Conclusion: Our BLT/CBCT system can be potentially applied to localize and resolve targets at a wide range of target sizes, depths and separations. The information provided in this study can be instructive to devise margins for BLT-guided irradiation and suggests that the BLT could guide radiation for multiple targets, such as metastasis. Drs. John W. Wong and Iulian I. Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins University.

  3. Resolution limits of ultrafast ultrasound localization microscopy

    NASA Astrophysics Data System (ADS)

    Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael

    2015-11-01

As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct punctual sources that could be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128 transducer matrix in reception. Higher resolutions are theoretically achievable, and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error in localizing a bubble, considered as a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in the radiofrequency data, then converting this timing accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of superlocalization will allow the optimization of the imaging setup for each organ. 
Ultimately, accuracies better than the size of capillaries are achievable at several centimeter depths.
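The two-step structure of the model above (timing accuracy of the echo, then conversion to spatial accuracy via the speed of sound) can be sketched as follows. This is a heavily simplified stand-in: the 1/(B·√SNR) bound and all parameter values here are assumptions for illustration, not the paper's detailed model.

```python
import math

def timing_accuracy(snr_linear, bandwidth_hz):
    """Approximate lower bound on time-of-arrival error: dt ~ 1/(B*sqrt(SNR)).
    (A Cramer-Rao-style rule of thumb; the paper's model is more detailed.)"""
    return 1.0 / (bandwidth_hz * math.sqrt(snr_linear))

def spatial_accuracy(dt_s, c_m_s=1540.0):
    """Convert a timing error into a range error via the speed of sound in tissue."""
    return c_m_s * dt_s

# Assumed example values: 20 dB SNR (linear 100) and 1.2 MHz bandwidth.
dt = timing_accuracy(snr_linear=100.0, bandwidth_hz=1.2e6)   # ~83 ns
dx = spatial_accuracy(dt)                                    # ~0.13 mm
```

The point of the sketch is the scaling: higher bandwidth or SNR tightens the echo timing, which translates linearly into finer localization, consistent with super-resolution well below the wavelength (~0.9 mm at 1.75 MHz).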

  4. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

This paper proposes an alternative approach to enhance the localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted to a tessellated cortical surface, it is physiologically more plausible than the conventional equivalent current dipole (ECD) model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization, which is essential in many clinical and neurological applications of MEG and EEG.

  5. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm(2) to 30 cm(2), whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
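The ROC analysis used above to quantify detection accuracy can be illustrated with a rank-based computation of the area under the curve (AUC): the probability that a truly active cortical vertex receives a higher reconstructed amplitude than an inactive one. The scores below are invented, and this is a generic metric sketch, not the authors' evaluation code.

```python
import numpy as np

def roc_auc(scores_active, scores_inactive):
    """AUC via the Mann-Whitney U statistic: P(active score > inactive score),
    counting ties as one half."""
    a = np.asarray(scores_active, dtype=float)
    b = np.asarray(scores_inactive, dtype=float)
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (greater + 0.5 * ties) / (a.size * b.size)

# Hypothetical reconstructed amplitudes inside vs. outside the simulated patch.
inside  = [0.9, 0.8, 0.7, 0.4]
outside = [0.5, 0.3, 0.2, 0.1]
auc = roc_auc(inside, outside)   # 0.9375
```

An AUC of 1.0 means the reconstruction separates the simulated generator from background perfectly; 0.5 is chance level.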

  6. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm2 to 30 cm2, whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  7. Techniques for detection and localization of weak hippocampal and medial frontal sources using beamformers in MEG.

    PubMed

    Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A

    2012-07-01

    Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. 
We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
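A minimal beamformer sketch helps make the subtraction idea concrete. This uses a generic unit-gain LCMV beamformer with random placeholder data and a made-up leadfield; it is not the authors' pipeline, only an illustration of operating on difference data.

```python
import numpy as np

def lcmv_power(data, leadfield, reg=1e-10):
    """Output power of a unit-gain LCMV beamformer at one candidate source.
    data: (n_channels, n_samples); leadfield: (n_channels,) forward field."""
    R = np.cov(data) + reg * np.eye(data.shape[0])   # regularized covariance
    Rinv = np.linalg.inv(R)
    l = np.asarray(leadfield, dtype=float)
    w = Rinv @ l / (l @ Rinv @ l)                    # weights satisfy w @ l == 1
    return float(w @ R @ w)

# "Subtract before localization": removing control-task sensor data from the
# experimental data suppresses common-mode dominant sources (e.g. V1) before
# the beamformer is applied, reducing leakage from those strong generators.
rng = np.random.default_rng(1)
data_exp, data_ctrl = rng.standard_normal((2, 16, 500))
leadfield = np.ones(16)
p_diff = lcmv_power(data_exp - data_ctrl, leadfield)
```

Subtracting in sensor space changes the covariance the weights are computed from, which is why it behaves differently from (and, per the study, better than) subtracting the two source images after localization.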

  8. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace, instead of the entire estimated noise-only subspace as in classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. Computer simulations of EEG 3D dipole source localization show that, compared to classic MUSIC, FINES provides (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES also performs better in the cases studied when the noise level is high and/or correlations exist among dipole sources.
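The noise-subspace projection that FINES refines can be illustrated with classic MUSIC on a toy uniform linear array (a stand-in for the EEG array manifold; the geometry and sources below are invented). FINES would replace the full noise subspace `En` with a small region-specific subset of its vectors.

```python
import numpy as np

def music_spectrum(R, angles_deg, n_sources, spacing=0.5):
    """Classic MUSIC pseudospectrum for a uniform linear array: peaks of
    1/||En^H a(theta)||^2 mark directions whose steering vectors are nearly
    orthogonal to the estimated noise-only subspace En."""
    n = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, : n - n_sources]          # estimated noise-only subspace
    k = np.arange(n)
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * spacing * k * np.sin(theta))  # steering vector
        proj = En.conj().T @ a
        spectrum.append(1.0 / (np.real(proj.conj() @ proj) + 1e-30))
    return np.array(spectrum)

# Toy scene: two uncorrelated unit-power sources at -20 and +30 degrees,
# 8-element half-wavelength array, small additive noise.
n = 8
k = np.arange(n)
A = np.stack([np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(t)))
              for t in (-20.0, 30.0)], axis=1)
R = A @ A.conj().T + 0.01 * np.eye(n)
angles = np.arange(-90, 91)
P = music_spectrum(R, angles, n_sources=2)
peaks = sorted(angles[np.argsort(P)[-2:]].tolist())   # recovers [-20, 30]
```

The EEG problem swaps the analytic steering vector for a numerically computed head-model leadfield, but the subspace scan is the same in spirit.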

  9. Reaching nearby sources: comparison between real and virtual sound and visual targets

    PubMed Central

    Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.

    2014-01-01

Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real-stimulus situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855

  10. Magnetic tracking for TomoTherapy systems: gradiometer based methods to filter eddy-current magnetic fields.

    PubMed

    McGary, John E; Xiong, Zubiao; Chen, Ji

    2013-07-01

TomoTherapy systems lack real-time tumor tracking. A possible solution is to use electromagnetic markers; however, eddy-current magnetic fields generated in response to a magnetic source can be comparable to the signal, thus degrading the localization accuracy. Therefore, the tracking system must be designed to account for the eddy fields created along the inner bore conducting surfaces. The aim of this work is to investigate localization accuracy using magnetic field gradients to determine feasibility toward TomoTherapy applications. Electromagnetic models are used to simulate magnetic fields created by a source and its simultaneous generation of eddy currents within a conducting cylinder. The source position is calculated using a least-squares fit of simulated sensor data using the dipole equation as the model equation. To account for field gradients across the sensor area (≈ 25 cm(2)), an iterative method is used to estimate the magnetic field at the sensor center. Spatial gradients are calculated with two arrays of uniaxial, paired sensors that form a gradiometer array, where the sensors are considered ideal. Experimental measurements of magnetic fields within the TomoTherapy bore are shown to be 1%-10% less than calculated with the electromagnetic model. Localization results using a 5 × 5 array of gradiometers are, in general, 2-4 times more accurate than a planar array of sensors, depending on the solenoid orientation and position. Simulation results show that the localization accuracy using a gradiometer array is within 1.3 mm over a distance of 20 cm from the array plane. In comparison, localization errors using a single array are within 5 mm. The results indicate that the gradiometer method merits further studies and work due to the accuracy achieved with ideal sensors. 
Future studies should include realistic sensor models and extensive numerical studies to estimate the expected magnetic tracking accuracy within a TomoTherapy system before proceeding with prototype development.
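The core of the scheme above, fitting a source position by least squares against the dipole field equation, can be sketched as follows. This is a bare-bones illustration with idealized point sensors and an invented geometry; the paper's version additionally models eddy-current fields and gradiometer pairs.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4*pi), in T*m/A

def dipole_field(r_sensor, r_src, m):
    """Flux density of a point magnetic dipole with moment m at r_src,
    observed at r_sensor: B = (mu0/4pi) * (3 rhat (rhat.m) - m) / r^3."""
    d = r_sensor - r_src
    dist = np.linalg.norm(d)
    rhat = d / dist
    return MU0_4PI * (3.0 * rhat * (rhat @ m) - m) / dist**3

def localize(sensors, b_meas, m, x0):
    """Least-squares fit of the source position to the measured fields."""
    def residual(p):
        # Work in nanotesla so the optimizer is well scaled.
        return 1e9 * np.concatenate(
            [dipole_field(s, p, m) - b for s, b in zip(sensors, b_meas)])
    return least_squares(residual, x0).x

# Synthetic check: 3x3 planar sensor grid, source 10 cm above the array,
# known fixed moment, noiseless measurements.
sensors = [np.array([x, y, 0.0]) for x in (-0.1, 0.0, 0.1) for y in (-0.1, 0.0, 0.1)]
m = np.array([0.0, 0.0, 1e-3])                 # dipole moment, A*m^2
true_pos = np.array([0.02, -0.01, 0.10])
b_meas = [dipole_field(s, true_pos, m) for s in sensors]
est = localize(sensors, b_meas, m, x0=np.array([0.0, 0.0, 0.05]))
```

In the bore, the measured field is the dipole term plus the eddy-current response, so the residual model (or the measurements) must be corrected before this fit is accurate.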

  11. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from MNE and the locations of ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. MNE can be used to verify parametric source modelling results. 
Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
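The Tikhonov-regularized L2 minimum-norm estimate referred to above has a closed form, sketched here on a toy underdetermined problem (the lead field and source configuration are invented; a real implementation also handles noise whitening and depth weighting):

```python
import numpy as np

def l2_mne(L, y, lam):
    """Tikhonov-regularized L2 minimum-norm estimate:
    j_hat = L^T (L L^T + lam*I)^{-1} y, i.e. the smallest-norm current
    distribution (approximately) consistent with the measurements y."""
    n_ch = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_ch), y)

# Toy problem: 4 channels, 10 candidate source locations.
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 10))              # made-up lead field
j_true = np.zeros(10)
j_true[3] = 1.0                               # one active source
y = L @ j_true                                # noiseless measurements
j_hat = l2_mne(L, y, lam=1e-8)
# j_hat reproduces y almost exactly, but spreads current over many
# locations -- the smearing (and depth bias) that limits MNE accuracy.
```

Taking the location of maximum current density of `j_hat`, as done in the study for the comparison against ECD fits, is one way to collapse this distributed estimate to a single point.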

  12. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subject to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI-constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatially and temporally variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamic activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performance of the two inverse methods was evaluated in terms of both spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.

  13. [EEG source localization using LORETA (low resolution electromagnetic tomography)].

    PubMed

    Puskás, Szilvia

    2011-03-30

    Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical information is discussed and the localization properties of the LORETA method are compared to other inverse solutions. Validation of the method with different imaging techniques is also discussed. This paper reviews several publications using LORETA, both in healthy persons and in persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.

  14. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes on the scale of an atom and, based on their neuronal anatomy, can locate acoustic stimuli to within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.

  15. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory for a thin plate is used. A simulation is conducted, and its results show that the localization accuracy of beamforming depends on multi-mode propagation, dispersion, wave velocity and array dimension. In order to reduce the effect of these propagation characteristics on source localization, an AE signal pre-processing method combining plate wave theory and the wavelet packet transform is introduced, and a revised localization velocity is presented to reduce the effect of array size. The accuracy of rubbing localization based on conventional beamforming and on the improved method of this paper is compared in a rubbing test carried out on a rotating machinery test table. The results indicate that the improved method can localize the rub fault effectively. Copyright © 2013 Elsevier B.V. All rights reserved.
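
    The record does not reproduce its algorithm, but the baseline it improves on is conventional delay-and-sum beamforming over a grid of candidate source positions. The following is a minimal, hypothetical sketch of that baseline (all names, geometry and the assumed wave speed `c` are illustrative, not from the paper); the paper's contribution amounts to replacing the fixed `c` with a revised velocity derived from wavelet-packet pre-processing of the dispersive plate waves.

```python
import numpy as np

def das_map(signals, fs, sensors, grid, c):
    """Delay-and-sum beam power over a grid of candidate source positions.
    signals: (M, N) AE waveforms; sensors: (M, 2) and grid: (G, 2) positions;
    c: assumed (e.g. dispersion-corrected) wave speed in the plate."""
    M, N = signals.shape
    power = np.zeros(len(grid))
    for g, x in enumerate(grid):
        tof = np.linalg.norm(sensors - x, axis=1) / c        # travel times
        shifts = np.round((tof - tof.min()) * fs).astype(int)
        L = N - shifts.max()                                 # common overlap
        s = sum(signals[m, shifts[m]:shifts[m] + L] for m in range(M))
        power[g] = np.mean(s ** 2)                           # aligned-sum energy
    return power

# Synthetic check: a short burst emitted at a known point on a 1 m x 1 m plate
fs, c = 1_000_000.0, 3000.0
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
src = np.array([0.3, 0.4])
t = np.arange(2000) / fs

def burst(tt):
    # Gaussian-windowed tone burst standing in for an AE event
    return np.exp(-((tt - 2e-4) / 2e-5) ** 2) * np.sin(2 * np.pi * 1e5 * tt)

tof = np.linalg.norm(sensors - src, axis=1) / c
signals = np.array([burst(t - d) for d in tof])
gx, gy = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
best = grid[np.argmax(das_map(signals, fs, sensors, grid, c))]
```

    The grid point whose hypothesized delays best re-align the waveforms accumulates the most coherent energy; with an incorrect `c` (the dispersion problem the paper addresses) the peak shifts away from the true source.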

  16. An evaluation of kurtosis beamforming in magnetoencephalography to localize the epileptogenic zone in drug resistant epilepsy patients.

    PubMed

    Hall, Michael B H; Nissen, Ida A; van Straaten, Elisabeth C W; Furlong, Paul L; Witton, Caroline; Foley, Elaine; Seri, Stefano; Hillebrand, Arjan

    2018-06-01

    Kurtosis beamforming is a useful technique for analysing magnetoencephalography (MEG) data containing epileptic spikes. However, implementations vary and few studies measure concordance with subsequently resected areas. We evaluated kurtosis beamforming as a means of localizing spikes in drug-resistant epilepsy patients. We retrospectively applied kurtosis beamforming to MEG recordings of 22 epilepsy patients that had previously been analysed using equivalent current dipole (ECD) fitting. Virtual electrodes were placed in the kurtosis volumetric peaks and visually inspected to select a candidate source. The candidate sources were compared to the ECD localizations and resection areas. The kurtosis beamformer produced interpretable localizations in 18/22 patients, of which the candidate source coincided with the resection lobe in 9/13 seizure-free patients and in 3/5 patients with persistent seizures. The sublobar accuracy of the kurtosis beamformer with respect to the resection zone was higher than that of ECD (56% vs. 50%); however, ECD resulted in a higher lobar accuracy (75% vs. 67%). Kurtosis beamforming may provide additional value when spikes are not clearly discernible on the sensors, and may support ECD localizations when dipoles are scattered. Kurtosis beamforming should be integrated with existing clinical protocols to assist in localizing the epileptogenic zone. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  17. Auditory Localization: An Annotated Bibliography

    DTIC Science & Technology

    1983-11-01

    ...transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources...important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical

  18. Ictal and interictal electric source imaging in presurgical evaluation: a prospective study.

    PubMed

    Sharma, Praveen; Scherg, Michael; Pinborg, Lars H; Fabricius, Martin; Rubboli, Guido; Pedersen, Birthe; Leffers, Anne-Mette; Uldall, Peter; Jespersen, Bo; Brennum, Jannick; Mølby Henriksen, Otto; Beniczky, Sándor

    2018-05-11

    Accurate localization of the epileptic focus is essential for surgical treatment of patients with drug-resistant epilepsy. EEG source imaging (ESI) is increasingly used in presurgical evaluation. However, most previous studies analysed interictal discharges. Prospective studies comparing the feasibility and accuracy of interictal (II) and ictal (IC) ESI are lacking. We prospectively analysed long-term video-EEG recordings (LTM) of patients admitted for presurgical evaluation. We performed ESI of II and IC signals using two methods: equivalent current dipole (ECD) and distributed source model (DSM). LTM recordings employed the standard 25-electrode array (including inferior temporal electrodes). An age-matched template head model was used for source analysis. Results were compared with intracranial recordings (ICR), conventional neuroimaging methods (MRI, PET, SPECT) and outcome one year after surgery. Eighty-seven consecutive patients were analysed. ECD gave a significantly higher proportion of patients with localised focal abnormalities (94%) compared to MRI (70%), PET (66%) and SPECT (64%). Agreement between the ESI methods and ICR was moderate to substantial (κ = 0.56-0.79). Fifty-four patients underwent surgery (47 of them more than one year ago) and 62% of them became seizure-free. Localization accuracy of II-ESI was 51% for DSM and 57% for ECD; for IC-ESI it was 51% (DSM) and 62% (ECD). The differences between the ESI methods were not significant. Differences in localization accuracy between ESI and MRI (55%), PET (33%) and SPECT (40%) were also not significant. II and IC ESI of LTM data have high feasibility, and their localisation accuracy is similar to that of conventional neuroimaging methods. This article is protected by copyright. All rights reserved.

  19. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, which is a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment exhibits the advantage of Amiet-IMACS in localizing the sound source position more accurately than implementing IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  20. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and through a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements taken on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
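
    The core of this approach, an incoherently averaged broadband matched-field (Bartlett) processor, can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's implementation: it assumes free-space monopole replica fields rather than the tunnel's actual propagation model, and all geometry and frequencies are hypothetical.

```python
import numpy as np

def bartlett_broadband(receivers, p_meas, freqs, grid, c=1500.0):
    """Incoherent broadband Bartlett matched-field processor.
    receivers: (M, 3) positions; p_meas: (F, M) measured complex pressures,
    one row per frequency; grid: (G, 3) candidate source positions.
    Replicas are free-space monopole fields exp(-jkr)/r."""
    out = np.zeros(len(grid))
    for f, p in zip(freqs, p_meas):
        k = 2 * np.pi * f / c
        r = np.linalg.norm(grid[:, None, :] - receivers[None, :, :], axis=2)
        w = np.exp(-1j * k * r) / r                      # (G, M) replica fields
        w /= np.linalg.norm(w, axis=1, keepdims=True)
        pn = p / np.linalg.norm(p)
        out += np.abs(w @ pn.conj()) ** 2                # Bartlett power
    return out / len(freqs)                              # incoherent average

# Synthetic check: monopole at a known position, 8-element vertical line array
recv = np.column_stack([np.zeros(8), np.zeros(8), np.linspace(0.0, 3.5, 8)])
src = np.array([2.0, 0.0, 1.0])
freqs = np.linspace(2000.0, 6000.0, 9)
r_s = np.linalg.norm(recv - src, axis=1)
p_meas = np.array([np.exp(-1j * 2 * np.pi * f / 1500.0 * r_s) / r_s
                   for f in freqs])
# coarse search grid in the x-z plane
gx, gz = np.meshgrid(np.linspace(0.5, 3.5, 31), np.linspace(0.0, 3.0, 31))
grid = np.column_stack([gx.ravel(), np.zeros(gx.size), gz.ravel()])
best = grid[np.argmax(bartlett_broadband(recv, p_meas, freqs, grid))]
```

    Averaging the single-frequency correlation surfaces incoherently, as in the abstract, suppresses the sidelobe ambiguities that any one frequency exhibits while keeping the main peak at the source.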

  1. Imaging of tumor hypermetabolism with near-infrared fluorescence contrast agents

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Zheng, Gang; Zhang, Zhihong; Blessington, Dana; Intes, Xavier; Achilefu, Samuel I.; Chance, Britton

    2004-08-01

    We have developed a high-sensitivity near-infrared (NIR) optical imaging system for non-invasive cancer detection using molecularly labeled fluorescent contrast agents. NIR imaging can probe tissue deeply and thus has the potential for non-invasive detection of breast or lymph node cancer. Recently developed molecular beacons can selectively label various pre-cancer/cancer signatures and provide high tumor-to-background contrast. To increase the sensitivity in detecting fluorescent photons and the accuracy of localization, a phase cancellation (in- and anti-phase) device is employed. This frequency-domain system utilizes the interference-like pattern of the diffuse photon density wave to achieve high detection sensitivity and localization accuracy for a fluorescent heterogeneity embedded inside a scattering medium. The opto-electronic system consists of the laser sources, fiber optics, an interference filter to select the fluorescent photons, and a high-sensitivity photon detector (photomultiplier tube). The source-detector pair scans the tissue surface in multiple directions, and a two-dimensional localization image can be obtained using goniometric reconstruction. In vivo measurements with a tumor-bearing mouse model using the novel Cypate-mono-2-deoxy-glucose (Cypate-2-D-Glucosamide) fluorescent contrast agent, which targets enhanced tumor glycolysis, demonstrated the feasibility of detecting a 2 cm deep subsurface tumor in a tissue-like medium, with a localization accuracy within 2-3 mm. This instrument has the potential for tumor diagnosis and imaging, and its localization accuracy suggests that the system could help guide clinical fine-needle biopsy. This portable device would be complementary to X-ray mammography and provide add-on information for early diagnosis and localization of early breast tumors.

  2. Improved Bayesian Infrasonic Source Localization for regional infrasound

    DOE PAGES

    Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.

    2015-10-20

    The Bayesian Infrasonic Source Localization (BISL) methodology and the mathematical framework used therein are examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth deviation models. Frameworks for constructing propagation-based celerity-range and azimuth deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States, produced by rocket motor detonations at the Utah Test and Training Range, are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds in all cases.
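
    To make the von Mises ingredient concrete, here is a minimal sketch of the azimuth-only part of such a Bayesian localization: a grid posterior built from back-azimuth detections with von Mises errors. This is an illustrative reduction, not BISL itself; it omits the celerity-range likelihood and the origin-time dimension, and all station positions and the concentration `kappa` are hypothetical.

```python
import numpy as np

def vonmises_logpdf(theta, mu, kappa):
    # log density of the von Mises distribution (np.i0 = modified Bessel I0)
    return kappa * np.cos(theta - mu) - np.log(2 * np.pi * np.i0(kappa))

def azimuth_log_posterior(stations, obs_az, kappa, grid):
    """Unnormalized log-posterior over candidate source locations, given one
    back-azimuth detection per station, von Mises errors, flat spatial prior."""
    logp = np.zeros(len(grid))
    for (sx, sy), az in zip(stations, obs_az):
        # predicted back-azimuth to each grid point, clockwise from north (+y)
        pred = np.arctan2(grid[:, 0] - sx, grid[:, 1] - sy)
        logp += vonmises_logpdf(az, pred, kappa)
    return logp

# Synthetic check: three arrays observing the true azimuths exactly
stations = [(-100.0, 0.0), (100.0, 0.0), (0.0, -120.0)]
src = np.array([20.0, 40.0])
obs_az = [np.arctan2(src[0] - sx, src[1] - sy) for sx, sy in stations]
gx, gy = np.meshgrid(np.linspace(-50, 50, 11), np.linspace(-50, 50, 11))
grid = np.column_stack([gx.ravel(), gy.ravel()])
logp = azimuth_log_posterior(stations, obs_az, kappa=10.0, grid=grid)
best = grid[np.argmax(logp)]
```

    The paper's propagation-based priors sharpen this picture further by replacing the fixed `kappa` and a naive range model with statistics derived from stochastic propagation modelling.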

  3. Integration of Heterogenous Digital Surface Models

    NASA Astrophysics Data System (ADS)

    Boesch, R.; Ginzler, C.

    2011-08-01

    The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high-resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images at 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41,000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition for the ADS-DSM is also stretched over several years, and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high-resolution data depend on the local accuracy of the surface model used, therefore precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM map contains matching codes such as high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM, only first- and last-pulse data were available.
    Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100 m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure of the spatial distribution of raw points in this local area. Combining the FOM map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1 m. If the local analysis of the FOM map within the 100 m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution are sufficient, the LiDAR-DSM is used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density, or the variance of triangle areas exceeds a threshold, the investigated location is marked as a NODATA area. In a future implementation ("anisotropic fusion"), an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM map and the local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).

  4. Method and system for determining radiation shielding thickness and gamma-ray energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klann, Raymond T.; Vilim, Richard B.; de la Barrera, Sergio

    2015-12-15

    A system and method for determining the shielding thickness of a detected radiation source. The gamma ray spectrum of a radiation detector is utilized to estimate the shielding between the detector and the radiation source. The determination of the shielding may be used to adjust the information from known source-localization techniques to provide improved performance and accuracy of locating the source of radiation.

  5. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431

  6. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one, and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one, and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.

  7. Determination of localization accuracy based on experimentally acquired image sets: applications to single molecule microscopy

    PubMed Central

    Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.

    2015-01-01

    Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
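
    The Fisher-information calculation underlying this record can be illustrated numerically for the one analytically tractable special case: a point source with a 2D Gaussian image profile under Poisson statistics, for which the CRLB approaches the familiar σ/√N rule. The sketch below is that textbook special case, not the paper's spline-based method (which generalizes the same Fisher-information sum to experimentally measured, model-free images); pixel size and photon count are illustrative.

```python
import numpy as np

def crlb_x(sigma=1.3, n_photons=500, pix=0.05, half=120):
    """Numerical CRLB (best possible std. dev.) for the x-position of a point
    source imaged with a 2D Gaussian profile under Poisson noise, no
    background. Fisher information: I_x = sum_k (d mu_k / d x0)^2 / mu_k."""
    xs = (np.arange(-half, half + 1) + 0.5) * pix        # pixel centers
    X, Y = np.meshgrid(xs, xs)
    g = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))
    mu = n_photons * g / g.sum()                         # expected counts/pixel
    dmu_dx = mu * X / sigma ** 2                         # d mu / d x0 at x0 = 0
    fisher = (dmu_dx ** 2 / mu).sum()
    return 1.0 / np.sqrt(fisher)

# For a Gaussian profile this approaches the sigma / sqrt(N) rule:
val = crlb_x()   # close to 1.3 / sqrt(500) ≈ 0.0581
```

    Replacing the analytical `g` with a spline fitted to an experimentally acquired image set, as the paper proposes, leaves the Fisher-information sum unchanged while removing the need for an analytical image model.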

  8. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE PAGES

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrival (TOA) and time difference of arrival (TDOA) measurements to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can pose accuracy challenges because of the existence of measurement errors, and efficiency challenges that lead to high computational burdens. Least-squares-based and maximum-likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
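
    A representative member of the least-squares family surveyed here is iterative Gauss-Newton on the hyperbolic (TDOA) residuals. The sketch below is a generic illustration with hypothetical sensor geometry, not an algorithm taken from the review; it assumes TDOAs measured relative to sensor 0 and a known propagation speed.

```python
import numpy as np

def tdoa_localize(sensors, tdoas, c=343.0, iters=50):
    """Iterative least-squares (Gauss-Newton) source localization from TDOAs
    measured relative to sensor 0, for a signal propagating at speed c."""
    sensors = np.asarray(sensors, float)
    d = c * np.asarray(tdoas, float)              # range differences r_i - r_0
    x = sensors.mean(axis=0)                      # start at the array centroid
    for _ in range(iters):
        r = np.linalg.norm(sensors - x, axis=1)   # ranges to current estimate
        res = (r[1:] - r[0]) - d                  # hyperbolic residuals
        u = (x - sensors) / r[:, None]            # d r_i / d x (unit vectors)
        J = u[1:] - u[0]                          # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x

# Synthetic check: four sensors, noiseless TDOAs from a known source
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
src = np.array([3.0, 7.0])
r_true = np.linalg.norm(np.asarray(sensors, float) - src, axis=1)
tdoas = (r_true[1:] - r_true[0]) / 343.0
est = tdoa_localize(sensors, tdoas)
```

    With measurement noise, the same iteration returns the nonlinear least-squares estimate; the review's accuracy discussion concerns exactly how such estimators behave as those residuals become noisy.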

  9. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrival (TOA) and time difference of arrival (TDOA) measurements to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can pose accuracy challenges because of the existence of measurement errors, and efficiency challenges that lead to high computational burdens. Least-squares-based and maximum-likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  10. The effect of using genealogy-based haplotypes for genomic prediction

    PubMed Central

    2013-01-01

    Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971

  11. The effect of using genealogy-based haplotypes for genomic prediction.

    PubMed

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
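
    Model (1) in this record, a single shared normal prior across all covariate effects, is computationally equivalent to ridge regression with a penalty set by the variance ratio. A minimal sketch of that equivalence, with entirely synthetic data standing in for the haplotype covariate matrix:

```python
import numpy as np

def blup_effects(Z, y, lam):
    """Covariate-effect BLUP under one shared normal prior (model 1 in the
    record): solve (Z'Z + lam*I) b = Z'y, i.e. ridge regression, with
    lam = sigma_e^2 / sigma_b^2."""
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

# Synthetic check: noiseless phenotypes are recovered when lam is tiny
rng = np.random.default_rng(1)
Z = rng.integers(0, 3, size=(200, 30)).astype(float)   # covariate counts 0/1/2
b_true = rng.normal(0.0, 0.5, size=30)
y = Z @ b_true
b_hat = blup_effects(Z, y, lam=1e-8)
```

    The Bayesian mixture method (2) instead sets a proportion π of effects exactly to zero, which cannot be written as a single linear solve and is typically sampled by MCMC.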

  12. Effect of conductor geometry on source localization: Implications for epilepsy studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlitt, H.; Heller, L.; Best, E.

    1994-07-01

    We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy obtained by replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to those obtained using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.

  13. Accurate Energies and Orbital Description in Semi-Local Kohn-Sham DFT

    NASA Astrophysics Data System (ADS)

    Lindmaa, Alexander; Kuemmel, Stephan; Armiento, Rickard

    2015-03-01

We present our progress on a scheme in semi-local Kohn-Sham density-functional theory (KS-DFT) for improving the orbital description while retaining the level of accuracy of the usual semi-local exchange-correlation (xc) functionals. DFT is a widely used tool for first-principles calculations of materials properties. A given task normally requires a balance of accuracy and computational cost, which is well achieved with semi-local DFT. However, commonly used semi-local xc functionals have important shortcomings, which can often be attributed to features of the corresponding xc potential. One shortcoming is an overly delocalized representation of localized orbitals. Recently a semi-local GGA-type xc functional was constructed to address these issues; however, it trades off some accuracy in the total energy. We discuss the source of this error in terms of a surplus energy contribution in the functional that needs to be accounted for, and offer a remedy that formally stays within KS-DFT and does not drastically increase the computational effort. The end result is a scheme that combines accurate total energies (e.g., relaxed geometries) with an improved orbital description (e.g., improved band structure).

  14. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

Functional brain imaging and source localization based on the scalp's potential field require the solution of an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full-resolution lead-field, with a localization accuracy that was not significantly different from that of an exhaustive search through a fully sampled source space. The technique is therefore applicable for use with anatomically realistic, subject-specific forward models for applications with spatially concentrated source activity.

  15. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.

    PubMed

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-10-27

Indoor positioning of mobile devices plays a key role in many aspects of our daily life, including real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning remains a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging problem by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.

  16. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments

    PubMed Central

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-01-01

Indoor positioning of mobile devices plays a key role in many aspects of our daily life, including real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning remains a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging problem by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis. PMID:29077071

  17. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    PubMed

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

Direct position determination (DPD) is currently a hot topic in wireless localization research, as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have insufficient degrees of freedom and limited estimation accuracy. To improve DPD accuracy, this paper introduces a coprime array into the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but also improves the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  18. Investigations on the effect of frequency and noise in a localization technique based on microwave imaging for an in-body RF source

    NASA Astrophysics Data System (ADS)

    Chandra, Rohit; Balasingham, Ilangko

    2015-05-01

Localization of a wireless capsule endoscope has many clinical applications, from diagnostics to therapy. There are potentially two approaches to electromagnetic-wave-based localization: a) signal-propagation-model-based localization using a priori information about the person's dielectric channels, and b) the recently developed microwave-imaging-based localization, which uses no a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of localization accuracy over a variety of frequencies and signal-to-noise ratios. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom provides an electromagnetic model with the electrical properties (relative permittivity and conductivity) of the internal parts of the body and can serve as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low resolution image with high contrast in the dielectric properties. At high frequencies, however, the imaging algorithm is able to image only the outer boundaries of the tissues due to low penetration depth, as higher frequency means higher attenuation. Furthermore, the recently developed localization method based on microwave imaging is used to estimate the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of frequency becomes critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.

  19. Classification of event location using matched filters via on-floor accelerometers

    NASA Astrophysics Data System (ADS)

    Woolard, Americo G.; Malladi, V. V. N. Sriram; Alajlouni, Sa'ed; Tarazaga, Pablo A.

    2017-04-01

Recent years have shown prolific advancements in smart infrastructures, allowing buildings of the modern world to interact with their occupants. One of the sought-after attributes of smart buildings is the ability to provide unobtrusive, indoor localization of occupants. The ability to locate occupants indoors can provide a broad range of benefits in areas such as security, emergency response, and resource management. Recent research has shown promising results in occupant building localization, although there is still significant room for improvement. This study presents a passive, small-scale localization system using accelerometers placed around the edges of a small area in an active building environment. The area is discretized into a grid of small squares, and vibration measurements are processed using a pattern matching approach that estimates the location of the source. Vibration measurements are produced with ball-drops, hammer-strikes, and footsteps as the sources of the floor excitation. The developed approach uses matched filters based on a reference data set, and the location is classified using a nearest-neighbor search. This approach detects the correct location of impact-like sources, i.e., the ball-drops and hammer-strikes, with 100% accuracy. However, the accuracy drops to 56% for footsteps, with the average localization result lying within 0.6 m (α = 0.05) of the true source location. While the need for a reference data set can make this method difficult to implement on a large scale, it may be used to provide accurate localization in areas where training data is readily obtainable. This exploratory work examines the feasibility of the matched filter and nearest-neighbor search approach for footstep and event localization in a small, instrumented area within a multi-story building.
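The matched-filter-plus-nearest-neighbor classification described in the record above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the grid-cell names, reference templates, and noisy measurement below are invented for the example, and the score is simply the peak of the normalized cross-correlation against each cell's reference waveform.

```python
import numpy as np

def matched_filter_score(signal, template):
    # peak of the normalised cross-correlation between measurement and template
    s = signal - signal.mean()
    tpl = template - template.mean()
    s = s / (np.linalg.norm(s) + 1e-12)
    tpl = tpl / (np.linalg.norm(tpl) + 1e-12)
    return np.max(np.correlate(s, tpl, mode="full"))

def classify_location(signal, templates):
    # nearest-neighbour search: the grid cell whose reference template
    # correlates best with the measurement wins
    scores = {cell: matched_filter_score(signal, tpl)
              for cell, tpl in templates.items()}
    return max(scores, key=scores.get)

# invented reference set: two grid cells with distinct floor responses
t = np.linspace(0.0, 1.0, 200)
templates = {"cell_A": np.sin(2 * np.pi * 5 * t),
             "cell_B": np.sin(2 * np.pi * 12 * t)}
rng = np.random.default_rng(0)
measurement = templates["cell_A"] + 0.3 * rng.standard_normal(t.size)
print(classify_location(measurement, templates))
```

In practice the template bank would hold one recorded reference per floor square and per excitation type, which is why the method needs training data that is readily obtainable.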

  20. Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements.

    PubMed

    Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George

    2017-08-15

    A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
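Because each TDOA defines a hyperbola, the conventional (non-neural) baseline the record compares against is iterative least squares on the range-difference equations. The sketch below is such a baseline, not the paper's LPNN: sensor positions and the noise-free source are invented, and Gauss-Newton is used to minimize the residuals.

```python
import numpy as np

def locate_tdoa(sensors, tdoas, x0, c=343.0, iters=50):
    # Gauss-Newton least squares on range-difference residuals
    # r_i = (|x - s_i| - |x - s_0|) - c * tdoa_i,  i = 1..N-1
    x = np.asarray(x0, dtype=float)
    tdoas = np.asarray(tdoas, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)
        u = (x - sensors) / d[:, None]      # unit vectors from each sensor to x
        J = u[1:] - u[0]                    # Jacobian of the range differences
        r = (d[1:] - d[0]) - c * tdoas
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x

# invented geometry: four sensors at the corners of a 10 m square
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([3.0, 4.0])
d_true = np.linalg.norm(sensors - src, axis=1)
tdoas = (d_true[1:] - d_true[0]) / 343.0    # noise-free TDOAs w.r.t. sensor 0
est = locate_tdoa(sensors, tdoas, x0=[5.0, 5.0])
print(np.round(est, 6))
```

With noisy TDOAs one would weight the residuals by the measurement covariance; the nonlinearity of these residuals is exactly what motivates constrained-optimization formulations such as the LPNN in the record above.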

  1. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while exploiting signals already present indoors. Due to the unavailability of 802.11ah hardware, we evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
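MFAM's internals are not given in the abstract. As a minimal, hedged illustration of why fusing two frequency bands can help, the sketch below inverts a standard log-distance path-loss model per band and averages the resulting distance estimates; all calibration constants (reference powers, path-loss exponents, readings) are hypothetical.

```python
def rssi_to_distance(rssi_dbm, p0_dbm, n, d0=1.0):
    # invert the log-distance path-loss model: rssi = p0 - 10*n*log10(d/d0)
    return d0 * 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def fused_distance(readings):
    # readings: one (rssi, p0, path-loss exponent) tuple per frequency band;
    # fuse by averaging the per-band distance estimates
    estimates = [rssi_to_distance(rssi, p0, n) for rssi, p0, n in readings]
    return sum(estimates) / len(estimates)

# hypothetical calibration for 2.4 GHz Wi-Fi and 868 MHz readings of one AP
d = fused_distance([(-60.0, -40.0, 2.0), (-52.0, -32.0, 2.0)])
print(round(d, 2))  # both bands imply d = 10.0 m here
```

In a real multi-frequency system the two bands fade differently through walls, so the averaged (or variance-weighted) estimate is typically more stable than either band alone.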

  2. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    PubMed Central

    Juric, Matjaz B.

    2018-01-01

This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while exploiting signals already present indoors. Due to the unavailability of 802.11ah hardware, we evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage. PMID:29587352

  3. Localization of diffusion sources in complex networks with sparse observations

    NASA Astrophysics Data System (ADS)

    Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng

    2018-04-01

Locating sources in a large network is of paramount importance for reducing the spread of disruptive behavior. Based on the backward diffusion-based method and integer programming, we propose an efficient approach to locate sources in complex networks with limited observers. The results on model networks and empirical networks demonstrate that, for a given fraction of observers, the accuracy of our method for source localization improves as network size increases. Moreover, compared with the previous method (the maximum-minimum method), our method performs much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust against noisy environments and against different strategies for choosing observers.
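A minimal baseline for observer-based source localization on a network (not the integer-programming method of the record above) scores each candidate node by how consistent the observers' arrival times are with a single common start time under unit propagation speed: at the true source, every offset t_m - dist(source, m) equals the start time, so their variance is zero. The toy graph and arrival times below are invented.

```python
from collections import deque

def bfs_dist(adj, start):
    # unweighted shortest-path distances from start to every reachable node
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, arrivals):
    # arrivals: {observer: arrival time}; score each candidate source by the
    # variance of the implied start times t_m - dist(candidate, m)
    best, best_var = None, float("inf")
    for s in adj:
        d = bfs_dist(adj, s)
        offsets = [arrivals[m] - d[m] for m in arrivals]
        mu = sum(offsets) / len(offsets)
        var = sum((o - mu) ** 2 for o in offsets)
        if var < best_var:
            best, best_var = s, var
    return best

# toy network: path 0-1-2-3-4 with a branch 2-5; true source = 2, start t0 = 1
adj = {0: [1], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
arrivals = {0: 3, 4: 3, 5: 2}   # t0 + hop distance from node 2
print(locate_source(adj, arrivals))
```

The sparse-observer difficulty the record addresses shows up here directly: with too few observers several candidates can tie at zero variance, which is what the paper's integer-programming formulation is designed to resolve.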

  4. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  5. Evaluation of a head-repositioner and Z-plate system for improved accuracy of dose delivery.

    PubMed

    Charney, Sarah C; Lutz, Wendell R; Klein, Mary K; Jones, Pamela D

    2009-01-01

    Radiation therapy requires accurate dose delivery to targets often identifiable only on computed tomography (CT) images. Translation between the isocenter localized on CT and laser setup for radiation treatment, and interfractional head repositioning are frequent sources of positioning error. The objective was to design a simple, accurate apparatus to eliminate these sources of error. System accuracy was confirmed with phantom and in vivo measurements. A head repositioner that fixates the maxilla via dental mold with fiducial marker Z-plates attached was fabricated to facilitate the connection between the isocenter on CT and laser treatment setup. A phantom study targeting steel balls randomly located within the head repositioner was performed. The center of each ball was marked on a transverse CT slice on which six points of the Z-plate were also visible. Based on the relative position of the six Z-plate points and the ball center, the laser setup position on each Z-plate and a top plate was calculated. Based on these setup marks, orthogonal port films, directed toward each target, were evaluated for accuracy without regard to visual setup. A similar procedure was followed to confirm accuracy of in vivo treatment setups in four dogs using implanted gold seeds. Sequential port films of three dogs were made to confirm interfractional accuracy. Phantom and in vivo measurements confirmed accuracy of 2 mm between isocenter on CT and the center of the treatment dose distribution. Port films confirmed similar accuracy for interfractional treatments. The system reliably connects CT target localization to accurate initial and interfractional radiation treatment setup.

  6. Precision time distribution within a deep space communications complex

    NASA Technical Reports Server (NTRS)

    Curtright, J. B.

    1972-01-01

The Precision Time Distribution System (PTDS) at the Goldstone Deep Space Communications Complex is a practical application of existing technology to the solution of a local problem. The problem was to synchronize four station timing systems to a master source with a relative accuracy consistently and significantly better than 10 microseconds. The solution involved combining a precision timing source, an automatic error detection assembly, and a microwave distribution network into an operational system. Upon activation of the completed PTDS two years ago, synchronization accuracy at Goldstone (two-station relative) improved by an order of magnitude. The validation of the PTDS mechanization is now considered complete. Other facilities with site dispersion and synchronization accuracy requirements similar to Goldstone's may find the PTDS mechanization useful in solving their problem. At present, the two-station relative synchronization accuracy at Goldstone is better than one microsecond.

  7. A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Kulisek, Jonathan A.; Wittman, Richard S.; Miller, Erin A.; Kernan, Warnick J.; McCall, Jonathon D.; McConn, Ron J.; Schweppe, John E.; Seifert, Carolyn E.; Stave, Sean C.; Stewart, Trevor N.

    2018-01-01

    A three-dimensional look-up library consisting of simulated gamma-ray spectra was developed to leverage, in real-time, the abundance of data provided by a helicopter-mounted gamma-ray detection system consisting of 92 CsI-based radiation sensors and exhibiting a highly angular-dependent response. We have demonstrated how this library can be used to help effectively estimate the terrestrial gamma-ray background, develop simulated flight scenarios, and to localize radiological sources. Source localization accuracy was significantly improved, particularly for weak sources, by estimating the entire gamma-ray spectra while accounting for scattering in the air, and especially off the ground.

  8. Accuracy of Estimating Highly Eccentric Binary Black Hole Parameters with Gravitational-wave Detections

    NASA Astrophysics Data System (ADS)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-03-01

Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries' orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙-30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M_tot (10 M_tot).
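The Fisher matrix technique mentioned above bounds parameter accuracy via the Cramér-Rao inequality: for a deterministic signal h(θ) in white noise of standard deviation σ, F = JᵀJ/σ² with J the Jacobian of h with respect to θ, and the attainable standard deviation of parameter i is at least sqrt((F⁻¹)_ii). The generic numerical sketch below uses a sinusoid as an arbitrary stand-in waveform model (not the paper's eccentric-binary waveform).

```python
import numpy as np

def fisher_matrix(model, theta, t, sigma):
    # Fisher matrix for a deterministic signal h(theta, t) in white Gaussian
    # noise: F = J^T J / sigma^2, with J built by central finite differences
    eps = 1e-6
    cols = []
    for i in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tm = tp.copy()
        tp[i] += eps
        tm[i] -= eps
        cols.append((model(tp, t) - model(tm, t)) / (2.0 * eps))
    J = np.stack(cols, axis=1)
    return J.T @ J / sigma**2

# stand-in waveform model: amplitude A and frequency f of a sinusoid
model = lambda th, t: th[0] * np.sin(2.0 * np.pi * th[1] * t)
t = np.linspace(0.0, 1.0, 1000)
F = fisher_matrix(model, [1.0, 10.0], t, sigma=0.1)
crlb = np.sqrt(np.diag(np.linalg.inv(F)))   # lower bounds on std of (A, f)
print(crlb)
```

The same recipe underlies the quoted accuracy factors: a longer or louder signal enlarges F, shrinking the diagonal of F⁻¹ and hence the bound on each parameter's uncertainty.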

  9. Discussion of Source Reconstruction Models Using 3D MCG Data

    NASA Astrophysics Data System (ADS)

    Melis, Massimo De; Uchikawa, Yoshinori

In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied to the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model achieves the best accuracy in the source reconstructions, and that 3D MCG data allow smaller differences between the source models to be detected.

  10. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268

  11. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    PubMed Central

    Kamminga, Jacob; Le, Duc; Havinga, Paul

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

Wu, Qishi; Berry, M. L.; Grieme, M.

We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distances (ROSD) method. Compared with the triangulation-based method, the advantages of ROSD are multi-fold: i) source location estimates based on four detectors are more accurate, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate, as opposed to the two real roots (if any) of triangulation, and obviates the need to identify phantom roots during clustering.
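The closed-form ROSD estimator itself is not reproduced in the abstract, but the principle behind it, that under the inverse-square law the product count × distance² is the same constant for every detector when evaluated at the true source position, can be illustrated with a brute-force grid search. The detector layout, source position, and noise-free count rates below are invented.

```python
import itertools

def ratio_mismatch(p, detectors, counts):
    # inverse-square law: counts_i ~ A / d_i^2, so counts_i * d_i^2 equals
    # the same constant A for every detector when p is the true source
    prods = [c * ((p[0] - x) ** 2 + (p[1] - y) ** 2)
             for (x, y), c in zip(detectors, counts)]
    mean = sum(prods) / len(prods)
    return sum((q - mean) ** 2 for q in prods)

def grid_search(detectors, counts, lo=0.0, hi=10.0, step=0.1):
    # exhaustive search over a square grid of candidate source positions
    pts = [lo + step * k for k in range(int(round((hi - lo) / step)) + 1)]
    return min(itertools.product(pts, pts),
               key=lambda p: ratio_mismatch(p, detectors, counts))

# invented layout: four detectors, noise-free inverse-square count rates
detectors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
src, activity = (3.0, 4.0), 5000.0
counts = [activity / ((src[0] - x) ** 2 + (src[1] - y) ** 2)
          for x, y in detectors]
est = grid_search(detectors, counts)
print(est)
```

The closed-form ROSD solution replaces this grid search with an exact algebraic solve of the same distance-ratio constraints, which is what removes the imaginary-root and phantom-root issues of triangulation.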

  13. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

EEG source imaging enables us to reconstruct current density in the brain from electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions, because the number of EEG sensors is usually much smaller than the number of potential dipole locations, and the recorded signals are contaminated by noise. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization enhances sparsity and accelerates computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy, and focalization degree. The application to source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529

  14. Effect of EEG electrode density on dipole localization accuracy using two realistically shaped skull resistivity models.

    PubMed

    Laarne, P H; Tenhunen-Eskelinen, M L; Hyttinen, J K; Eskola, H J

    2000-01-01

    The effect of the number of EEG electrodes on dipole localization was studied by comparing results obtained using the 10-20 and 10-10 electrode systems. Two anatomically detailed models with skull resistivity values of 177.6 Ωm and 67.0 Ωm were applied. Simulated potential values generated by current dipoles were applied to different combinations of the volume conductors and electrode systems. The high- and low-resistivity models differed slightly, in favour of the lower skull resistivity model, when dipole localization was based on noiseless data. The localization errors were approximately three times larger when the low-resistivity model was used to generate the potentials but the high-resistivity model was applied in the inverse solution. The difference between the two electrode systems was minor, in favour of the 10-10 electrode system, when simulated noiseless potentials were used. In the presence of noise the dipole localization algorithm operated more accurately using the denser electrode system. In conclusion, increasing the number of recording electrodes seems to improve localization accuracy in the presence of noise. The absolute skull resistivity value also affects the accuracy, but using an incorrect value in modelling calculations seems to be the most serious source of error.

  15. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
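    The likelihood-product idea at the core of BISL can be sketched as a toy grid search: each candidate origin is scored by the product of per-station Gaussian likelihoods of the observed arrival times. The station layout, celerity, and noise scale below are invented, and the real method additionally marginalizes over origin time and the celerity prior rather than fixing them.

```python
import math

def likelihood(src, stations, arrivals, celerity=0.30, sigma=10.0):
    """Product of per-station Gaussian likelihoods of arrival times
    for a candidate source at src (coordinates in km, times in s)."""
    p = 1.0
    for (sx, sy), t_obs in zip(stations, arrivals):
        dist = math.hypot(src[0] - sx, src[1] - sy)
        t_pred = dist / celerity          # km / (km/s) -> s
        p *= math.exp(-0.5 * ((t_obs - t_pred) / sigma) ** 2)
    return p

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_src = (40.0, 30.0)
arrivals = [math.hypot(true_src[0] - x, true_src[1] - y) / 0.30
            for x, y in stations]
# brute-force grid over candidate origins (BISL instead evaluates
# closed-form integrals at each grid node)
best = max(((x, y) for x in range(0, 101, 5) for y in range(0, 101, 5)),
           key=lambda s: likelihood(s, stations, arrivals))
print(best)  # (40, 30)
```

The paper's contribution is precisely that the per-node integral this grid search would otherwise evaluate numerically can be written in terms of standard functions.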

  16. Application of Gauss's theorem to quantify localized surface emissions from airborne measurements of wind and trace gases

    DOE PAGES

    Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; ...

    2017-09-13

    Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h⁻¹ for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
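    The flux-integral idea can be illustrated with a toy discretization of the closed-loop integral: by Gauss's theorem, the net outward tracer flux through a closed cylinder enclosing the source equals the emission rate inside (at steady state, with no loss through the top). The loop geometry, layer depth, and tracer values below are invented, and the sketch ignores the vertical stacking of loops and the Reynolds-decomposition details of the actual method.

```python
import math

def emission_rate(samples, radius, dz):
    """Discretized flux integral around one circular flight loop.
    samples: list of (angle_rad, conc_anomaly, radial_wind) tuples,
    with concentration anomaly in kg/m^3 and outward wind in m/s.
    Returns the integrated flux (kg/s) through a layer of depth dz."""
    n = len(samples)
    dl = 2 * math.pi * radius / n          # arc length per sample (m)
    return sum(c * u for _, c, u in samples) * dl * dz

# synthetic loop: the tracer exits only through a narrow downwind sector
samples = [(2 * math.pi * k / 360,
            1.0 if 170 <= k <= 190 else 0.0,   # kg/m^3 anomaly
            2.0)                               # m/s outward wind
           for k in range(360)]
print(emission_rate(samples, radius=1000.0, dz=10.0))
```

Summing such layer estimates over the stacked flight altitudes approximates the full surface integral; the paper's point is that the turbulent-flux term this discretization would otherwise miss is captured by the Reynolds decomposition.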

  17. Application of Gauss's theorem to quantify localized surface emissions from airborne measurements of wind and trace gases

    NASA Astrophysics Data System (ADS)

    Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; Suard, Maxime; Lenschow, Donald H.; Sweeney, Colm; Herndon, Scott; Schwietzke, Stefan; Pétron, Gabrielle; Pifer, Justin; Kort, Eric A.; Schnell, Russell

    2017-09-01

    Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h⁻¹ for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. 
Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.

  18. Application of Gauss's theorem to quantify localized surface emissions from airborne measurements of wind and trace gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit

    Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h⁻¹ for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.

  19. Tracking Accuracy of a Real-Time Fiducial Tracking System for Patient Positioning and Monitoring in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat

    Purpose: In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. Methods and Materials: The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. Results: The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. Conclusions: This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.

  20. Tracking accuracy of a real-time fiducial tracking system for patient positioning and monitoring in radiation therapy.

    PubMed

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W

    2010-11-15

    In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Olfactory source localization in the open field using one or both nostrils.

    PubMed

    Welge-Lussen, A; Looser, G L; Westermann, B; Hummel, T

    2014-03-01

    This study aims to examine humans' abilities to localize odorants in the open field. Young participants were tested on a localization task using a relatively selective olfactory stimulus (2-phenylethyl alcohol, PEA) and cineol, an odorant with a strong trigeminal component. Participants were blindfolded and had to localize an odorant source at a 2 m distance (far-field condition) and a 0.4 m distance (near-field condition) with either both nostrils open or only one nostril open. For the odorant with trigeminal properties, the number of correct trials did not differ when one or both nostrils were used, while more PEA localization trials were completed correctly with both rather than one nostril. In the near-field condition, correct localization was possible in 72-80% of the trials, irrespective of the odorant and the number of nostrils used. Localization accuracy, measured as spatial deviation from the olfactory source, was significantly higher in the near-field compared to the far-field condition, but independent of the odorant being localized. Odorant localization in the open field is difficult, but possible. In contrast to the general view, humans seem to be able to exploit the two-nostril advantage with increasing task difficulty.

  2. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
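    As a minimal illustration of TDoA-based multilateration, the sketch below brute-forces a position grid instead of the iterative SLS/HLS or closed-form solvers the survey compares; the sensor layout, source position, and propagation speed are invented.

```python
import math

C = 343.0  # m/s, assumed propagation speed (acoustic)

def tdoa_cost(src, sensors, tdoas):
    """Sum of squared residuals between predicted and measured TDoAs,
    with the first sensor as the reference."""
    d0 = math.dist(src, sensors[0])
    cost = 0.0
    for sensor, td in zip(sensors[1:], tdoas):
        pred = (math.dist(src, sensor) - d0) / C
        cost += (pred - td) ** 2
    return cost

sensors = [(0, 0), (10, 0), (0, 10), (10, 10)]   # metres
true_src = (3.0, 7.0)
tdoas = [(math.dist(true_src, s) - math.dist(true_src, sensors[0])) / C
         for s in sensors[1:]]
# brute-force grid search (a stand-in for Newton-Raphson-based HLS)
best = min(((x / 10, y / 10) for x in range(101) for y in range(101)),
           key=lambda s: tdoa_cost(s, sensors, tdoas))
print(best)  # (3.0, 7.0)
```

The iterative methods in the survey minimize essentially this cost function; their differences lie in how the minimum is reached and how sampling errors in the time variables propagate into the estimate.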

  3. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  4. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be substantially improved.

  5. Localizer: fast, accurate, open-source, and modular software package for superresolution microscopy

    PubMed Central

    Duwé, Sam; Neely, Robert K.; Zhang, Jin

    2012-01-01

    Abstract. We present Localizer, a freely available and open source software package that implements the computational data processing inherent to several types of superresolution fluorescence imaging, such as localization (PALM/STORM/GSDIM) and fluctuation imaging (SOFI/pcSOFI). Localizer delivers high accuracy and performance and comes with a fully featured and easy-to-use graphical user interface but is also designed to be integrated in higher-level analysis environments. Due to its modular design, Localizer can be readily extended with new algorithms as they become available, while maintaining the same interface and performance. We provide front-ends for running Localizer from Igor Pro, Matlab, or as a stand-alone program. We show that Localizer performs favorably when compared with two existing superresolution packages, and to our knowledge is the only freely available implementation of SOFI/pcSOFI microscopy. By dramatically improving the analysis performance and ensuring the easy addition of current and future enhancements, Localizer strongly improves the usability of superresolution imaging in a variety of biomedical studies. PMID:23208219

  6. Design and optimization of a brachytherapy robot

    NASA Astrophysics Data System (ADS)

    Meltsner, Michael A.

    Trans-rectal ultrasound guided (TRUS) low dose rate (LDR) interstitial brachytherapy has become a popular procedure for the treatment of prostate cancer, the most common type of non-skin cancer among men. The current TRUS technique of LDR implantation may result in less than ideal coverage of the tumor with increased risk of negative response such as rectal toxicity and urinary retention. This technique is limited by the skill of the physician performing the implant, the accuracy of needle localization, and the inherent weaknesses of the procedure itself. The treatment may require 100 or more sources and 25 needles, compounding the inaccuracy of the needle localization procedure. A robot designed for prostate brachytherapy may increase the accuracy of needle placement while minimizing the effect of physician technique in the TRUS procedure. Furthermore, a robot may improve associated toxicities by utilizing angled insertions and freeing implantations from constraints applied by the 0.5 cm-spaced template used in the TRUS method. Within our group, Lin et al. have designed a new type of LDR source. The "directional" source is a seed designed to be partially shielded. Thus, a directional, or anisotropic, source does not emit radiation in all directions. The source can be oriented to irradiate cancerous tissues while sparing normal ones. This type of source necessitates a new, highly accurate method for localization in 6 degrees of freedom. A robot is the best way to accomplish this task accurately. The following presentation of work describes the invention and optimization of a new prostate brachytherapy robot that fulfills these goals. Furthermore, some research has been dedicated to the use of the robot to perform needle insertion tasks (brachytherapy, biopsy, RF ablation, etc.) in nearly any other soft tissue in the body. This can be accomplished with the robot combined with automatic, magnetic tracking.

  7. SQUID (superconducting quantum interference device) arrays for simultaneous magnetic measurements: Calibration and source localization performance

    NASA Astrophysics Data System (ADS)

    Kaufman, Lloyd; Williamson, Samuel J.; Costa Ribeiro, P.

    1988-02-01

    Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.

  8. Localizing gravitational wave sources with single-baseline atom interferometers

    NASA Astrophysics Data System (ADS)

    Graham, Peter W.; Jung, Sunghoon

    2018-02-01

    Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. We show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect since these are the highest frequencies in which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single-baseline orbits around the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, and making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.
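    The Doppler mechanism described above can be reduced to a toy formula: the observed frequency is modulated by v·n̂/c as the detector orbits, and the modulation amplitude depends on the source's angle from the orbital pole. The constants and geometry below are illustrative only, not the paper's full timing model.

```python
import math

C = 3.0e8          # m/s, speed of light
V_ORBIT = 3.0e4    # m/s, approximate orbital speed of the Earth

def doppler_shift(f0, theta, phase):
    """Observed frequency for a monochromatic source at angle theta
    from the orbital pole, at a given orbital phase: the line-of-sight
    velocity component is V_ORBIT * sin(theta) * cos(phase)."""
    return f0 * (1.0 + (V_ORBIT / C) * math.sin(theta) * math.cos(phase))

f0 = 1.0  # Hz, a midband source
peak = doppler_shift(f0, math.pi / 2, 0.0)   # source in the orbital plane
print(peak - f0)  # maximum fractional shift is v/c ~ 1e-4
```

Tracking how this modulation evolves over the months a midband source remains in band is what lets a single baseline recover the sky angles.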

  9. Computationally efficient method for localizing the spiral rotor source using synthetic intracardiac electrograms during atrial fibrillation.

    PubMed

    Shariat, M H; Gazor, S; Redfearn, D

    2015-08-01

    Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is an extremely costly public health problem. Catheter-based ablation is a common minimally invasive procedure to treat AF. Contemporary mapping methods are highly dependent on the accuracy of anatomic localization of rotor sources within the atria. In this paper, using simulated atrial intracardiac electrograms (IEGMs) during AF, we propose a computationally efficient method for localizing the tip of the electrical rotor with an Archimedean/arithmetic spiral wavefront. The proposed method deploys the locations of electrodes of a catheter and their IEGMs activation times to estimate the unknown parameters of the spiral wavefront including its tip location. The proposed method is able to localize the spiral as soon as the wave hits three electrodes of the catheter. Our simulation results show that the method can efficiently localize the spiral wavefront that rotates either clockwise or counterclockwise.

  10. Kalman Filters for Time Delay of Arrival-Based Source Localization

    NASA Astrophysics Data System (ADS)

    Klee, Ulrich; Gehrig, Tobias; McDonough, John

    2006-12-01

    In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
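    The predict/correct loop underlying the approach can be sketched with a scalar Kalman filter; this is a deliberately simplified stand-in for the paper's extended Kalman filter, whose state is the 3D speaker position and whose observations are the TDoAs, and the noise settings below are made up.

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x=0.0, p=1.0):
    """Scalar Kalman filter for a random-walk state: q is the process
    noise variance, r the measurement noise variance."""
    estimates = []
    for z in measurements:
        p += q                      # predict: state uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with the innovation z - x
        p *= (1 - k)                # posterior uncertainty shrinks
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]
est = kalman_1d(noisy)
print(est[-1])   # converges toward the true value 1.0
```

In the paper's formulation the same loop runs with vector states and a nonlinear TDoA observation model, so the gain involves the Jacobian of the observation function (hence "extended").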

  11. About the inevitable compromise between spatial resolution and accuracy of strain measurement for bone tissue: a 3D zero-strain study.

    PubMed

    Dall'Ara, E; Barber, D; Viceconti, M

    2014-09-22

    The accurate measurement of local strain is necessary to study bone mechanics and to validate micro computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy. However, nothing has been reported so far for µCT based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for prediction of local zero-strains in bovine cortical and trabecular bone samples. The accuracy and precision were analyzed by comparing virtually displaced scans, repeated scans without any repositioning of the sample in the scanner, and repeated scans with repositioning of the samples. The analysis showed that both precision and accuracy errors decrease with increasing size of the region analyzed, following power laws. The main source of error was found to be the intrinsic noise of the images, compared to the other sources investigated. The results, once extrapolated to the larger regions of interest typically used in the literature, were in most cases better than those previously reported. For a nodal spacing equal to 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Resolution and quantification accuracy enhancement of functional delay and sum beamforming for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; Shen, Linbang

    2017-05-01

    Functional delay and sum (FDAS) is a novel beamforming algorithm introduced for three-dimensional (3D) acoustic source identification with solid spherical microphone arrays. Capable of offering significantly attenuated sidelobes at high computational speed, the algorithm promises to play an important role in interior acoustic source identification. However, it presents some intrinsic imperfections, specifically poor spatial resolution and low quantification accuracy. This paper focuses on overcoming these imperfections with ridge detection (RD) and the deconvolution approach for the mapping of acoustic sources (DAMAS). The suggested methods are referred to as FDAS+RD and FDAS+RD+DAMAS. Both computer simulations and experiments are utilized to validate their effects. Several interesting conclusions have emerged: (1) FDAS+RD and FDAS+RD+DAMAS can both dramatically ameliorate FDAS's spatial resolution while inheriting its advantages. (2) Compared to the conventional DAMAS, FDAS+RD+DAMAS enjoys the same super spatial resolution, stronger sidelobe attenuation capability and a more than two hundred times faster speed. (3) FDAS+RD+DAMAS can effectively overcome FDAS's low quantification accuracy. Whether or not the focus distance equals the distance from the source to the array center, it can quantify the source average pressure contribution accurately. This study will be of great significance to the accurate and quick localization and quantification of acoustic sources in cabin environments.
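    The delay-and-sum principle that FDAS builds on can be shown in a few lines: align each sensor signal by the propagation delay from a candidate focus point and sum; the summed trace has maximum energy when the focus matches the true source. The integer-sample delays and pulse signals below are made up, and this sketch omits the spherical-array steering and the "functional" weighting that distinguish FDAS.

```python
def das_power(signals, delays):
    """Delay each sensor signal by its candidate delay (in samples),
    sum across sensors, and return the energy of the beamformed trace."""
    n = min(len(s) - d for s, d in zip(signals, delays))
    beam = [sum(s[d + i] for s, d in zip(signals, delays)) for i in range(n)]
    return sum(v * v for v in beam)

# a unit pulse reaching three sensors with different true delays
true_delays = [0, 3, 1]
signals = []
for d in true_delays:
    s = [0.0] * 12
    s[2 + d] = 1.0
    signals.append(s)

# focusing with the correct delays aligns the pulses coherently
print(das_power(signals, true_delays), das_power(signals, [0, 0, 0]))  # 9.0 3.0
```

Scanning the candidate focus over a 3D grid and mapping this output power is what produces the source maps whose resolution and sidelobes the paper then improves.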

  13. Acoustic localization of triggered lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, Rene O.; Johnson, Jeffrey B.; Edens, Harald E.; Thomas, Ronald J.; Rison, William

    2011-05-01

    We use acoustic (3.3-500 Hz) arrays to locate local (<20 km) thunder produced by triggered lightning in the Magdalena Mountains of central New Mexico. The locations of the thunder sources are determined by the array back azimuth and the elapsed time since discharge of the lightning flash. We compare the acoustic source locations with those obtained by the Lightning Mapping Array (LMA) from Langmuir Laboratory, which is capable of accurately locating the lightning channels. To estimate the location accuracy of the acoustic array we performed Monte Carlo simulations and measured the distance (nearest neighbors) between acoustic and LMA sources. For close sources (<5 km) the mean nearest-neighbors distance was 185 m compared to 100 m predicted by the Monte Carlo analysis. For far distances (>6 km) the error increases to 800 m for the nearest neighbors and 650 m for the Monte Carlo analysis. This work shows that thunder sources can be accurately located using acoustic signals.
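    The geometry of this approach, a back azimuth from the array plus the acoustic travel time since the flash, can be sketched as a toy calculation (hypothetical values and a fixed sound speed; not the study's processing chain):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, a typical near-surface value; varies with temperature

def thunder_source_location(azimuth_deg, elapsed_s, array_x=0.0, array_y=0.0):
    """Locate a thunder source from the array back azimuth (degrees east of
    north) and the time elapsed since the lightning discharge."""
    r = SPEED_OF_SOUND * elapsed_s  # range from acoustic travel time
    az = math.radians(azimuth_deg)
    x = array_x + r * math.sin(az)  # east offset from the array
    y = array_y + r * math.cos(az)  # north offset from the array
    return x, y

# A source heard 10 s after the flash at a back azimuth of 90 deg (due east):
x, y = thunder_source_location(90.0, 10.0)  # range 3430 m, so (x, y) ≈ (3430, 0)
```

The same range-from-delay idea underlies the nearest-neighbor comparison against LMA sources, since both yield points in a common coordinate frame.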

  14. The Joint Adaptive Kalman Filter (JAKF) for Vehicle Motion State Estimation.

    PubMed

    Gao, Siwei; Liu, Yanheng; Wang, Jian; Deng, Weiwen; Oh, Heekuck

    2016-07-16

    This paper proposes a multi-sensor Joint Adaptive Kalman Filter (JAKF) that extends innovation-based adaptive estimation (IAE) to estimate the motion state of moving vehicles ahead. JAKF treats Lidar and Radar data as the sources of the local filters, which adaptively adjust the measurement noise variance-covariance (V-C) matrix 'R' and the system noise V-C matrix 'Q'. Then, the global filter uses R to calculate the information allocation factor 'β' for data fusion. Finally, the global filter completes optimal data fusion and feeds back to the local filters to improve their measurement accuracy. Extensive simulation and experimental results show that the JAKF has better adaptive ability and fault tolerance. JAKF bridges the accuracy gap between the various sensors to improve the overall filtering effectiveness. If any sensor breaks down, the filtered results of JAKF can still maintain a stable convergence rate. Moreover, the JAKF outperforms the conventional Kalman filter (CKF) and the innovation-based adaptive Kalman filter (IAKF) with respect to the accuracy of displacement, velocity, and acceleration, respectively.
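    The innovation-based adaptive estimation (IAE) that JAKF extends can be illustrated with a minimal one-dimensional Kalman filter whose measurement-noise variance R is re-estimated from a sliding window of innovations. The model, parameters, and window length below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def iae_kalman_1d(zs, q=1e-3, r0=1.0, window=10):
    """1-D Kalman filter with innovation-based adaptive estimation (IAE):
    R is re-estimated from a sliding window of innovations.
    Toy model: x_k = x_{k-1} + w_k, z_k = x_k + v_k."""
    x, p, r = 0.0, 1.0, r0
    innovations, estimates = [], []
    for z in zs:
        p += q                        # predict (state transition F = 1)
        nu = z - x                    # innovation
        innovations.append(nu)
        if len(innovations) >= window:
            c = np.mean(np.square(innovations[-window:]))
            r = max(c - p, 1e-6)      # IAE update: R ≈ C_nu - H P H^T
        k = p / (p + r)               # Kalman gain
        x += k * nu                   # update state estimate
        p *= (1 - k)                  # update error covariance
        estimates.append(x)
    return np.array(estimates)

# Noisy constant signal: the estimate converges toward the true value 5.0
rng = np.random.default_rng(0)
zs = 5.0 + 0.5 * rng.standard_normal(200)
est = iae_kalman_1d(zs)
```

In JAKF this adaptation runs per local filter (Lidar, Radar) before the global fusion step combines them.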

  15. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  16. System and method for bullet tracking and shooter localization

    DOEpatents

    Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA

    2011-06-21

    A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares problem for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.

  17. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the position of the sound source heard through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.

  18. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    PubMed

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Terrain clutter simulation using physics-based scattering model and digital terrain profile data

    NASA Astrophysics Data System (ADS)

    Park, James; Johnson, Joel T.; Ding, Kung-Hau; Kim, Kristopher; Tenbarge, Joseph

    2015-05-01

    Localization of a wireless capsule endoscope finds many clinical applications, from diagnostics to therapy. There are potentially two approaches to electromagnetic-wave-based localization: (a) signal-propagation-model-based localization using a priori information about the person's dielectric channels, and (b) recently developed microwave-imaging-based localization without using any a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of a variety of frequencies and signal-to-noise ratios for localization accuracy. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom provides an electromagnetic model with the electrical properties (relative permittivity and conductivity) of the internal parts of the body and can serve as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. At high frequencies, however, the imaging algorithm is able to image only the outer boundaries of the tissues due to low penetration depth, as higher frequency means higher attenuation. Furthermore, a recently developed localization method based on microwave imaging is used for estimating the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of frequency becomes critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.
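    The CDF-based evaluation mentioned above can be sketched in a few lines: given a set of localization errors from repeated trials, the empirical CDF yields the error bound covering a given fraction of trials. The error values below are hypothetical, not the study's data:

```python
import numpy as np

def empirical_cdf(errors):
    """Empirical CDF of localization errors: sorted errors and the fraction
    of trials at or below each value."""
    e = np.sort(np.asarray(errors, dtype=float))
    p = np.arange(1, e.size + 1) / e.size
    return e, p

def error_at_percentile(errors, q):
    """Smallest error bound that covers a fraction q of the trials."""
    e, p = empirical_cdf(errors)
    return e[np.searchsorted(p, q)]

# Hypothetical localization errors (mm) from repeated trials
errors = [2.1, 3.4, 1.8, 5.0, 2.9, 4.2, 3.1, 2.5]
e90 = error_at_percentile(errors, 0.9)   # 90% of trials fall within 5.0 mm
e50 = error_at_percentile(errors, 0.5)   # median error is 2.9 mm
```

Comparing such curves across frequencies or signal-to-noise ratios is one common way to rank localization methods.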

  20. Localizing gravitational wave sources with single-baseline atom interferometers

    DOE PAGES

    Graham, Peter W.; Jung, Sunghoon

    2018-01-31

    Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. In this paper, we show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect since these are the highest frequencies at which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single baseline orbits around the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.

  1. Localizing gravitational wave sources with single-baseline atom interferometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Peter W.; Jung, Sunghoon

    Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. In this paper, we show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect since these are the highest frequencies at which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single baseline orbits around the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.

  2. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, a local tie accuracy at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources will be investigated so that they can be realistically assessed and considered. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus help assess the accuracy of space-geodetic techniques.
Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  3. Fiber optic distributed temperature sensing for fire source localization

    NASA Astrophysics Data System (ADS)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fibers were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real scene of fire. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors with respect to predicting the exact room coordinates of the fire source caused by the uneven ceiling were observed. Two mathematical methods (smoothing recorded temperature curves and finding temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.

  4. Localization of synchronous cortical neural sources.

    PubMed

    Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc

    2013-03-01

    Neural synchronization is a key mechanism underlying a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.

  5. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  6. Adaptive Environmental Source Localization and Tracking with Unknown Permittivity and Path Loss Coefficients †

    PubMed Central

    Fidan, Barış; Umay, Ilknur

    2015-01-01

    Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which are the permittivity and path loss coefficients for the electromagnetic signal case. Thus, accurate estimation of these coefficients has significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated into a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law, to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The efficiency of the proposed adaptive localization and tracking techniques is both mathematically analysed and verified via simulation experiments. PMID:26690441

  7. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Lai, Yuan; He, Bin

    2005-01-01

    It is of importance to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models to represent the head volume conductor. Investigating the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localization was discussed and examples were given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital regions. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.

  8. The impact of acquisition times on the accuracy of microwave soil moisture retrievals over the contiguous U.S.

    USDA-ARS?s Scientific Manuscript database

    Satellite-derived soil moisture products have become an important data source for the study of land surface processes and related applications. For satellites with sun-synchronous orbits, these products are typically derived separately for ascending and descending overpasses with different local acq...

  9. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  10. Speech Understanding and Sound Source Localization by Cochlear Implant Listeners Using a Pinna-Effect Imitating Microphone and an Adaptive Beamformer.

    PubMed

    Dorman, Michael F; Natale, Sarah; Loiselle, Louise

    2018-03-01

    Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. 
In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet. American Academy of Audiology
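    The dependent measure for localization, root-mean-square error across trials, reduces to a one-line computation; the loudspeaker angles and responses below are hypothetical, not the study's data:

```python
import math

def rms_localization_error(responses_deg, targets_deg):
    """Root-mean-square localization error (degrees) across trials."""
    sq = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical trials on a 180-degree arc (13 loudspeakers would give 15-degree spacing)
targets = [0.0, 15.0, 30.0, 45.0]
responses = [5.0, 15.0, 20.0, 50.0]
err = rms_localization_error(responses, targets)  # sqrt((25+0+100+25)/4) ≈ 6.12
```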

  11. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.

    PubMed

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.
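    The Precision and Recall metrics defined above can be sketched for toy one-dimensional source positions; the matching tolerance `tol` is a hypothetical parameter, not taken from the paper:

```python
def precision_recall(reconstructed, simulated, tol=10.0):
    """Precision: fraction of reconstructed activity matching a simulated
    source. Recall: fraction of simulated sources that were reconstructed.
    Sources are 1-D positions here for simplicity; tol is an assumed
    matching radius (mm)."""
    matched_rec = sum(1 for r in reconstructed
                      if any(abs(r - s) <= tol for s in simulated))
    matched_sim = sum(1 for s in simulated
                      if any(abs(r - s) <= tol for r in reconstructed))
    precision = matched_rec / len(reconstructed) if reconstructed else 0.0
    recall = matched_sim / len(simulated) if simulated else 0.0
    return precision, recall

# Two simulated sources; the reconstruction found one plus a spurious peak
p, r = precision_recall([12.0, 80.0], [10.0, 50.0])
# p = 0.5 (one of two reconstructed peaks is near a source), r = 0.5
```

An algorithm that smears activity over many voxels scores high Recall but low Precision, which is exactly the trade-off the comparison between sLORETA and LORETA variants probes.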

  12. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources

    PubMed Central

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000

  13. Methods of localization of Lamb wave sources on thin plates

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2015-04-01

    Signal localization techniques are ubiquitous in both industry and academic communities. We propose a new localization method on plates which is based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a Lamb wave propagation direct model, and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of the technique to the classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. Furthermore, we measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, geometry, and signal-to-noise ratio, and we show that this very versatile technique works better than the classical ones over sampling rates of 100 kHz-1 MHz. The experimental setup consists of a glass plate with dimensions of 80 cm x 40 cm and a thickness of 1 cm. Signals generated by a wooden hammer hit or a steel ball hit are captured by the sensors placed at different locations on the plate. Numerical simulations are done using a dispersive far-field approximation of plate waves. Signals are generated using a Hertzian loading over the plate. Using imaginary sources outside the plate boundaries, the effect of reflections is also included. This proposed method can be modified to be implemented in 3D environments, and to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
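    The inverted-source-amplitude idea can be sketched as follows: for each candidate location, invert one source amplitude per sensor from the measured amplitudes, and pick the location where those inverted amplitudes agree best. The simple geometric-spreading law A ∝ S/√r used below is an illustrative assumption (real Lamb waves also disperse and damp); the plate geometry follows the abstract:

```python
import numpy as np

def localize_by_amplitude(sensors, amps, grid):
    """For each candidate grid point, invert a source amplitude per sensor
    assuming geometric spreading A = S / sqrt(r), then pick the point where
    the inverted amplitudes agree best (smallest relative spread)."""
    best, best_cost = None, np.inf
    for g in grid:
        r = np.linalg.norm(sensors - g, axis=1)
        S = amps * np.sqrt(r)                 # inverted source amplitudes
        cost = S.std() / S.mean()             # relative disagreement
        if cost < best_cost:
            best, best_cost = g, cost
    return best

# 80 cm x 40 cm plate with sensors at the corners, as in the experiment
sensors = np.array([[0.0, 0.0], [0.8, 0.0], [0.0, 0.4], [0.8, 0.4]])
src = np.array([0.3, 0.1])
amps = 1.0 / np.sqrt(np.linalg.norm(sensors - src, axis=1))  # noise-free data
xs, ys = np.meshgrid(np.linspace(0.0, 0.8, 41), np.linspace(0.0, 0.4, 21))
grid = np.column_stack([xs.ravel(), ys.ravel()])
est = localize_by_amplitude(sensors, amps, grid)
```

    With noise-free synthetic amplitudes the grid point at the true source makes all inverted amplitudes identical, so the relative spread vanishes there.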

  14. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of the location of transmitters using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
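    The circular (TOA) linearization mentioned above can be sketched in a few lines: subtracting the first range equation from the others eliminates the quadratic term in the unknown position, leaving a linear least-squares problem. The receiver layout and acoustic propagation speed below are illustrative choices; TDOA is handled analogously with an additional unknown:

```python
import numpy as np

def toa_localize(receivers, toas, c=343.0):
    """Circular (TOA) localization by linearization: subtracting the first
    range equation removes the quadratic |x|^2 term, leaving the linear
    system 2*(r1 - ri) . x = di^2 - d1^2 - |ri|^2 + |r1|^2."""
    r = np.asarray(receivers, float)
    d = c * np.asarray(toas, float)                 # measured ranges
    A = 2.0 * (r[0] - r[1:])
    b = d[1:]**2 - d[0]**2 - np.sum(r[1:]**2, axis=1) + np.sum(r[0]**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

receivers = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]]
source = np.array([3.0, 4.0])
toas = np.linalg.norm(np.array(receivers) - source, axis=1) / 343.0
estimate = toa_localize(receivers, toas)
```

    With noiseless arrival times the linear system is consistent and the estimate is exact; with noisy measurements the least-squares solution absorbs the residuals, which is why the review compares this family against maximum-likelihood refinements.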

  15. Single-Sided Deafness: Impact of Cochlear Implantation on Speech Perception in Complex Noise and on Auditory Localization Accuracy.

    PubMed

    Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias

    2017-12-01

    To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness, after intervention with a cochlear implant (CI) in the deaf ear. Nonrandomized, open, prospective patient series. Tertiary referral university hospital. Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear, who received a CI. All patients were experienced CI users. Unilateral cochlear implantation. Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness. Acoustic stimuli were radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. In the aided condition, results show slightly improved speech reception scores compared with the unaided condition in most of the patients. For 8 of the 11 subjects, SRT was improved between 0.37 and 1.70 dB. Three of the 11 subjects showed deteriorations between 1.22 and 3.24 dB SRT. Median localization error decreased significantly by 12.9 degrees compared with the unaided condition. CI in single-sided deafness is an effective treatment to improve auditory localization accuracy. Speech reception in complex noise conditions is improved to a lesser extent, in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems to be mandatory to advance speech perception in demanding listening situations in SSD patients.

  16. Development of a microbial contamination susceptibility model for private domestic groundwater sources

    NASA Astrophysics Data System (ADS)

    Hynds, Paul D.; Misstear, Bruce D.; Gill, Laurence W.

    2012-12-01

    Groundwater quality analyses were carried out on samples from 262 private sources in the Republic of Ireland during the period from April 2008 to November 2010, with microbial quality assessed by thermotolerant coliform (TTC) presence. Assessment of potential microbial contamination risk factors was undertaken at all sources, and local meteorological data were also acquired. Overall, 28.9% of wells tested positive for TTC, with risk analysis indicating that source type (i.e., borehole or hand-dug well), local bedrock type, local subsoil type, groundwater vulnerability, septic tank setback distance, and 48 h antecedent precipitation were all significantly associated with TTC presence (p < 0.05). A number of source-specific design parameters were also significantly associated with bacterial presence. Hierarchical logistic regression with stepwise parameter entry was used to develop a private well susceptibility model, with the final model exhibiting a mean predictive accuracy of >80% (TTC present or absent) when compared to an independent validation data set. Model hierarchies of primary significance are source design (20%), septic tank location (11%), hydrogeological setting (10%), and antecedent 120 h precipitation (2%). Sensitivity analysis shows that the probability of contamination is highly sensitive to septic tank setback distance, with probability increasing linearly with decreases in setback distance. Likewise, contamination probability was shown to increase with increasing antecedent precipitation. Results show that while groundwater vulnerability category is a useful indicator of aquifer susceptibility to contamination, its suitability with regard to source contamination is less clear. The final model illustrates that both localized (well-specific) and generalized (aquifer-specific) contamination mechanisms are involved in contamination events, with localized bypass mechanisms dominant. The susceptibility model developed here could be employed in the appropriate location, design, construction, and operation of private groundwater wells, thereby decreasing the contamination risk, and hence health risk, associated with these sources.
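    The logistic form of such a susceptibility model is easy to sketch. The predictors and coefficient values below are purely hypothetical illustrations (they are not the fitted values from this study), chosen only so that contamination probability rises as setback distance shrinks and as antecedent precipitation grows, as the sensitivity analysis describes:

```python
import numpy as np

def contamination_probability(intercept, coeffs, features):
    """Logistic model: P(TTC present) = 1 / (1 + exp(-(b0 + b . x)))."""
    z = intercept + float(np.dot(coeffs, features))
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical predictors: [septic setback distance (m),
# 48 h antecedent precipitation (mm), hand-dug well indicator]
b = np.array([-0.08, 0.05, 1.2])      # illustrative coefficients only
p_near = contamination_probability(0.5, b, [10.0, 20.0, 1.0])  # 10 m setback
p_far = contamination_probability(0.5, b, [50.0, 20.0, 1.0])   # 50 m setback
```

    A negative setback coefficient reproduces the reported behaviour: the 10 m well shows a markedly higher contamination probability than the 50 m well under identical rainfall.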

  17. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on a reconstructed magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide a "smooth" reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of BC, parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it was demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.

  18. Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline

    NASA Technical Reports Server (NTRS)

    Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore

    2017-01-01

    We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1 deg to 30.3 deg. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N_net), reaching an 85% and 95% match between the reconstructed and actual waveform above S/N_net approx. = 20 and S/N_net approx. = 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is only limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.
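    The "match" figure of merit quoted above is, in its simplest time-domain form, a normalized inner product between the reconstructed and injected waveforms. The sine-Gaussian test signal and the perturbation below are illustrative (a real pipeline computes the noise-weighted overlap in the frequency domain and maximizes over time and phase shifts):

```python
import numpy as np

def match(h1, h2):
    """Zero-lag normalized inner product between two sampled waveforms;
    equals 1 for identical waveforms up to an overall amplitude scale."""
    h1 = np.asarray(h1, float)
    h2 = np.asarray(h2, float)
    return np.dot(h1, h2) / np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))

t = np.linspace(0.0, 1.0, 1000)
injected = np.exp(-((t - 0.5) / 0.1) ** 2) * np.sin(2 * np.pi * 20 * t)  # sine-Gaussian
recovered = 0.9 * injected + 0.05 * np.sin(2 * np.pi * 55 * t)           # imperfect recovery
m = match(injected, recovered)
```

    The small out-of-band residual pulls the match slightly below unity, mirroring how BW's match degrades when parts of the signal are lost to noise.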

  19. Localization of virtual sound at 4 Gz.

    PubMed

    Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L

    2005-02-01

    Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.

  20. Microwave tunable laser source: A stable, precision tunable heterodyne local oscillator

    NASA Technical Reports Server (NTRS)

    Sachse, G. W.

    1980-01-01

    The development and capabilities of a tunable laser source utilizing a wideband electro-optic modulator and a CO2 laser are described. The precision tunability and high stability of the device are demonstrated with examples of laboratory spectroscopy. Heterodyne measurements are also presented to demonstrate the performance of the laser source as a heterodyne local oscillator. With the use of five CO2 isotope lasers and the 8 to 18 GHz sideband offset tunability of the modulator, calculations indicate that 50 percent spectral coverage in the 9 to 12 micron region is achievable. The wavelength accuracy and stability of this laser source are limited by the CO2 laser and are more than adequate for the measurement of narrow Doppler-broadened line profiles. The room-temperature operating capability and the programmability of the microwave tunable laser source are attractive features for its in-the-field implementation. Although heterodyne measurements indicated some S/N degradation when using the device as a local oscillator, there does not appear to be any fundamental limitation to the heterodyne efficiency of this laser source. Through the use of a lower noise-figure traveling wave tube amplifier and optical matching of the output beam with the photomixer, a substantial increase in the heterodyne S/N is expected.

  1. Single-source PPG-based local pulse wave velocity measurement: a potential cuffless blood pressure estimation technique.

    PubMed

    Nabeel, P M; Jayaraj, J; Mohanasankar, S

    2017-11-30

    A novel photoplethysmograph probe employing dual photodiodes excited using a single infrared light source was developed for local pulse wave velocity (PWV) measurement. The potential use of the proposed system in cuffless blood pressure (BP) techniques was demonstrated. Initial validation measurements were performed on a phantom using a reference method. Further, an in vivo study was carried out in 35 volunteers (age = 28 ± 4.5 years). The carotid local PWV, carotid-to-finger pulse transit time (PTT_R), and pulse arrival time at the carotid artery (PAT_C) were simultaneously measured. Beat-by-beat variation of the local PWV due to BP changes was studied during post-exercise relaxation. The cuffless BP estimation accuracy of local PWV, PAT_C, and PTT_R was investigated based on inter- and intra-subject models with best-case calibration. The accuracy of the proposed system, hardware inter-channel delay (<0.1 ms), repeatability (beat-to-beat variation = 4.15%-11.38%), and reproducibility of measurement (r = 0.96) were examined. For the phantom experiment, the measured PWV values did not differ by more than 0.74 m/s compared to the reference PWV. Better correlation was observed between brachial BP parameters and local PWV (r = 0.74-0.78) compared to PTT_R (|r| = 0.62-0.67) and PAT_C (|r| = 0.52-0.68). Cuffless BP estimation using local PWV was better than PTT_R and PAT_C with population-specific models. More accurate estimates of arterial BP levels were achieved using local PWV via subject-specific models (root-mean-square error <= 2.61 mmHg). The proposed system thus provides a reliable means of cuffless BP measurement and local estimation of arterial wall properties.
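    Local PWV from a dual-photodiode probe reduces to dividing the known photodiode spacing by the pulse transit time between the two channels. The synthetic Gaussian pulses, sampling rate, and 25 mm spacing below are illustrative assumptions; the transit time is taken from the cross-correlation peak:

```python
import numpy as np

def local_pwv(sig_a, sig_b, fs, spacing_m):
    """Local pulse wave velocity = spacing / pulse transit time, with the
    transit time estimated from the cross-correlation peak of the two
    PPG channels (sig_b is the distal, delayed channel)."""
    a = np.asarray(sig_a, float) - np.mean(sig_a)
    b = np.asarray(sig_b, float) - np.mean(sig_b)
    xc = np.correlate(b, a, mode="full")
    lag = np.argmax(xc) - (len(a) - 1)   # samples by which b trails a
    return spacing_m / (lag / fs)

fs = 10000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
proximal = np.exp(-((t - 0.3) / 0.02) ** 2)          # proximal pulse
distal = np.exp(-((t - 0.3042) / 0.02) ** 2)         # distal pulse, 4.2 ms later
pwv = local_pwv(proximal, distal, fs, spacing_m=0.025)  # 25 mm spacing
```

    The sub-millisecond transit times involved are why the abstract stresses the hardware inter-channel delay (<0.1 ms): any systematic channel skew directly biases the estimated PWV.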

  2. Head movement compensation in real-time magnetoencephalographic recordings.

    PubMed

    Little, Graham; Boe, Shaun; Bardouille, Timothy

    2014-01-01

    Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real-time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
    • Data acquisition
    • Head position estimation
    • Source localization
    • Real-time source estimation
    This work explains the technical details and validates each of these steps.

  3. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
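    The PCA step described above can be sketched directly: the concurrent multi-sensor outputs are projected onto their top principal components before field-position mapping. The simulated 9-sensor array, hidden 1-D actuator position, and noise level below are illustrative assumptions, and the ANN mapping stage is omitted:

```python
import numpy as np

def pca_reduce(X, k):
    """Project multi-sensor measurements X (n_samples x n_sensors) onto the
    top-k principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(200, 1))               # hidden 1-D position
# 9 sensors responding (approximately linearly) to position, plus noise
X = pos @ rng.normal(size=(1, 9)) + 0.01 * rng.normal(size=(200, 9))
Z, components = pca_reduce(X, k=2)
```

    Because the nine sensor channels here are driven by a single translational degree of freedom, almost all of the variance collapses into the first principal component, which is exactly what makes the reduced space cheap to map with an ANN.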

  4. Photoacoustic-based sO2 estimation through excised bovine prostate tissue with interstitial light delivery.

    PubMed

    Mitcham, Trevor; Taghavi, Houra; Long, James; Wood, Cayla; Fuentes, David; Stefan, Wolfgang; Ward, John; Bouchard, Richard

    2017-09-01

    Photoacoustic (PA) imaging is capable of probing blood oxygen saturation (sO2), which has been shown to correlate with tissue hypoxia, a promising cancer biomarker. However, wavelength-dependent local fluence changes can compromise sO2 estimation accuracy in tissue. This work investigates using PA imaging with interstitial irradiation and local fluence correction to assess precision and accuracy of sO2 estimation of blood samples through ex vivo bovine prostate tissue ranging from 14% to 100% sO2. Study results for bovine blood samples at distances up to 20 mm from the irradiation source show that local fluence correction improved average sO2 estimation error from 16.8% to 3.2% and maintained an average precision of 2.3% when compared to matched CO-oximeter sO2 measurements. This work demonstrates the potential for future clinical translation of using fluence-corrected and interstitially driven PA imaging to accurately and precisely assess sO2 at depth in tissue with high resolution.
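    Fluence-corrected sO2 estimation is, at its core, a linear spectral unmixing after dividing out the local fluence at each wavelength. The two-wavelength extinction matrix below is purely illustrative (real values come from published hemoglobin tables), and the forward simulation deliberately uses unequal fluence at the two wavelengths:

```python
import numpy as np

# Illustrative (not tabulated) extinction coefficients, columns = [HbO2, Hb]
E = np.array([[1.0, 3.0],    # hypothetical wavelength 1
              [2.5, 1.2]])   # hypothetical wavelength 2

def estimate_so2(pa_amplitudes, fluence):
    """Fluence-corrected linear unmixing: divide PA amplitudes by the local
    fluence per wavelength, solve E @ [C_HbO2, C_Hb] = mu_a, and return
    sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    mu_a = np.asarray(pa_amplitudes, float) / np.asarray(fluence, float)
    c, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
    return float(c[0] / c.sum())

# Forward-simulate a sample at 80% sO2 with wavelength-dependent fluence
c_true = np.array([0.8, 0.2])
fluence = np.array([1.0, 0.6])
pa = (E @ c_true) * fluence
so2 = estimate_so2(pa, fluence)
```

    Skipping the division by `fluence` here biases the recovered saturation, which is the wavelength-dependent fluence error the paper corrects for.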

  5. Quantitative evaluation of software packages for single-molecule localization microscopy.

    PubMed

    Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael

    2015-08-01

    The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
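    Two of the criteria above, detection rate and accuracy, can be sketched with a simple matching procedure: pair each detection with its nearest unmatched ground-truth molecule within a tolerance, then report a Jaccard index and the RMSE of matched pairs. The greedy matching, tolerance, and toy coordinates below are illustrative assumptions, not the challenge's exact protocol:

```python
import numpy as np

def evaluate_localizations(detected, truth, tol=250.0):
    """Greedy nearest-neighbour matching of detections to ground-truth
    molecules within `tol` (same units as coordinates, e.g. nm); returns
    the Jaccard detection index and the RMSE of matched positions."""
    unmatched = [np.asarray(t, float) for t in truth]
    tp, sq_errs = 0, []
    for d in detected:
        if not unmatched:
            break
        dists = [np.linalg.norm(np.asarray(d, float) - t) for t in unmatched]
        j = int(np.argmin(dists))
        if dists[j] <= tol:
            tp += 1
            sq_errs.append(dists[j] ** 2)
            unmatched.pop(j)
    fp = len(detected) - tp
    fn = len(truth) - tp
    jaccard = tp / (tp + fp + fn)
    rmse = float(np.sqrt(np.mean(sq_errs))) if sq_errs else float("inf")
    return jaccard, rmse

truth = [(100.0, 100.0), (500.0, 500.0), (900.0, 300.0)]
detected = [(110.0, 95.0), (505.0, 498.0), (2000.0, 2000.0)]
jaccard, rmse = evaluate_localizations(detected, truth)
```

    In this toy case one molecule is missed and one detection is spurious, so the Jaccard index is 0.5 while the RMSE reflects only the two correctly matched localizations.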

  6. Time-distance domain transformation for Acoustic Emission source localization in thin metallic plates.

    PubMed

    Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel

    2016-05-01

    Acoustic Emission, as used in Non-Destructive Testing, is focused on the analysis of elastic waves propagating in mechanical structures. Any information carried by the generated acoustic waves, subsequently recorded by a set of transducers, allows the integrity of these structures to be determined. It is clear that material properties and geometry strongly impact the result. In this paper a method for Acoustic Emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through phase-shifting of the investigated waveforms in order to acquire the most accurate output, allowing source-sensor distance estimation using a single transducer. The accuracy and robustness of the above process are also investigated. This includes the study of the influence of the Young's modulus value and numerical parameters on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is achieved. The method is investigated analytically, numerically and experimentally. The latter involves both laboratory and large-scale industrial tests. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Localization of focused-ultrasound beams in a tissue phantom, using remote thermocouple arrays.

    PubMed

    Hariharan, Prasanna; Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Nagaraja, Srinidhi; Myers, Matthew R

    2014-12-01

    In focused-ultrasound procedures such as vessel cauterization or clot lysis, targeting accuracy is critical. To investigate the targeting accuracy of the focused-ultrasound systems, tissue phantoms embedded with thermocouples can be employed. This paper describes a method that utilizes an array of thermocouples to localize the focused ultrasound beam. All of the thermocouples are located away from the beam, so that thermocouple artifacts and sensor interference are minimized. Beam propagation and temperature rise in the phantom are simulated numerically, and an optimization routine calculates the beam location that produces the best agreement between the numerical temperature values and those measured with thermocouples. The accuracy of the method was examined as a function of the array characteristics, including the number of thermocouples in the array and their orientation. For exposures with a 3.3-MHz source, the remote-thermocouple technique was able to predict the focal position to within 0.06 mm. Once the focal location is determined using the localization method, temperatures at desired locations (including the focus) can be estimated from remote thermocouple measurements by curve fitting an analytical solution to the heat equation. Temperature increases in the focal plane were predicted to within 5% agreement with measured values using this method.
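    The optimization step described above can be sketched as a grid search for the focus location whose modeled temperature field best matches the remote thermocouple readings in a least-squares sense. The Gaussian radial model, thermocouple layout, and grid below are illustrative assumptions (the paper uses a full numerical beam-and-heating simulation):

```python
import numpy as np

def localize_focus(tc_positions, tc_temps, model, grid):
    """Grid-search the focus location minimizing the squared mismatch between
    modeled and measured thermocouple temperature rises; model(r) gives the
    predicted rise at radial distance r from the focus."""
    best, best_err = None, np.inf
    for g in grid:
        r = np.linalg.norm(tc_positions - g, axis=1)
        err = np.sum((model(r) - tc_temps) ** 2)
        if err < best_err:
            best, best_err = g, err
    return best

# Illustrative radial model of temperature rise (not the paper's simulation)
model = lambda r: 10.0 * np.exp(-((r / 2.0) ** 2))        # mm -> deg C
tcs = np.array([[3.0, 0.0], [0.0, 3.0], [-3.0, 0.0], [0.0, -3.0]])  # remote TCs
true_focus = np.array([0.5, -0.25])
temps = model(np.linalg.norm(tcs - true_focus, axis=1))   # synthetic readings
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 81), np.linspace(-1.0, 1.0, 81))
grid = np.column_stack([xs.ravel(), ys.ravel()])
est = localize_focus(tcs, temps, model, grid)
```

    Because every thermocouple sits away from the beam, each reading constrains the focus only through the model; with four sensors the mismatch surface has a single sharp minimum at the true focal position.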

  8. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI

    PubMed Central

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-01-01

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning. PMID:27007379
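    The KWNN coarse-estimation step can be sketched as follows: find the k reference fingerprints nearest to the observed RSSI vector and average their surveyed positions with inverse-distance weights. The 4-point radio map, RSSI values, and positions below are hypothetical illustrations:

```python
import numpy as np

def kwnn_locate(fingerprints, positions, rssi, k=3, eps=1e-6):
    """K Weighted Nearest Neighbor: pick the k reference fingerprints closest
    (in RSSI space) to the observation and average their known positions,
    weighted by inverse RSSI distance."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()
    return w @ positions[idx]

# Hypothetical radio map: RSSI (dBm) from 3 access points at 4 surveyed points
fps = np.array([[-40.0, -70.0, -60.0],
                [-70.0, -40.0, -60.0],
                [-60.0, -60.0, -40.0],
                [-50.0, -50.0, -50.0]])
pts = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0], [5.0, 3.0]])
est = kwnn_locate(fps, pts, np.array([-48.0, -52.0, -51.0]), k=3)
```

    The coarse KWNN fix is what the paper then refines by fusing camera features and orientation-sensor data, since fingerprint distance in RSSI space is only loosely related to geometric distance.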

  9. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI.

    PubMed

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-03-19

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning.

  10. Impact of MAC Delay on AUV Localization: Underwater Localization Based on Hyperbolic Frequency Modulation Signal

    PubMed Central

    2018-01-01

    Medium Access Control (MAC) delay, which occurs between the anchor nodes’ transmissions, is one of the error sources in underwater localization. In particular, in AUV localization, the MAC delay significantly degrades the ranging accuracy. Analysis based on the Cramér-Rao Lower Bound (CRLB) theoretically shows that the MAC delay significantly degrades the localization performance. This paper proposes underwater localization combined with multiple-access technology to decouple the localization performance from the MAC delay. Towards this goal, we adopt the hyperbolic frequency modulation (HFM) signal, which provides multiplexing based on its high temporal correlation property. Owing to the multiplexing ability of the HFM signal, the anchor nodes can transmit packets without MAC delay, i.e., simultaneous transmission is possible. In addition, the simulation results show that simultaneous transmission is not an optional communication scheme, but essential for the localization of mobile objects underwater. PMID:29373518

  11. Impact of MAC Delay on AUV Localization: Underwater Localization Based on Hyperbolic Frequency Modulation Signal.

    PubMed

    Kim, Sungryul; Yoo, Younghwan

    2018-01-26

    Medium Access Control (MAC) delay, which occurs between the anchor nodes' transmissions, is one of the error sources in underwater localization. In particular, in AUV localization, the MAC delay significantly degrades the ranging accuracy. Analysis based on the Cramér-Rao Lower Bound (CRLB) theoretically shows that the MAC delay significantly degrades the localization performance. This paper proposes underwater localization combined with multiple-access technology to decouple the localization performance from the MAC delay. Towards this goal, we adopt the hyperbolic frequency modulation (HFM) signal, which provides multiplexing based on its high temporal correlation property. Owing to the multiplexing ability of the HFM signal, the anchor nodes can transmit packets without MAC delay, i.e., simultaneous transmission is possible. In addition, the simulation results show that simultaneous transmission is not an optional communication scheme, but essential for the localization of mobile objects underwater.
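    The multiplexing property can be sketched by giving two anchors HFM sweeps in disjoint bands and matched-filtering the superposed reception. The band edges, delays, and sample rate below are illustrative assumptions; an HFM sweep has instantaneous frequency f(t) = f0 / (1 + k f0 t) with k = (1/f1 - 1/f0)/T:

```python
import numpy as np

def hfm(f0, f1, T, fs):
    """Hyperbolic frequency modulated sweep from f0 to f1 Hz over T seconds;
    the phase is the integral of f(t) = f0 / (1 + k*f0*t)."""
    t = np.arange(0.0, T, 1.0 / fs)
    k = (1.0 / f1 - 1.0 / f0) / T
    phase = (2.0 * np.pi / k) * np.log(1.0 + k * f0 * t)
    return np.sin(phase)

fs, T = 100_000.0, 0.01
s1 = hfm(5_000.0, 10_000.0, T, fs)    # anchor 1: hypothetical 5-10 kHz band
s2 = hfm(15_000.0, 25_000.0, T, fs)   # anchor 2: hypothetical 15-25 kHz band
n = len(s1)

# Both anchors transmit simultaneously, arriving with different delays
x = np.zeros(2 * n)
d1, d2 = 200, 230
x[d1:d1 + n] += s1
x[d2:d2 + n] += s2

# Matched filtering separates the overlapping packets per anchor
lag1 = int(np.argmax(np.correlate(x, s1, mode="valid")))
lag2 = int(np.argmax(np.correlate(x, s2, mode="valid")))
```

    Each matched filter recovers its own anchor's arrival time despite the full temporal overlap, which is the behaviour that lets the scheme drop the MAC delay entirely.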

  12. Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments

    NASA Astrophysics Data System (ADS)

    Tsuji, Daisuke; Suyama, Kenji

    This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. When using MUSIC, computation of the eigenvectors of the correlation matrix is required for the estimation, which often incurs a high computational cost. This becomes a crucial drawback for a moving source, because the estimation must be conducted at every observation time. Moreover, since the correlation matrix varies in its characteristics due to spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) method is applied to sequentially estimate the eigenvectors spanning the subspace. In the PAST, eigen-decomposition is not required, and therefore it is possible to reduce the computational cost. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
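    The MUSIC step that PAST accelerates can be sketched for a uniform linear array: eigendecompose the spatial correlation matrix, take the noise subspace, and scan steering vectors for directions nearly orthogonal to it. The 8-element half-wavelength-spaced array, single narrowband source at +20 degrees, and noise level below are illustrative assumptions (PAST itself, which replaces the eigendecomposition with a sequential update, is not shown):

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array (spacing d in
    wavelengths): 1 / ||E_n^H a(theta)||^2 over candidate angles, where
    E_n spans the noise subspace of the correlation matrix R."""
    n = R.shape[0]
    w, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = V[:, :n - n_sources]           # noise subspace
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(th))
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

rng = np.random.default_rng(1)
n, snaps, theta = 8, 200, np.deg2rad(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(theta))
s = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
noise = 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
X = np.outer(a, s) + noise
R = X @ X.conj().T / snaps              # sample spatial correlation matrix
angles = np.arange(-90, 91)
est = angles[np.argmax(music_spectrum(R, 1, angles))]
```

    The `eigh` call on every new correlation estimate is exactly the per-frame cost that makes batch MUSIC expensive for a moving source, motivating the sequential subspace tracking used in the paper.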

  13. Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Schubert, Siegfried D.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Numerous studies suggest that local feedback of surface evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote geographic sources of surface evaporation for precipitation, based on the implementation of three-dimensional constituent tracers of regional water vapor sources (termed water vapor tracers, WVT) in a general circulation model. The major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In the WVT approach, each tracer is associated with an evaporative source region for a prognostic three-dimensional variable that represents a partial amount of the total atmospheric water vapor. The physical processes that act on a WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be predicted within the model simulation, and can be validated against the model's prognostic water vapor. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional sources, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly. 
In most North American continental regions, the local source of precipitation is correlated with total precipitation. There is a general positive correlation between local evaporation and local precipitation, but it can be weaker because large evaporation can occur when precipitation is inhibited. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
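The WVT bookkeeping, in which each tagged tracer gains from its own region's evaporation and loses precipitation in proportion to the total vapor, can be sketched as a toy box model (illustrative only; the region names and the fixed precipitation fraction are assumptions, not the GCM's physics):

```python
def step(tracers, evap, precip_fraction):
    """Advance tagged water-vapor tracers one step.
    tracers: dict region -> vapor mass; evap: dict region -> evaporation added
    this step. Precipitation removes the same fraction from every tracer, i.e.
    in proportion to the model's prognostic water vapor."""
    for region, e in evap.items():
        tracers[region] = tracers.get(region, 0.0) + e
    for region in tracers:
        tracers[region] *= (1.0 - precip_fraction)
    return tracers

def recycling_ratio(tracers, local_region):
    """Fraction of the vapor column attributable to the local source."""
    total = sum(tracers.values())
    return tracers[local_region] / total
```

Because every physical process scales all tracers identically, the tagged masses always sum to the total vapor, which is the internal consistency check the WVT method validates against the prognostic water vapor.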

  14. Continuous monitoring of prostate position using stereoscopic and monoscopic kV image guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, M. Tynan R.; Parsons, Dave D.; Robar, James L.

    2016-05-15

    Purpose: To demonstrate continuous kV x-ray monitoring of prostate motion using both stereoscopic and monoscopic localizations, assess the spatial accuracy of these techniques, and evaluate the dose delivered by the added image guidance. Methods: The authors implemented both stereoscopic and monoscopic fiducial localization using a room-mounted dual oblique x-ray system. Recently developed monoscopic 3D position estimation techniques potentially overcome the issue of treatment head interference with stereoscopic imaging at certain gantry angles. To demonstrate continuous position monitoring, a gold fiducial marker was placed in an anthropomorphic phantom positioned on the Linac couch, which served as a programmable translation stage. The couch was programmed with a series of patient prostate motion trajectories exemplifying five distinct categories: stable prostate, slow drift, persistent excursion, transient excursion, and high-frequency excursions. The phantom and fiducial were imaged at 140 kVp and 0.63 mAs per image at 1 Hz for a 60 s monitoring period. Both stereoscopic and monoscopic 3D localization accuracies were assessed by comparison to the ground truth obtained from the Linac log file. Imaging dose was also assessed using optically stimulated luminescence dosimeter inserts in the phantom. Results: Stereoscopic localization accuracy varied between 0.13 ± 0.05 and 0.33 ± 0.30 mm, depending on the motion trajectory. Monoscopic localization accuracy varied from 0.2 ± 0.1 to 1.1 ± 0.7 mm. The largest localization errors were typically observed in the left–right direction. There were significant differences in accuracy between the two monoscopic views, but which view was better varied from trajectory to trajectory. The imaging dose was measured to be between 2 and 15 μGy/mAs, depending on location in the phantom. Conclusions: The authors have demonstrated the first use of monoscopic localization for a room-mounted dual x-ray system. Three-dimensional position estimation from monoscopic imaging permits continuous, uninterrupted intrafraction motion monitoring even in the presence of gantry rotation, which may block kV sources or imagers. This potentially allows for more accurate treatment delivery, by ensuring that the prostate does not deviate substantially from the initial setup position.
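Stereoscopic localization of the fiducial amounts to intersecting two oblique imaging rays. A minimal sketch of one standard triangulation, the midpoint of the common perpendicular between the two rays (not necessarily the authors' exact implementation):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two imaging rays.
    p_i: ray origin (x-ray focal spot), d_i: direction toward the detected fiducial."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(p1 + t1 d1) - (p2 + t2 d2)|
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

With monoscopic imaging only one ray is available, so the depth along the ray must come from a prior such as a motion model, which is why its accuracy figures above are somewhat worse than the stereoscopic ones.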

  15. The High Energy Transient Explorer (HETE): Mission and Science Overview

    NASA Astrophysics Data System (ADS)

    Ricker, G. R.; Atteia, J.-L.; Crew, G. B.; Doty, J. P.; Fenimore, E. E.; Galassi, M.; Graziani, C.; Hurley, K.; Jernigan, J. G.; Kawai, N.; Lamb, D. Q.; Matsuoka, M.; Pizzichini, G.; Shirasaki, Y.; Tamagawa, T.; Vanderspek, R.; Vedrenne, G.; Villasenor, J.; Woosley, S. E.; Yoshida, A.

    2003-04-01

    The High Energy Transient Explorer (HETE) mission is devoted to the study of gamma-ray bursts (GRBs) using soft X-ray, medium X-ray, and gamma-ray instruments mounted on a compact spacecraft. The HETE satellite was launched into equatorial orbit on 9 October 2000. A science team from France, Japan, Brazil, India, Italy, and the US is responsible for the HETE mission, which was completed for ~1/3 the cost of a NASA Small Explorer (SMEX). The HETE mission is unique in that it is entirely ``self-contained,'' insofar as it relies upon dedicated tracking, data acquisition, mission operations, and data analysis facilities run by members of its international Science Team. A powerful feature of HETE is its potential for localizing GRBs within seconds of the trigger with good precision (~10') using medium energy X-rays and, for a subset of bright GRBs, improving the localization to ~30'' accuracy using low energy X-rays. Real-time GRB localizations are transmitted to ground observers within seconds via a dedicated network of 14 automated ``Burst Alert Stations,'' thereby allowing prompt optical, IR, and radio follow-up, leading to the identification of counterparts for a large fraction of HETE-localized GRBs. During the next two years, HETE is the only satellite that can provide near-real-time localizations of GRBs and that can localize GRBs that do not have X-ray, optical, or radio afterglows. These capabilities are the key to allowing HETE to probe further the unique physics that produces the brightest known photon sources in the universe. To date (December 2002), HETE has produced 31 GRB localizations. Localization accuracies are routinely in the 4'-20' range; for the five GRBs with SXC localization, accuracies are ~1'-2'. In addition, HETE has detected ~25 bursts from soft gamma repeaters (SGRs), and >600 X-ray bursts (XRBs).

  16. Evaluation of coded aperture radiation detectors using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur

    2016-12-01

    We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with coded aperture enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can however be to a substantial extent reclaimed by using our new spatial and spatio-spectral scoring methods which rely on realistic assumptions regarding masking and its impact on measured photon distributions.
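Fusing evidence from multiple observations by summing per-observation log-likelihoods over a grid of candidate source cells can be sketched with a toy Poisson counting model (the inverse-square rate law, background/strength values, and standoff term are all illustrative assumptions, not the paper's detector model):

```python
import math

def log_poisson(k, lam):
    """Log-probability of observing k counts given expected rate lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def fuse(observations, grid, background=1.0, strength=100.0):
    """Score each candidate source cell by summing log-likelihoods over all
    observations; expected rate = background + strength / distance^2.
    observations: list of ((detector_x, detector_y), counts)."""
    best, best_ll = None, -math.inf
    for cell in grid:
        ll = 0.0
        for (dx, dy), k in observations:
            d2 = (cell[0] - dx) ** 2 + (cell[1] - dy) ** 2 + 1.0  # +1: standoff
            ll += log_poisson(k, background + strength / d2)
        if ll > best_ll:
            best, best_ll = cell, ll
    return best
```

Summing log-likelihoods is one of the two evidence-fusion strategies such a search can use; the directional response of a coded aperture would enter through an angle-dependent expected-rate term.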

  17. Measuring coalescing massive binary black holes with gravitational waves: The impact of spin-induced precession

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Ryan N.; Hughes, Scott A.

    The coalescence of massive black holes generates gravitational waves (GWs) that will be measurable by space-based detectors such as LISA to large redshifts. The spins of a binary's black holes have an important impact on its waveform. Specifically, geodetic and gravitomagnetic effects cause the spins to precess; this precession then modulates the waveform, adding periodic structure which encodes useful information about the binary's members. Following pioneering work by Vecchio, we examine the impact upon GW measurements of including these precession-induced modulations in the waveform model. We find that the additional periodicity due to spin precession breaks degeneracies among certain parameters, greatly improving the accuracy with which they may be measured. In particular, mass measurements are improved tremendously, by one to several orders of magnitude. Localization of the source on the sky is also improved, though not as much: low-redshift systems can be localized to an ellipse which is roughly 10 to a few tens of arcminutes in the long direction and a factor of 2 smaller in the short direction. Though not a drastic improvement relative to analyses which neglect spin precession, even modest gains in source localization will greatly facilitate searches for electromagnetic counterparts to GW events. Determination of distance to the source is likewise improved: we find that the relative error in measured luminosity distance is commonly ~0.1%-0.4% at z~1. Finally, with the inclusion of precession, we find that the magnitude of the spins themselves can typically be determined for low-redshift systems with an accuracy of about 0.1%-10%, depending on the spin value, allowing accurate surveys of mass and spin evolution over cosmic time.

  18. Squids in the Study of Cerebral Magnetic Field

    NASA Astrophysics Data System (ADS)

    Romani, G. L.; Narici, L.

    The following sections are included: * INTRODUCTION * HISTORICAL OVERVIEW * NEUROMAGNETIC FIELDS AND AMBIENT NOISE * DETECTORS * Room temperature sensors * SQUIDs * DETECTION COILS * Magnetometers * Gradiometers * Balancing * Planar gradiometers * Choice of the gradiometer parameters * MODELING * Current pattern due to neural excitations * Action potentials and postsynaptic currents * The current dipole model * Neural population and detected fields * Spherically bounded medium * SPATIAL CONFIGURATION OF THE SENSORS * SOURCE LOCALIZATION * Localization procedure * Experimental accuracy and reproducibility * SIGNAL PROCESSING * Analog Filtering * Bandpass filters * Line rejection filters * DATA ANALYSIS * Analysis of evoked/event-related responses * Simple average * Selected average * Recursive techniques * Similarity analysis * Analysis of spontaneous activity * Mapping and localization * EXAMPLES OF NEUROMAGNETIC STUDIES * Neuromagnetic measurements * Studies on the normal brain * Clinical applications * Epilepsy * Tinnitus * CONCLUSIONS * ACKNOWLEDGEMENTS * REFERENCES

  19. Mass of the Local Group from Proper Motions of Distant Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    van der Marel, Roeland

    2010-09-01

    The Local Group and its two dominant spirals, the Milky Way and M31, have become the benchmark for testing many aspects of cosmological and galaxy formation theories, due to many exciting new discoveries in the past decade. However, it is difficult to put results in a proper cosmological context, because our knowledge of the mass M of the Local Group remains uncertain by a factor of 4. In units of 10^{12} solar masses, a spherical infall model for the zero-velocity surface gives M ~ 1.3; the sum of estimates for the Milky Way and M31 masses gives M ~ 2.6; and the Local Group timing argument for the M31 orbit gives M ~ 5.6. It is possible to discriminate between the proposed masses by calculating the orbits of galaxies at the edge of the Local Group, which requires knowledge of transverse velocity components. We therefore propose to use ACS/WFC to determine the proper motions of the 4 dwarf galaxies near the edge of the Local Group (Cetus, Leo A, Tucana, Sag DIG) for which deep first-epoch data (with 5-7 year time baselines) already exist in the HST Archive. Our team has extensive expertise with HST astrometric science, and our past/ongoing work for, e.g., Omega Cen, LMC/SMC and M31 shows that the necessary astrometric accuracy is within the reach of HST's demonstrated capabilities. We have developed, tested, and published a new technique that uses compact background galaxies as astrometric reference sources, and we have already reduced the first-epoch data. The final predicted transverse velocity accuracy, 36 km/s when averaged over the sample, will be sufficient to discriminate between each of the proposed Local Group masses at 2-sigma significance (4-sigma between the most extreme values). Our project will yield the most accurate Local Group mass determination to date, and only HST can achieve the required accuracy.
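The conversion behind the quoted transverse velocity accuracy is the standard proper-motion-to-velocity relation v_t = 4.74 μ d (μ in mas/yr, d in kpc); the sample distance below is illustrative, not a measured value from the proposal:

```python
def transverse_velocity_kms(pm_mas_per_yr: float, distance_kpc: float) -> float:
    """Transverse velocity from a proper motion and a distance.
    The constant 4.74 converts 1 arcsec/yr at 1 pc into km/s."""
    return 4.74 * pm_mas_per_yr * distance_kpc
```

For example, a proper motion of 0.01 mas/yr at an assumed 750 kpc corresponds to about 35.6 km/s, showing why sub-mas/yr astrometry is required to reach the proposed 36 km/s accuracy at Local Group edge distances.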

  20. Development of an on-line source-tagged model for sulfate, nitrate and ammonium: A modeling study for highly polluted periods in Shanghai, China.

    PubMed

    Wu, Jian-Bin; Wang, Zifa; Wang, Qian; Li, Jie; Xu, Jianming; Chen, HuanSheng; Ge, Baozhu; Zhou, Guangqiang; Chang, Luyu

    2017-02-01

    An on-line source-tagged model coupled with an air quality model (Nested Air Quality Prediction Model System, NAQPMS) was applied to estimate source contributions of primary and secondary sulfate, nitrate and ammonium (SNA) during a representative winter period in Shanghai. This source-tagged model system could simultaneously track spatial and temporal sources of SNA, which were apportioned to their respective primary precursors in a single simulation run. The results indicate that in the study period, local emissions in Shanghai accounted for over 20% of SNA contributions and that Jiangsu and Shandong were the two major non-local sources. In particular, non-local emissions had higher contributions during recorded pollution periods. This suggests that the transport of pollutants plays a key role in air pollution in Shanghai. The temporal contributions show that emissions from the "current day" (the emission contribution from the day the model was simulating) contributed 60%-70% of the sulfate and ammonium concentrations but only 10%-20% of the nitrate concentration, while the previous days' contributions increased during the recorded pollution periods. Emissions released within three days contributed over 85% of SNA on average in January 2013. To evaluate the source-tagged model system, the results were compared with a sensitivity analysis (emission perturbation of -30%) and a backward trajectory analysis. The consistency of the comparison results indicated that the source-tagged model system can track sources of SNA with reasonable accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. 
In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).
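The chemotaxis-inspired search described above can be sketched as a run-and-tumble walk on a signal intensity field (a toy model: the intensity law, step size, and source location are assumptions, not the fielded algorithm):

```python
import math
import random

def signal(pos, source=(50.0, 50.0)):
    """Acoustic intensity surrogate: decays with distance from the source."""
    return 1.0 / (1.0 + math.hypot(pos[0] - source[0], pos[1] - source[1]))

def chemotaxis_search(start, steps=2000, step_len=1.0, seed=1):
    """Run-and-tumble, as in bacteria seeking nutrients: keep the current
    heading while the signal improves, tumble to a random heading otherwise."""
    rng = random.Random(seed)
    pos = list(start)
    heading = rng.uniform(0.0, 2.0 * math.pi)
    prev = signal(pos)
    for _ in range(steps):
        pos[0] += step_len * math.cos(heading)
        pos[1] += step_len * math.sin(heading)
        cur = signal(pos)
        if cur < prev:                      # signal got worse: tumble
            heading = rng.uniform(0.0, 2.0 * math.pi)
        prev = cur
    return pos
```

The appeal of the scheme for sparse UGS networks is that it needs only scalar intensity comparisons between successive samples, not a full gradient estimate.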

  2. A low cost indoor localization system for mobile robot experimental setup

    NASA Astrophysics Data System (ADS)

    Adinandra, S.; Syarif, A.

    2018-04-01

    Indoor localization is one of the most important parts of a mobile robot system. One fundamental requirement is to provide an easy-to-use and practical localization system for real-time experiments. In this paper we propose a combination of recent open-source virtual reality (VR) tools, a simple MATLAB code, and a low-cost USB webcam as an indoor mobile robot localization system. Using the VR tools as a server and MATLAB as a client, the proposed solution can cover up to 1.6 [m] × 3.2 [m] with a measured position accuracy of up to 1.2 [cm]. The system is insensitive to light, easy to move, and can be quickly set up. A series of successful real-time experiments with three different mobile robot types has been conducted.

  3. [Data sources, the data used, and the modality for collection].

    PubMed

    Mercier, G; Costa, N; Dutot, C; Riche, V-P

    2018-03-01

    The hospital costing process implies access to various sources of data. Whether a micro-costing or a gross-costing approach is used, the choice of the methodology is based on a compromise between the cost of data collection, data accuracy, and data transferability. This work describes the data sources available in France and the access modalities that are used, as well as the main advantages and shortcomings of: (1) the local unit costs, (2) the hospital analytical accounting, (3) the Angers database, (4) the National Health Cost Studies, (5) the INTER CHR/U databases, (6) the Program for Medicalizing Information Systems, and (7) the public health insurance databases. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  4. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.

    PubMed

    Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.

  5. Object localization using a biosonar beam: how opening your mouth improves localization.

    PubMed

    Arditi, G; Weiss, A J; Yovel, Y

    2015-08-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
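The gape trade-off can be illustrated with a toy Gaussian beam whose angular width scales as wavelength over aperture: a wider mouth (or a higher frequency) narrows the beam and steepens the intensity-versus-angle cue available for localization. This is a sketch under stated assumptions, not the paper's mathematical model:

```python
import math

def beam_sigma(mouth_gape_m: float, freq_hz: float, c: float = 343.0) -> float:
    """Angular width (rad) of a toy Gaussian beam: ~ wavelength / aperture."""
    return (c / freq_hz) / mouth_gape_m

def max_intensity_slope(sigma: float) -> float:
    """Steepest slope of I(theta) = exp(-theta^2 / (2 sigma^2)).
    |I'(theta)| peaks at theta = sigma with value exp(-1/2) / sigma, so a
    narrower beam (smaller sigma) gives a steeper, more informative cue."""
    return math.exp(-0.5) / sigma
```

The same scaling also hints at the paper's caveat: a very narrow beam leaves most directions with almost no energy, so at low signal-to-noise ratio large localization errors become possible outside the main lobe.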

  6. Object localization using a biosonar beam: how opening your mouth improves localization

    PubMed Central

    Arditi, G.; Weiss, A. J.; Yovel, Y.

    2015-01-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions. PMID:26361552

  7. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization

    PubMed Central

    Blenkmann, Alejandro O.; Phillips, Holly N.; Princich, Juan P.; Rowe, James B.; Bekinschtein, Tristan A.; Muravchik, Carlos H.; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2–3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions. PMID:28303098

  8. The performance of the spatiotemporal Kalman filter and LORETA in seizure onset localization.

    PubMed

    Hamid, Laith; Sarabi, Masoud; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael

    2015-08-01

    The assumption of spatial-smoothness is often used to solve the bioelectric inverse problem during electroencephalographic (EEG) source imaging, e.g., in low resolution electromagnetic tomography (LORETA). Since the EEG data show a temporal structure, the combination of the temporal-smoothness and the spatial-smoothness constraints may improve the solution of the EEG inverse problem. This study investigates the performance of the spatiotemporal Kalman filter (STKF) method, which is based on spatial and temporal smoothness, in the localization of a focal seizure's onset and compares its results to those of LORETA. The main finding of the study was that the STKF with an autoregressive model of order two significantly outperformed LORETA in the accuracy and consistency of the localization, provided that the source space consists of a whole-brain volumetric grid. In the future, these promising results will be confirmed using data from more patients and performing statistical analyses on the results. Furthermore, the effects of the temporal smoothness constraint will be studied using different types of focal seizures.
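The temporal-smoothness ingredient of the STKF, an autoregressive model of order two driving a Kalman recursion, can be sketched on a scalar signal (a toy illustration, not the full spatiotemporal source-space filter; all noise levels are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 0.2
# AR(2) transition with a1 = 2 cos(omega), a2 = -1 reproduces a sinusoid exactly.
F = np.array([[2.0 * np.cos(omega), -1.0],
              [1.0, 0.0]])
H = np.array([[1.0, 0.0]])            # we observe only the current sample
Q = np.eye(2) * 1e-4                  # small process noise (assumed)
R = np.array([[0.25]])                # measurement noise variance (assumed)

true = np.sin(omega * np.arange(200))
meas = true + rng.standard_normal(200) * 0.5

x, P, est = np.zeros(2), np.eye(2), []
for z in meas:
    x = F @ x                         # predict with the AR(2) temporal model
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)            # correct with the sample
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

kf_rmse = np.sqrt(np.mean((np.array(est) - true) ** 2))
raw_rmse = np.sqrt(np.mean((meas - true) ** 2))
```

The temporal prior lets the filter average information across samples, which is the mechanism by which the STKF can outperform an instantaneous method such as LORETA on noisy EEG.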

  9. The FieldTrip-SimBio pipeline for EEG forward solutions.

    PubMed

    Vorwerk, Johannes; Oostenveld, Robert; Piastra, Maria Carla; Magyari, Lilla; Wolters, Carsten H

    2018-03-27

    Accurately solving the electroencephalography (EEG) forward problem is crucial for precise EEG source analysis. Previous studies have shown that the use of multicompartment head models in combination with the finite element method (FEM) can yield high accuracies both numerically and with regard to the geometrical approximation of the human head. However, the workload for the generation of multicompartment head models has often been too high and the use of publicly available FEM implementations too complicated for a wider application of FEM in research studies. In this paper, we present a MATLAB-based pipeline that aims to resolve this lack of easy-to-use integrated software solutions. The presented pipeline allows for the easy application of five-compartment head models with the FEM within the FieldTrip toolbox for EEG source analysis. The FEM from the SimBio toolbox, more specifically the St. Venant approach, was integrated into the FieldTrip toolbox. We give a short sketch of the implementation and its application, and we perform a source localization of somatosensory evoked potentials (SEPs) using this pipeline. We then evaluate the accuracy that can be achieved using the automatically generated five-compartment hexahedral head model [skin, skull, cerebrospinal fluid (CSF), gray matter, white matter] in comparison to a highly accurate tetrahedral head model that was generated on the basis of a semiautomatic segmentation with very careful and time-consuming manual corrections. The source analysis of the SEP data correctly localizes the P20 component and achieves a high goodness of fit. The subsequent comparison to the highly detailed tetrahedral head model shows that the automatically generated five-compartment head model performs about as well as a highly detailed four-compartment head model (skin, skull, CSF, brain). 
This is a significant improvement over the three-compartment head model frequently used in practice, since the importance of modeling the CSF compartment has been shown in a variety of studies. The presented pipeline facilitates the use of five-compartment head models with the FEM for EEG source analysis. The accuracy with which the EEG forward problem can thereby be solved is increased compared to the commonly used three-compartment head models, and more reliable EEG source reconstruction results can be obtained.
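Why the conductivity compartments matter can be illustrated with a minimal 1-D finite element model of layered tissue: most of the potential drop occurs across the resistive skull layer. The conductivity values below are illustrative textbook-style numbers, and the 1-D toy is of course not the volumetric SimBio FEM:

```python
import numpy as np

# Layer thicknesses (m) and conductivities (S/m); illustrative values only.
layers = [("skin", 0.005, 0.43), ("skull", 0.007, 0.01),
          ("csf", 0.002, 1.79), ("brain", 0.080, 0.33)]
h = 0.0005                                    # element size (m)

sig = np.concatenate([np.full(int(round(t / h)), s) for _, t, s in layers])
n = len(sig) + 1
K = np.zeros((n, n))
for e, s in enumerate(sig):                   # assemble 1-D stiffness (s/h)*[[1,-1],[-1,1]]
    k = s / h
    K[e, e] += k; K[e + 1, e + 1] += k
    K[e, e + 1] -= k; K[e + 1, e] -= k

f = np.zeros(n)
f[0] = 1e-6                                   # inject current (per unit area) at the scalp
K[-1, :] = 0.0; K[-1, -1] = 1.0               # ground the deep boundary node
u = np.linalg.solve(K, f)

i_skin_end = int(round(0.005 / h))
i_skull_end = i_skin_end + int(round(0.007 / h))
drop_skin = u[0] - u[i_skin_end]              # potential drop across the skin layer
drop_skull = u[i_skin_end] - u[i_skull_end]   # drop across the resistive skull layer
```

In 1-D the FEM solution reduces to a series-resistor divider, so the low-conductivity skull dominates the drop; omitting or mis-assigning a compartment (e.g. lumping CSF into brain) therefore directly distorts the forward solution.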

  10. Odometry and Laser Scanner Fusion Based on a Discrete Extended Kalman Filter for Robotic Platooning Guidance

    PubMed Central

    Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier

    2011-01-01

    This paper describes a relative localization system used for navigating a convoy of robotic units in indoor environments. The positioning system fuses two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source compensates for the cumulative error inherent in dead reckoning, whereas the odometry source provides lower pose uncertainty over short trajectories. A discrete Extended Kalman Filter, customized for this application, is used to accomplish this aim under real-time constraints. Experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and the robots' built-in odometry sensors gives a high degree of robustness and accuracy to the relative localization problem of convoy units in indoor applications. PMID:22164079

  12. The use of precise ephemerides, ionospheric data, and corrected antenna coordinates in a long-distance GPS time transfer

    NASA Technical Reports Server (NTRS)

    Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine; Weiss, Marc A.

    1990-01-01

    Over intercontinental distances, the accuracy of Global Positioning System (GPS) time transfers ranges from 10 to 20 ns. The principal error sources are the broadcast ionospheric model, the broadcast ephemerides and the local antenna coordinates. For the first time, the three major error sources for GPS time transfer can be reduced simultaneously for a particular time link. Ionospheric measurement systems of the National Institute of Standards and Technology (NIST) type are now operating on a regular basis at NIST in Boulder and at the Paris Observatory in Paris. Broadcast ephemerides are currently recorded for time-transfer tracks between these sites, this being necessary for using precise ephemerides. Finally, corrected local GPS antenna coordinates have now been introduced into the GPS receivers at both sites. Shown here is the improvement in precision for this long-distance time comparison resulting from the reduction of these three error sources.

  13. Open-Fit Domes and Children with Bilateral High-Frequency Sensorineural Hearing Loss: Benefits and Outcomes.

    PubMed

    Johnstone, Patti M; Yeager, Kelly R; Pomeroy, Marnie L; Hawk, Nicole

    2018-04-01

    Open-fit domes (OFDs) coupled with behind-the-ear (BTE) hearing aids were designed for adult listeners with moderate-to-severe bilateral high-frequency hearing loss (BHFL) with little to no concurrent loss in the lower frequencies. Adult research shows that BHFL degrades sound localization accuracy (SLA) and that BTE hearing aids with conventional earmolds (CEs) make matters worse. In contrast, research has shown that OFDs enhance spatial hearing percepts in adults with BHFL. Although the benefits of OFDs have been studied in adults with BHFL, no published studies to date have investigated the use of OFDs in children with the same hearing loss configuration. This study seeks to use SLA measurements to assess efficacy of bilateral OFDs in children with BHFL. To measure SLA in children with BHFL to determine the extent to which hearing loss, age, duration of CE use, and OFDs affect localization accuracy. A within-participant experimental design using repeated measures was used to determine the effect of OFDs on localization accuracy in children with BHFL. A between-participant experimental design was used to compare localization accuracy between children with BHFL and age-matched controls with normal hearing (NH). Eighteen children with BHFL who used CE and 18 age-matched NH controls. Children in both groups were divided into two age groups: older children (10-16 yr) and younger children (6-9 yr). All testing was done in a sound-treated booth with a horizontal array of 15 loudspeakers (radius of 1 m). The stimulus was a spondee word, "baseball": the level averaged 60 dB SPL and randomly roved (±8 dB). Each child was asked to identify the location of a sound source. Localization error was calculated across the loudspeaker array for each listening condition. A significant interaction was found between immediate benefit from OFD and duration of CE usage. Longer CE usage was associated with degraded localization accuracy using OFDs. 
Regardless of chronological age, children who had used CEs for <6 yr showed immediate localization benefit using OFDs, whereas children who had used CEs for >6 yr showed immediate localization interference using OFDs. Development, however, may play a role in SLA in children with BHFL. When unaided, older children had significantly better localization acuity than younger children with BHFL. When compared to age-matched controls, children with BHFL of all ages showed greater localization error. Nearly all (94% [17/18]) children with BHFL spontaneously reported immediate own-voice improvement when using OFDs. OFDs can provide sound localization benefit to younger children with BHFL. However, immediate benefit from OFDs is reduced by prolonged use of CEs. Although developmental factors may improve localization abilities over time, children with BHFL will rarely equal their peers without early use of minimally disruptive hearing aid technology. Also, the occlusion effect likely impacts children far more than currently thought. American Academy of Audiology.
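Localization error over a loudspeaker array, as used in studies like this one, is typically summarized as an RMS deviation in degrees between the presented and identified speaker. A small sketch, assuming a hypothetical 15-speaker arc from -70 to +70 degrees in 10-degree steps (the study's exact spacing is not given here):

```python
import numpy as np

# Hypothetical front-azimuth array: 15 loudspeakers from -70 to +70 degrees
# in 10-degree steps.
speakers = np.linspace(-70.0, 70.0, 15)

def rms_localization_error(source_idx, response_idx):
    """RMS error in degrees between presented and identified loudspeakers."""
    err = speakers[np.asarray(response_idx)] - speakers[np.asarray(source_idx)]
    return float(np.sqrt(np.mean(err ** 2)))

print(rms_localization_error([0, 7, 14], [0, 7, 14]))  # perfect responses: 0.0
print(rms_localization_error([7, 7, 7], [8, 6, 8]))    # off-by-one responses: 10.0
```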

  14. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    PubMed

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate recovery of spatially extended sources, as achieved by cMEM (ESI and MSI) and COH (ESI), is desirable for source imaging of IEDs because it might influence surgical decisions. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  15. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

    Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
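The phase-shifting idea the abstract refers to can be illustrated with the standard N-step estimator: the display shows N sinusoidal patterns shifted by 2*pi/N, and each camera pixel's phase is recovered by correlating its N intensities with sine and cosine. A minimal sketch (the offset and modulation values are arbitrary):

```python
import numpy as np

def decode_phase(intensities):
    """Recover per-pixel phase from N phase-shifted sinusoidal patterns.

    intensities has shape (N, ...): the pixel values under patterns
    I_k = A + B*cos(phi + 2*pi*k/N). The recovered phase phi localizes the
    displayed fringe with subpixel accuracy.
    """
    I = np.asarray(intensities, dtype=float)
    N = I.shape[0]
    k = np.arange(N).reshape((N,) + (1,) * (I.ndim - 1))
    s = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
    c = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
    return np.arctan2(-s, c)  # standard N-step estimator

# Synthetic check: one pixel with true phase 1.0 rad, a 4-step sequence,
# arbitrary offset A = 5 and modulation B = 2.
phi = 1.0
obs = [5.0 + 2.0 * np.cos(phi + 2.0 * np.pi * k / 4) for k in range(4)]
print(decode_phase(np.array(obs)))  # recovers phi
```

Because the estimator depends only on the phase of the sinusoid, uniform defocus blur leaves it largely unchanged, which is the focus-independence property claimed above.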

  16. Computationally Efficient Radio Frequency Source Localization for Radio Interferometric Arrays

    NASA Astrophysics Data System (ADS)

    Steeb, J.-W.; Davidson, David B.; Wijnholds, Stefan J.

    2018-03-01

    Radio frequency interference (RFI) is an ever-increasing problem for remote sensing and radio astronomy, with radio telescope arrays especially vulnerable to RFI. Localizing the RFI source is the first step to dealing with the culprit system. In this paper, a new localization algorithm for interferometric arrays with low array beam sidelobes is presented. The algorithm has been adapted to work in both the near field and the far field (only the direction of arrival can be recovered when the source is in the far field). In the near field, the computational complexity of the algorithm is linear in the search grid size, compared to the cubic scaling of the state-of-the-art 3-D MUltiple SIgnal Classification (MUSIC) method. The new method is as accurate as 3-D MUSIC. The trade-off is that the proposed algorithm requires a one-off a priori calculation and storage of weighting matrices. The accuracy of the algorithm is validated using data generated by a low-frequency array while a hexacopter was flying around it and broadcasting a continuous-wave signal. For the flight, the mean distance between the differential GPS positions and the corresponding estimated positions of the hexacopter is 2 m at a wavelength of 6.7 m.
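For contrast with the baseline method mentioned above, here is a toy near-field MUSIC-style grid search: estimate the snapshot covariance, take its noise subspace, and scan candidate positions for steering vectors orthogonal to it. The array geometry, frequency, and noise level are illustrative assumptions, not the actual telescope setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five-element planar array and a near-field source (arbitrary units).
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 2.0]])
src = np.array([0.7, 1.5])
freq, c = 0.3, 1.0  # analysis frequency and propagation speed

def steering(p):
    """Near-field steering vector: per-sensor phase from the geometric delay."""
    d = np.linalg.norm(sensors - p, axis=1)
    return np.exp(-2j * np.pi * freq * d / c)

# Simulate snapshots: the source signal has a random phase per snapshot.
a0 = steering(src)
snaps = np.array([a0 * np.exp(2j * np.pi * rng.random())
                  + 0.01 * (rng.standard_normal(5) + 1j * rng.standard_normal(5))
                  for _ in range(200)])
R = snaps.T @ snaps.conj() / 200  # sample covariance matrix

# MUSIC: the steering vector at the true position is orthogonal to the noise
# subspace, so the inverse projection peaks there.
w, V = np.linalg.eigh(R)
En = V[:, :-1]  # noise subspace (one source assumed)
grid = [(x, y) for x in np.linspace(0, 1, 21) for y in np.linspace(0, 2, 41)]
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(np.array(g))) ** 2 for g in grid]
best = grid[int(np.argmax(spec))]
print(best)
```

The cubic cost the abstract mentions comes from evaluating such a spectrum over a dense 3-D grid; the proposed algorithm reduces that scan to linear cost by precomputing weighting matrices.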

  17. MEG-EEG Information Fusion and Electromagnetic Source Imaging: From Theory to Clinical Application in Epilepsy.

    PubMed

    Chowdhury, Rasheda Arman; Zerouali, Younes; Hedrich, Tanguy; Heers, Marcel; Kobayashi, Eliane; Lina, Jean-Marc; Grova, Christophe

    2015-11-01

    The purpose of this study is to develop and quantitatively assess whether fusion of EEG and MEG (MEEG) data within the maximum entropy on the mean (MEM) framework increases the spatial accuracy of source localization, by yielding better recovery of the spatial extent and propagation pathway of the underlying generators of inter-ictal epileptic discharges (IEDs). The key element in this study is the integration of the complementary information from EEG and MEG data within the MEM framework. MEEG was compared with EEG and MEG when localizing single transient IEDs. The fusion approach was evaluated using realistic simulation models involving one or two spatially extended sources mimicking propagation patterns of IEDs. We also assessed the impact of the number of EEG electrodes required for an efficient EEG-MEG fusion. MEM was compared with minimum norm estimate, dynamic statistical parametric mapping, and standardized low-resolution electromagnetic tomography. The fusion approach was finally assessed on real epileptic data recorded from two patients showing IEDs simultaneously in EEG and MEG. Overall, the localization of MEEG data using MEM provided better recovery of the source spatial extent, more sensitivity to the source depth and more accurate detection of the onset and propagation of IEDs than EEG or MEG alone. MEM was more accurate than the other methods. MEEG proved more robust than EEG and MEG for single IED localization in low signal-to-noise ratio conditions. We also showed that only a few EEG electrodes are required to bring additional relevant information to MEG during MEM fusion.

  18. Towards 3D Noise Source Localization using Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Umlauft, J.; Walter, F.; Lindner, F.; Flores Estrella, H.; Korn, M.

    2017-12-01

    Matched Field Processing (MFP) is an array-processing and beamforming method, initially developed in ocean acoustics, that locates noise sources in range, depth and azimuth. In this study, we discuss the applicability of MFP to geophysical problems on the exploration scale and its suitability as a monitoring tool for near-surface processes. First, we used synthetic seismograms to analyze the resolution and sensitivity of MFP in a 3D environment. The inversion shows how the localization accuracy is affected by the array design, pre-processing techniques, the velocity model and the considered wave-field characteristics. Hence, we can formulate guidelines for improved MFP handling. Additionally, we present field datasets acquired from two different environmental settings and in the presence of different source types. Small-scale, dense-aperture arrays (Ø <1 km) were installed on a natural CO2 degassing field (Czech Republic) and at a glacier site (Switzerland). The located noise sources form distinct three-dimensional zones and channel-like structures (spanning several 100 m in depth), which could be linked to the expected environmental processes taking place at each test site. Furthermore, fast spatio-temporal variations (hours to days) of the source distribution could be successfully monitored.
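The core of MFP can be sketched as a Bartlett processor: for each candidate source position, a modeled ("replica") phase-and-amplitude vector is correlated with the cross-spectral density matrix of the recorded noise, and the position maximizing the match is taken as the source. The geometry, medium velocity, and frequency below are toy assumptions:

```python
import numpy as np

# Surface geophones and a buried source; geometry, medium velocity and
# analysis frequency are toy assumptions.
geophones = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 0.0], [0.0, 40.0, 0.0],
                      [40.0, 40.0, 0.0], [20.0, 60.0, 0.0]])
true_src = np.array([25.0, 30.0, 15.0])
v, f = 500.0, 10.0  # m/s, Hz

def replica(p):
    """Replica vector for a candidate source: spherical spreading 1/d and the
    phase of the geometric travel time, normalized to unit norm."""
    d = np.linalg.norm(geophones - p, axis=1)
    a = np.exp(-2j * np.pi * f * d / v) / d
    return a / np.linalg.norm(a)

# Cross-spectral density matrix of the recorded noise at frequency f
# (here built directly from the true source plus a small noise floor).
a0 = replica(true_src)
K = np.outer(a0, a0.conj()) + 1e-4 * np.eye(5)

# Bartlett MFP: scan a 3D grid and keep the position whose replica best
# matches the data CSDM.
grid = [np.array([x, y, z]) for x in range(0, 45, 5)
        for y in range(0, 65, 5) for z in range(5, 30, 5)]
power = [np.real(replica(p).conj() @ K @ replica(p)) for p in grid]
best = grid[int(np.argmax(power))]
print(best)
```

In practice the replica vectors come from the velocity model the abstract highlights, which is why model errors map directly into localization errors.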

  19. Functional connectivity analysis in EEG source space: The choice of method

    PubMed Central

    Knyazeva, Maria G.

    2017-01-01

    Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conduction effects. An alternative—source-space analysis of FC—is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of the two source FC methods, the inverse-based source FC (ISFC) and the cortical partial coherence (CPC). To examine the effects of localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations by the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with the increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and therefore should be the preferred method. In the studies based on ldEEG, the CPC is the method of choice. PMID:28727750

  20. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.
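One of the stability measures underlying such accuracy evaluations is the Allan deviation, computable from phase data with the standard overlapping estimator. A minimal sketch, checked against simulated white-FM noise (where the Allan deviation should fall as 1/sqrt(tau)):

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0 from phase data x (s)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d2 = x[2 * m:] - 2 * x[m:n - m] + x[:n - 2 * m]  # overlapping second differences
    return np.sqrt(np.sum(d2 ** 2) / (2 * (m * tau0) ** 2 * len(d2)))

# Simulated white-FM noise at 1e-13 fractional frequency per 1 s sample:
# phase is the integral of frequency, i.e. a random walk.
rng = np.random.default_rng(3)
tau0 = 1.0
y = 1e-13 * rng.standard_normal(100_000)
x = np.concatenate([[0.0], np.cumsum(y) * tau0])
a1, a10 = overlapping_adev(x, tau0, 1), overlapping_adev(x, tau0, 10)
print(a1, a10, a1 / a10)  # ratio should be near sqrt(10) for white FM
```

The full noise model in the abstract adds flicker and random-walk components, each with its own characteristic slope on an Allan deviation plot.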

  1. On-board error correction improves IR earth sensor accuracy

    NASA Astrophysics Data System (ADS)

    Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.

    1989-10-01

    Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates are analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from noise in the detector and amplifiers, alignment instability, and localized radiance anomalies are analyzed, and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. An eightfold improvement in sensing accuracy is attainable, comparable with ground-based post facto attitude refinement.

  2. Mobile indoor localization using Kalman filter and trilateration technique

    NASA Astrophysics Data System (ADS)

    Wahid, Abdul; Kim, Su Mi; Choi, Jaeho

    2015-12-01

    In this paper, an indoor localization method based on Kalman-filtered RSSI is presented. The indoor communications environment, however, is rather harsh to mobiles, since a substantial number of objects distort the RSSI signals; fading and interference are the main sources of this distortion. Here, a Kalman filter is adopted to filter the RSSI signals, and the trilateration method is applied to obtain robust and accurate coordinates of the mobile station. From indoor experiments using WiFi stations, we have found that the proposed algorithm provides higher accuracy with relatively lower power consumption than a conventional method.
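The processing chain described above can be sketched in three small pieces: a scalar Kalman filter smoothing the RSSI stream, a log-distance path-loss conversion from RSSI to range, and a linearized least-squares trilateration. All model constants (process/measurement noise, transmit power, path-loss exponent) are illustrative assumptions:

```python
import numpy as np

def kalman_1d(rssi, q=0.05, r=4.0):
    """Scalar Kalman filter smoothing a noisy RSSI stream (dB). q, r assumed."""
    rssi = np.asarray(rssi, dtype=float)
    x, p = rssi[0], 1.0
    out = []
    for z in rssi:
        p += q               # predict (static signal-level model)
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the new RSSI sample
        p *= 1 - k
        out.append(x)
    return np.array(out)

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model; tx_power is the RSSI at 1 m (assumed)."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 anchor distances (linearized system)."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four WiFi anchors and a mobile at (3, 4); with exact ranges the linear
# least-squares solution recovers the position.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
d = np.linalg.norm(anchors - [3.0, 4.0], axis=1)
print(trilaterate(anchors, d))  # close to [3, 4]
```

In a real deployment the ranges fed to `trilaterate` would come from `rssi_to_distance` applied to the Kalman-filtered RSSI of each access point.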

  3. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-02-01

    To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projections of the object. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred binary applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid with the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (alpha, beta, gamma) were estimated with accuracies of 0.6 mm and 2 degrees, respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor.
This work describes a novel, accurate, fast, and completely automatic method to localize radio-opaque applicators of arbitrary shape from measured 2D x-ray projections. The results demonstrate approximately 1 mm accuracy when compared against the measured applicator projections. No lateral film is needed. By localizing the applicator's internal structure as well as the radioactive sources, the effect of intra-applicator and interapplicator attenuation can be included in the resultant dose calculations. Further validation tests using clinically acquired tandem and colpostats images will be performed for accurate and robust applicator/source localization in ICB patients.
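The gIFPM idea, stripped to a toy 2-D analogue, is an optimization loop: forward-project a rigid point model at a candidate pose, compare with the measured projections via the sum of squared intensity differences, and let a generic optimizer refine the pose. The model shape, blur width, and optimizer below are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Known rigid "applicator" model: five 2D points (toy stand-in for the mesh).
model = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0], [1.0, 2.0]])
bins = np.linspace(-5.0, 8.0, 200)

def project(points, axis):
    """Blurred 1D 'projection': Gaussian splat of each point onto an axis."""
    coords = points @ axis
    return np.sum(np.exp(-(bins[:, None] - coords[None, :]) ** 2 / 1.0), axis=1)

def transform(pose):
    """Apply a rigid 2D transform (tx, ty, theta) to the model points."""
    tx, ty, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return model @ R.T + [tx, ty]

# "Measured" projections of the object at its true (unknown) pose, two views.
true_pose = np.array([1.5, -0.5, 0.3])
views = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
measured = [project(transform(true_pose), a) for a in views]

def ssqd(pose):
    """Sum of squared intensity differences over both projections."""
    return sum(np.sum((project(transform(pose), a) - m) ** 2)
               for a, m in zip(views, measured))

# Iteratively refine the pose from an initial estimate until SSQD converges.
res = minimize(ssqd, x0=[1.0, 0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-10})
print(np.round(res.x, 3))
```

The real algorithm works with full 6-DOF poses and physically computed x-ray projections, but the convergence criterion and cost function have this same shape.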

  5. Pencil-beam redefinition algorithm dose calculations for electron therapy treatment planning

    NASA Astrophysics Data System (ADS)

    Boyd, Robert Arthur

    2001-08-01

    The electron pencil-beam redefinition algorithm (PBRA) of Shiu and Hogstrom has been developed for use in radiotherapy treatment planning (RTP). Earlier studies of Boyd and Hogstrom showed that the PBRA lacked an adequate incident beam model, that PBRA might require improved electron physics, and that no data existed which allowed adequate assessment of the PBRA-calculated dose accuracy in a heterogeneous medium such as one presented by patient anatomy. The hypothesis of this research was that by addressing the above issues the PBRA-calculated dose would be accurate to within 4% or 2 mm in regions of high dose gradients. A secondary electron source was added to the PBRA to account for collimation-scattered electrons in the incident beam. Parameters of the dual-source model were determined from a minimal data set to allow ease of beam commissioning. Comparisons with measured data showed 3% or better dose accuracy in water within the field for cases where 4% accuracy was not previously achievable. A measured data set was developed that allowed an evaluation of PBRA in regions distal to localized heterogeneities. Geometries in the data set included irregular surfaces and high- and low-density internal heterogeneities. The data was estimated to have 1% precision and 2% agreement with accurate, benchmarked Monte Carlo (MC) code. PBRA electron transport was enhanced by modeling local pencil beam divergence. This required fundamental changes to the mathematics of electron transport (divPBRA). Evaluation of divPBRA with the measured data set showed marginal improvement in dose accuracy when compared to PBRA; however, 4% or 2 mm accuracy was not achieved by either PBRA version for all data points. Finally, PBRA was evaluated clinically by comparing PBRA- and MC-calculated dose distributions using site-specific patient RTP data. Results show PBRA did not agree with MC to within 4% or 2 mm in a small fraction (<3%) of the irradiated volume.
Although the hypothesis of the research was shown to be false, the minor dose inaccuracies should have little or no impact on RTP decisions or patient outcome. Therefore, given ease of beam commissioning, documentation of accuracy, and calculational speed, the PBRA should be considered a practical tool for clinical use.

  6. Waves on Thin Plates: A New (Energy Based) Method on Localization

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2016-04-01

    Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method, applicable to thin plates, which is based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass or plexiglass plate of dimensions 80 cm x 40 cm x 1 cm equipped with four accelerometers and an acquisition card. Signals are generated by a quasi-perpendicular hit on the plate by a steel, glass or polyamide ball of various sizes, dropped from a height of 2-3 cm. Signals are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio and computational time. We show that this new technique, which is very versatile, works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz. A decent resolution (3 cm mean error) is achievable with very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by placing image sources outside the plate boundaries. The proposed method can easily be extended to three-dimensional environments, to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
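The inversion described above can be sketched as a grid search: each sensor's measured amplitude is divided by the assumed attenuation to a candidate position, giving an implied source amplitude; at the true position the implied values agree across sensors. The attenuation model and coefficients here are illustrative assumptions:

```python
import numpy as np

# Four accelerometers on an 80 cm x 40 cm plate (positions slightly off the
# search grid), and an assumed attenuation model: geometric spreading plus
# exponential damping with coefficient gamma.
sensors = np.array([[0.105, 0.095], [0.695, 0.105], [0.095, 0.305], [0.705, 0.295]])
gamma = 2.0  # damping coefficient in 1/m (assumed)

def attenuation(d):
    return np.exp(-gamma * d) / np.sqrt(d)

# Synthetic measurement: a source of amplitude 3.0 at (0.45, 0.22).
true_src = np.array([0.45, 0.22])
amps = 3.0 * attenuation(np.linalg.norm(sensors - true_src, axis=1))

# Grid search: invert each sensor amplitude to an implied source amplitude
# and keep the position where the four implied values agree best.
best, best_cost = None, np.inf
for x in np.linspace(0.01, 0.79, 79):
    for y in np.linspace(0.01, 0.39, 39):
        d = np.linalg.norm(sensors - [x, y], axis=1)
        implied = amps / attenuation(d)
        cost = np.var(implied / np.mean(implied))  # disagreement measure
        if cost < best_cost:
            best, best_cost = (x, y), cost
print(best)
```

Because only amplitudes are compared, the search needs no picked arrival times, which is what makes the approach robust at low sampling rates.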

  7. Dual-head gamma camera system for intraoperative localization of radioactive seeds

    NASA Astrophysics Data System (ADS)

    Arsenali, B.; de Jong, H. W. A. M.; Viergever, M. A.; Dickerscheid, D. B. M.; Beijst, C.; Gilhuijs, K. G. A.

    2015-10-01

    Breast-conserving surgery is a standard option for the treatment of patients with early-stage breast cancer. This form of surgery may result in incomplete excision of the tumor. Iodine-125 labeled titanium seeds are currently used in clinical practice to reduce the number of incomplete excisions. It seems likely that the number of incomplete excisions can be reduced even further if intraoperative information about the location of the radioactive seed is combined with preoperative information about the extent of the tumor. Such a combination is possible if the location of the radioactive seed is established in a world coordinate system that can be linked to the (preoperative) image coordinate system. With this in mind, we propose a radioactive seed localization system composed of two static ceiling-suspended gamma camera heads and two parallel-hole collimators. Physical experiments and computer simulations mimicking realistic clinical situations were performed to estimate the localization accuracy (defined as trueness and precision) of the proposed system with respect to collimator-source distance (ranging between 50 cm and 100 cm) and imaging time (ranging between 1 s and 10 s). The goal of the study was to determine whether or not a trueness of 5 mm can be achieved if a collimator-source distance of 50 cm and imaging time of 5 s are used (these specifications were defined by a group of dedicated breast cancer surgeons). The results from the experiments indicate that the location of the radioactive seed can be established with an accuracy of 1.6 mm ± 0.6 mm if a collimator-source distance of 50 cm and imaging time of 5 s are used (these experiments were performed with a 4.5 cm thick block phantom).
Furthermore, the results from the simulations indicate that a trueness of 3.2 mm or less can be achieved if a collimator-source distance of 50 cm and imaging time of 5 s are used (this trueness was achieved for all 14 breast phantoms which were used in this study). Based on these results we conclude that the proposed system can be a valuable tool for (real-time) intraoperative breast cancer localization.
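Geometrically, each collimated head constrains the seed to a line along its viewing axis through the detected 2-D peak, so the 3-D location follows from a least-squares intersection of two lines. A minimal sketch with an illustrative two-head geometry (not the system's actual calibration):

```python
import numpy as np

def closest_point_to_lines(points, dirs):
    """Least-squares 3D point nearest to a set of lines (point + direction)."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        Pr = np.eye(3) - np.outer(d, d)  # projector orthogonal to the line
        A += Pr
        b += Pr @ p
    return np.linalg.solve(A, b)

# A seed at (0.12, -0.05, 0.30) m seen by two heads: one looking straight
# down, one tilted by 0.5 rad. Each detection defines a line through the
# seed along the head's viewing axis.
seed = np.array([0.12, -0.05, 0.30])
axes = [np.array([0.0, 0.0, -1.0]),
        np.array([np.sin(0.5), 0.0, -np.cos(0.5)])]
pts = [seed - 0.8 * a for a in axes]  # points on each line, 0.8 m upstream
print(np.round(closest_point_to_lines(pts, axes), 6))
```

With noisy peak detections the two lines no longer intersect exactly, and the least-squares midpoint is what sets the trueness figures reported above.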

  8. Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Numerous studies suggest that local feedback of evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote sources of water for precipitation, based on the implementation of passive constituent tracers of water vapor (termed water vapor tracers, WVT) in a general circulation model. In this case, the major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In this approach, each WVT is associated with an evaporative source region, and tracks the water until it precipitates from the atmosphere. By assuming that the regional water is well mixed with water from other sources, the physical processes that act on the WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be computed within the model simulation, and can be validated against the model's prognostic water vapor. Furthermore, estimates of precipitation recycling can be compared with bulk diagnostic approaches. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional tracers, and the local source was the largest of the regional tracers (14%). 
The Gulf of Mexico and Atlantic 2 regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly. In general, most North American land regions showed a positive correlation between evaporation and recycling ratio (except the Southeast United States) and negative correlations of recycling ratio with precipitation and moisture transport (except the Southwestern United States). The Midwestern local source is positively correlated with local evaporation, but it is not correlated with water vapor transport. This is contrary to bulk diagnostic estimates of precipitation recycling. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
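The proportional, well-mixed bookkeeping described above can be reduced to a toy single-column water budget (a deliberately simplified stand-in for the WVT scheme, with made-up fluxes): every sink removes tracer water in proportion to the tracer's share of the column vapor, and the recycling ratio is the tagged fraction of accumulated precipitation.

```python
def recycling_ratio(evap_local, precip, inflow, steps=1000):
    """Toy single-column budget with a tracer tagging local evaporation.

    Each step: sources add vapor, then precipitation and outflow remove
    total and tracer vapor in proportion (well-mixed assumption).
    """
    q = q_tr = 0.0            # total and tagged column vapor
    p_total = p_local = 0.0   # accumulated total and tagged precipitation
    for _ in range(steps):
        q += evap_local + inflow
        q_tr += evap_local
        frac = q_tr / q       # tagged share of the column vapor
        p_total += precip
        p_local += precip * frac
        q -= precip
        q_tr -= precip * frac
        q *= 0.9              # outflow leaves the tagged share unchanged
        q_tr *= 0.9
    return p_local / p_total

# local evaporation 1, advected inflow 4: the local share of sources is 1/5
print(recycling_ratio(1.0, 3.0, 4.0))  # → 0.2 (up to rounding)
```

In the GCM the same proportionality is applied to every physical process acting on the prognostic water vapor, which is what lets the tagged precipitation be validated against the model's own hydrological cycle.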

  9. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To characterize the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
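The trilateration step described above, recovering a position from estimated receiver-to-transmitter distances, can be written as a linear least-squares problem: subtracting the first anchor's range equation from the others cancels the quadratic unknowns. A minimal sketch with hypothetical anchor coordinates (the paper additionally feeds such estimates into an extended Kalman filter):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linear least-squares trilateration.

    From |p - a_i|^2 = d_i^2, subtracting the i = 0 equation gives
    2 (a_i - a_0) . p = |a_i|^2 - |a_0|^2 + d_0^2 - d_i^2, i.e. A p = b.
    """
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical 2D layout: four anchors, exact distances to the target
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
target = np.array([3.0, 7.0])
dists = [np.linalg.norm(np.array(a) - target) for a in anchors]
est = trilaterate(anchors, dists)
print(est)  # ≈ [3. 7.]
```

With noisy range estimates the overdetermined system is what makes more than three anchors useful, and the residual gives a rough quality measure for the fix.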

  10. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment has emerged only recently. Such an interface has numerous potential uses. Spatial audio has the potential to be used in various manners ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of: signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. 
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.

  11. Emission characteristics and chemical components of size-segregated particulate matter in iron and steel industry

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Cheng, Shuiyuan; Yao, Sen; Xu, Tiebing; Zhang, Tingting; Ma, Yuetao; Wang, Hongliang; Duan, Wenjiao

    2018-06-01

    As one of the industries with the highest energy consumption and pollution, the iron and steel industry is regarded as one of the most important sources of particulate matter emissions. In this study, the chemical components of size-segregated particulate matter (PM) emitted from different manufacturing units in the iron and steel industry were sampled by a comprehensive sampling system. Results showed that the average particle mass concentration was highest in the sintering process, followed by the puddling, steelmaking and rolling processes. PM samples were divided into eight size fractions for chemical analysis: SO42- and NH4+ partitioned more into fine particles, while most of the Ca2+ was concentrated in coarse particles; the size distribution of mineral elements depended on the raw materials applied. Moreover, a local database of PM chemical source profiles for the iron and steel industry was built and applied in CMAQ modeling to simulate SO42- and NO3- concentrations; the simulation accuracy improved with the local chemical source profiles compared to the SPECIATE database. The results gained from this study are expected to be helpful for understanding the components of PM in the iron and steel industry and to contribute to source apportionment research.

  12. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account, is based on the actual positions of the elements, and can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy; its performance improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is used to locate sound sources. Together, these two algorithms compose the robust sound source localization approach. The more accurate steering vectors they provide can also support further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
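For reference, the classical narrowband MUSIC estimator that W2D-MUSIC builds on can be sketched for a uniform linear array. This is an illustrative simplification only: the paper's method additionally handles broadband near-field signals, arbitrary planar geometries, and estimated gain/phase perturbations.

```python
import numpy as np

def music_spectrum(X, n_src, spacing=0.5, angles=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.

    X: (elements, snapshots) complex data; spacing in wavelengths.
    """
    R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : X.shape[0] - n_src]     # noise-subspace eigenvectors
    m = np.arange(X.shape[0])
    spec = []
    for th in np.deg2rad(angles):
        a = np.exp(2j * np.pi * spacing * m * np.sin(th))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(spec)

# Two uncorrelated sources at -20 and +35 degrees, 8 elements, 200 snapshots
rng = np.random.default_rng(1)
m, snaps, true_deg = 8, 200, [-20.0, 35.0]
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m),
                                       np.sin(np.deg2rad(true_deg))))
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
N = 0.1 * (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))
ang, spec = music_spectrum(A @ S + N, n_src=2)
idx = [i for i in range(1, len(spec) - 1)
       if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
peaks = sorted(ang[i] for i in sorted(idx, key=lambda i: spec[i])[-2:])
print(peaks)  # the two largest pseudo-spectrum peaks, near -20 and 35 degrees
```

Model errors (gain, phase, element position) distort the steering vectors used in the search loop, which is exactly why the paper estimates those parameters before applying its MUSIC variant.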

  13. A discontinuous Galerkin approach for conservative modeling of fully nonlinear and weakly dispersive wave transformations

    NASA Astrophysics Data System (ADS)

    Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef

    2018-05-01

    This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to solve the fully nonlinear and weakly dispersive flows, within a scope to simultaneously address accuracy, conservativeness, cost-efficiency and practical needs. The mathematical model governing such flows is based on a variant form of the Green-Naghdi (GN) equations decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying and local slope limiting) have been restored from an RKDG2 solver to the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (i.e. via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (i.e. via adding a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic prediction of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.

  14. Properties of Radio Sources in the FRB 121102 Field

    NASA Astrophysics Data System (ADS)

    Bower, Geoffrey C.; Chatterjee, Shami; Wharton, Robert; Law, Casey J.; Hessels, Jason; Spolaor, Sarah; Abruzzo, Matthew W.; Bassa, Cees; Butler, Bryan J.; Cordes, James M.; Demorest, Paul; Kaspi, Victoria M.; McLaughlin, Maura; Ransom, Scott M.; Scholz, Paul; Seymour, Andrew; Spitler, Laura; Tendulkar, Shriharsh P.; PALFA Survey; VLA+AO FRB121102 Simultaneous Campaign Team; EVN FRB121102 Campaign Team; Realfast Team

    2017-01-01

    Fast radio bursts are millisecond duration radio pulses of unknown origin. With dispersion measures substantially in excess of expected Galactic contributions, FRBs are inferred to originate extragalactically, implying very high luminosities. Models include a wide range of high energy systems such as magnetars, merging neutron star binaries, black holes, and strong stellar magnetic fields driving coherent radio emission. Central to the mystery of FRB origins is the absence of confirmed host objects at any wavelength. This is primarily the result of the poor localization from single dish detection of FRBs. Of the approximately 20 known examples, only one, FRB 121102, has been observed to repeat. This repetition presents an opportunity for detailed follow-up if interferometric localization to arcsecond accuracy can be obtained. The Very Large Array has previously been used to localize individual pulses from pulsars and rotating radio transients with arcsecond accuracy. We present here the results of radio observations of the field of FRB 121102 that permit us to constrain models of possible progenitors of this bursting source. These observations can characterize active galactic nuclei, stars, and other progenitor objects.

  15. Detecting Large-Scale Brain Networks Using EEG: Impact of Electrode Density, Head Modeling and Source Localization

    PubMed Central

    Liu, Quanying; Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2018-01-01

    Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis. PMID:29551969

  16. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8 degrees for speech and less than 4 degrees with a pink noise burst. The results allow for the density of WFS systems to be selected from the required localization accuracy. Also, by exploiting the ventriloquist effect, the angular resolution of an audio rendering may be reduced when combined with spatially-accurate video.

  17. Experimental studies of high-accuracy RFID localization with channel impairments

    NASA Astrophysics Data System (ADS)

    Pauls, Eric; Zhang, Yimin D.

    2015-05-01

    Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights of the issues and solutions toward achieving high-accuracy passive RFID localization.
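The reader-tag distance estimate at the core of such RSSI multilateration typically inverts a log-distance path-loss model; the antenna-pattern and multipath compensation discussed above amounts to correcting the RSSI reading before this inversion. A minimal sketch with assumed parameters (the reference power and path-loss exponent below are made up, not values from the paper):

```python
def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-40.0, path_loss_exp=2.0, d_ref=1.0):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(d_ref) - 10 * n * log10(d / d_ref)."""
    return d_ref * 10 ** ((rssi_ref_dbm - rssi_dbm) / (10 * path_loss_exp))

# With -40 dBm at 1 m and a free-space exponent n = 2:
print(rssi_to_distance(-60.0))  # → 10.0 (metres)
```

Because distance is exponential in the RSSI error, the quantization and measurement errors examined in the paper translate into multiplicative distance errors, which is why careful channel modeling pays off at longer ranges.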

  18. Study of curved and planar frequency-selective surfaces with nonplanar illumination

    NASA Technical Reports Server (NTRS)

    Caroglanian, Armen; Webb, Kevin J.

    1991-01-01

    A locally planar technique (LPT) is investigated for determining the forward-scattered field from a generally shaped inductive frequency-selective surface (FSS) with nonplanar illumination. The results of an experimental study are presented to assess the LPT accuracy. The effects of a nonplanar incident field are determined by comparing the LPT numerical results with a series of experiments with the feed source placed at varying distances from the planar FSS. The limitations of the LPT model due to surface curvature are investigated in an experimental study of the scattered fields from a set of hyperbolic cylinders of different curvatures. From these comparisons, guidelines for applying the locally planar technique are developed.

  19. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The magnetometer system is expected to be more sensitive to brain stem sources than the gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted in a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  20. A comparison of needle tip localization accuracy using 2D and 3D trans-rectal ultrasound for high-dose-rate prostate cancer brachytherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Hrinivich, W. Thomas; Hoover, Douglas A.; Surry, Kathleen; Edirisinghe, Chandima; Montreuil, Jacques; D'Souza, David; Fenster, Aaron; Wong, Eugene

    2016-03-01

    Background: High-dose-rate brachytherapy (HDR-BT) is a prostate cancer treatment option involving the insertion of hollow needles into the gland through the perineum to deliver a radioactive source. Conventional needle imaging involves indexing a trans-rectal ultrasound (TRUS) probe in the superior/inferior (S/I) direction, using the axial transducer to produce an image set for organ segmentation. These images have limited resolution in the needle insertion direction (S/I), so the sagittal transducer is used to identify needle tips, requiring a manual registration with the axial view. This registration introduces a source of uncertainty in the final segmentations and subsequent treatment plan. Our lab has developed a device enabling 3D-TRUS guided insertions with high S/I spatial resolution, eliminating the need to align axial and sagittal views. Purpose: To compare HDR-BT needle tip localization accuracy between 2D and 3D-TRUS. Methods: 5 prostate cancer patients underwent conventional 2D TRUS guided HDR-BT, during which 3D images were also acquired for post-operative registration and segmentation. Needle end-length measurements were taken, providing a gold standard for insertion depths. Results: 73 needles were analyzed from all 5 patients. Needle tip position differences between imaging techniques were found to be largest in the S/I direction with mean+/-SD of -2.5+/-4.0 mm. End-length measurements indicated that 3D TRUS provided statistically significantly lower mean+/-SD insertion depth error of -0.2+/-3.4 mm versus 2.3+/-3.7 mm with 2D guidance (p < .001). Conclusions: 3D TRUS may provide more accurate HDR-BT needle localization than conventional 2D TRUS guidance for the majority of HDR-BT needles.

  1. Local systematic differences in 2MASS positions

    NASA Astrophysics Data System (ADS)

    Bustos Fierro, I. H.; Calderón, J. H.

    2018-01-01

    We have found that positions in the 2MASS All-sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ˜ 5 to ˜ 8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The rectified 2MASS catalog with the proposed method can be regarded as an extension of UCAC4 for astrometry with accuracy ˜ 90 mas in its positions, with negligible systematic errors. We also show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.

  2. Localization of a Robotic Crawler for CANDU Fuel Channel Inspection

    NASA Astrophysics Data System (ADS)

    Manning, Mark

    This thesis discusses the design and development of a pipe crawling robot for the purpose of CANDU fuel channel inspection. The pipe crawling robot shall be capable of deploying the existing CIGAR (Channel Inspection and Gauging Apparatus for Reactors) sensor head. The main focus of this thesis is the design of the localization system for this robot and the many tests that were completed to demonstrate its accuracy. The proposed localization system consists of three redundant resolver wheels mounted to the robot's frame and two resolvers that are mounted inside a custom made cable drum. This cable drum shall be referred to in this thesis as the emergency retrieval device. This device serves the dual-purpose of providing absolute position measurements (via the cable that is tethered to the robot) as well as retrieving the robot if it is inoperable. The estimated accuracy of the proposed design is demonstrated with the use of a proof-of-concept prototype and a custom made test bench that uses a vision system to provide a more accurate estimate of the robot's position. The only major difference between the proof-of-concept prototype and the proposed solution is that the more expensive radiation hardened components were not used in the proof-of-concept prototype design. For example, the proposed solution shall use radiation hardened resolver wheels, whereas the proof-of-concept prototype used encoder wheels. These encoder wheels provide the same specified accuracy as the radiation hardened resolvers for the most realistic results possible. The rationale behind the design of the proof-of-concept prototype, the proposed final design, the design of the localization system test bench, and the test plan for developing all of the components of the design related to the robot's localization system are discussed in the thesis. The test plan provides a step by step guide to the configuration and optimization of an Unscented Kalman Filter (UKF). 
The UKF was selected as the ideal sensor fusion algorithm for use in this application. Benchmarking was completed to compare the accuracy achieved by the UKF algorithm to other data fusion algorithms. When compared to other algorithms, the UKF demonstrated the best accuracy when considering all likely sources of error such as sensor failure and surface unevenness. The test results show that the localization system is able to achieve a worst case positional accuracy of +/- 3.6 mm for the robot crawler over the full 6350 mm distance that the robot travels inside the pressure tube. This is extrapolated from the test results completed over the shorter length test bench with simulated surface unevenness. The key benefits of the pipe crawling robot compared to the current system include reduced dosage to workers and reduced outage time. These advantages arise because the robot can be automated and multiple inspection robots can be deployed simultaneously, whereas the current inspection system can complete only one inspection at a time.
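The thesis adopts a UKF; for a purely 1D position state the same fusion idea, dead-reckoned wheel increments corrected by absolute cable-drum readings, can be illustrated with an ordinary linear Kalman filter. All noise parameters below are assumed for illustration only, not taken from the thesis:

```python
import random

class PositionKF:
    """1D Kalman filter: predict with wheel-odometry increments,
    update with absolute cable-drum position readings."""
    def __init__(self, x0=0.0, p0=1.0, q=0.5, r=4.0):
        self.x, self.p = x0, p0   # position estimate (mm) and its variance
        self.q, self.r = q, r     # process / measurement noise variances

    def predict(self, odo_increment_mm):
        self.x += odo_increment_mm
        self.p += self.q

    def update(self, drum_position_mm):
        k = self.p / (self.p + self.r)            # Kalman gain
        self.x += k * (drum_position_mm - self.x)
        self.p *= 1.0 - k

random.seed(2)
kf, truth = PositionKF(), 0.0
for _ in range(200):                              # 10 mm steps over a 2 m run
    truth += 10.0
    kf.predict(10.0 + random.gauss(0.0, 0.3))     # odometry with slip noise
    kf.update(truth + random.gauss(0.0, 2.0))     # noisy absolute drum reading
print(abs(kf.x - truth))  # typically around a millimetre for these settings
```

The absolute drum measurement is what keeps the odometry drift bounded over the full channel length; the UKF generalizes this update to the nonlinear, redundant-sensor case the thesis actually faces.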

  3. Parametric Loop Division for 3D Localization in Wireless Sensor Networks

    PubMed Central

    Ahmad, Tanveer

    2017-01-01

    Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades. A variety of algorithms have been proposed to improve localization accuracy. However, they are either limited to two-dimensional (2D) space or require specific sensor deployment for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known parametric Loop division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy than existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and derive its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. PMID:28737714
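The shrinking-region idea can be illustrated with a simple 3D grid-refinement search: keep a box bounded by the anchors, pick the candidate point whose anchor distances best match the measured ranges, and shrink the box around it. This is an illustrative stand-in with made-up coordinates, not the PLD construction itself:

```python
import numpy as np

def shrink_search(anchors, dists, lo, hi, iters=25, grid=7):
    """Repeatedly shrink a 3D search box around the grid point whose
    anchor distances best match the measured ranges."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for _ in range(iters):
        axes = [np.linspace(lo[k], hi[k], grid) for k in range(3)]
        pts = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
        resid = np.abs(np.linalg.norm(pts[:, None] - anchors, axis=2) - d).sum(1)
        best = pts[np.argmin(resid)]          # best-matching candidate
        half = (hi - lo) / 3                  # shrink the box around it
        lo, hi = best - half, best + half
    return best

# Hypothetical anchor network and a node inside the bounded region
anchors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10), (10, 10, 10)]
target = np.array([2.0, 6.5, 3.0])
dists = [np.linalg.norm(np.array(a) - target) for a in anchors]
est = shrink_search(anchors, dists, lo=(0, 0, 0), hi=(10, 10, 10))
print(np.round(est, 3))  # ≈ [2.  6.5 3. ]
```

Each iteration contracts the region by a constant factor, so the attainable accuracy is set by the number of iterations and the initial anchor-bounded volume, mirroring the iterative-shrinking argument in the paper's analytical framework.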

  4. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    PubMed

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  5. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  6. Improvements on the accuracy of beam bugs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y.J.; Fessenden, T.

    1998-08-17

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as ''beam bugs'', have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. The beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. This paper presents the analysis and experimental tests of the beam bugs used for beam current and position measurements in and after the fast kicker, and concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.

  7. Improvements on the accuracy of beam bugs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y J; Fessenden, T

    1998-09-02

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. The beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. This paper presents the analysis and experimental tests of the beam bugs used for beam current and position measurements in and after the fast kicker, and concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.

  8. Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.

    PubMed

    Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd

    2018-05-06

    The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution Existing knowledge Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information. 
New knowledge added by this article Preschool children track sources' accuracy across communication media - from verbal to text-based modalities and vice versa. Children link the reliability of text-based sources to the reliability of the author. © 2018 The British Psychological Society.

  9. Assessing the short-term clock drift of early broadband stations with burst events of the 26 s persistent and localized microseism

    NASA Astrophysics Data System (ADS)

    Xie, Jun; Ni, Sidao; Chu, Risheng; Xia, Yingjie

    2018-01-01

    An accurate seismometer clock plays an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 s (e.g. GSC in 1992), especially in the early days of global seismic networks. The 26 s Persistent Localized (PL) microseism event in the Gulf of Guinea sometimes excites strong and coherent signals and can be used as a repeating source for assessing the stability of seismometer clocks. Taking stations GSC, PAS and PFO in the TERRAscope network as examples, the 26 s PL signal can be easily observed in the ambient noise cross-correlation functions between these stations and a remote station, OBN, at an interstation distance of about 9700 km. The travel-time variation of this 26 s signal in the ambient noise cross-correlation function is used to infer clock error. A drastic clock error is detected during June 1992 for station GSC, but not for stations PAS and PFO. This short-term clock error is confirmed by both teleseismic and local earthquake records, with a magnitude of 25 s. Averaged over the three stations, the accuracy of the ambient noise cross-correlation function method with the 26 s source is about 0.3-0.5 s. Using this PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual cross-correlation of short-period (<20 s) ambient noise might be less effective due to its attenuation over long interstation distances. However, the method suffers from a cycling problem and should be verified by teleseismic/local P waves. Further studies are also needed to investigate whether the 26 s source moves spatially and how this affects clock drift detection.
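
The clock-check idea above, reading a clock error off the travel-time shift of the 26 s signal in a cross-correlation function, can be sketched in a few lines. This is our own toy construction, not the authors' code: a synthetic narrow-band wavelet stands in for the 26 s signal, and all parameters are invented.

```python
import numpy as np

# Toy illustration: a persistent 26 s signal appears in noise
# cross-correlation functions; a clock error delays it, and the delay is
# recovered by cross-correlating the shifted trace with a reference.
dt = 1.0                       # sample interval, s
t = np.arange(0.0, 4000.0, dt)
signal = np.sin(2 * np.pi * t / 26.0) * np.exp(-((t - 2000.0) / 300.0) ** 2)

clock_error = 25.0             # s, the magnitude reported for GSC in June 1992
shifted = np.interp(t - clock_error, t, signal, left=0.0, right=0.0)

lags = np.arange(-len(t) + 1, len(t)) * dt
xcorr = np.correlate(shifted, signal, mode="full")
measured = lags[np.argmax(xcorr)]   # recovered delay, close to 25 s
print(measured)
```

Because the wavelet is nearly periodic at 26 s, the correlation has strong sidelobes one period away from the true lag; this is the cycling problem the abstract flags, and why the result should be cross-checked against teleseismic or local P-wave arrivals.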

  10. Computing the Partition Function for Kinetically Trapped RNA Secondary Structures

    PubMed Central

    Lorenz, William A.; Clote, Peter

    2011-01-01

    An RNA secondary structure is locally optimal if there is no lower-energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest-neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in polynomial time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures – indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, and (3) the (modified) maximum expected accuracy structure, computed by taking into account base-pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy. 
Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
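
For contrast with the locally optimal subensemble handled by RNAlocopt, the partition function over all secondary structures can be computed with a McCaskill-style dynamic program. The sketch below is deliberately simplified: a toy energy of -1 kcal/mol per base pair replaces the Turner model, while the pair set, the 3-nt minimum hairpin loop, and RT are standard.

```python
import math

# Simplified McCaskill-style partition function over ALL secondary
# structures (toy energy model, not the Turner model).
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
RT = 0.6157   # kcal/mol at 37 C
THETA = 3     # minimum number of unpaired bases in a hairpin loop

def partition(seq):
    n = len(seq)
    Z = [[1.0] * n for _ in range(n)]  # Z[i][j]: subsequence i..j inclusive
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            total = Z[i][j - 1]                  # case: j unpaired
            for k in range(i, j - THETA):        # case: j paired with k
                if (seq[k], seq[j]) in PAIRS:
                    left = Z[i][k - 1] if k > i else 1.0
                    inner = Z[k + 1][j - 1] if k + 1 <= j - 1 else 1.0
                    total += left * inner * math.exp(1.0 / RT)
            Z[i][j] = total
    return Z[0][n - 1]

print(partition("AAAA"))   # 1.0: only the empty structure exists
```

For "GAAAC" exactly one hairpin (the G-C closing pair over a 3-nt loop) is possible, so the partition function is 1 + e^(1/RT); restricting the sum to locally optimal structures, as RNAlocopt does, requires substantially more machinery than this recursion.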

  11. A hybrid localization technique for patient tracking.

    PubMed

    Rodionov, Denis; Kolev, George; Bushminkin, Kirill

    2013-01-01

    Nowadays numerous technologies are employed for tracking patients and assets in hospitals or nursing homes, each with its own advantages and drawbacks. For example, WiFi localization has relatively good accuracy but cannot be used during a power outage or in areas with poor WiFi coverage. Magnetometer positioning and cellular networks do not have such problems, but they are not as accurate as WiFi localization. This paper describes a technique that simultaneously employs different localization technologies to enhance the stability and average accuracy of localization. The proposed algorithm is based on a fingerprinting method paired with data fusion and prediction algorithms for estimating the object location. The core idea of the algorithm is technology fusion using error estimation methods. To test the accuracy and performance of the algorithm, a simulation environment was implemented. Significant accuracy improvement was shown in practical scenarios.
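
The fusion step can be as simple as inverse-variance weighting of the per-technology position estimates. The snippet below is a hypothetical sketch of that idea only: all numbers are invented, and the paper's fingerprinting and prediction stages are not modeled.

```python
import numpy as np

# Combine position estimates from several technologies, weighting each by
# the inverse of its (assumed) error variance.
estimates = {
    "wifi":         (np.array([12.0, 4.0]), 1.0),   # (position m, variance m^2)
    "magnetometer": (np.array([14.0, 5.0]), 9.0),
    "cellular":     (np.array([20.0, 9.0]), 25.0),
}

weights = np.array([1.0 / var for _, var in estimates.values()])
positions = np.array([pos for pos, _ in estimates.values()])
fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()

print(fused)  # pulled strongly toward the most accurate (WiFi) estimate
```

If WiFi drops out (e.g. during a power outage), its entry is simply removed and the remaining technologies still yield a less accurate but usable fix, which is the stability benefit the abstract claims.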

  12. Underwater Threat Source Localization: Processing Sensor Network TDOAs with a Terascale Optical Core Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barhen, Jacob; Imam, Neena

    2007-01-01

    Revolutionary computing technologies are defined in terms of technological breakthroughs, which leapfrog over near-term projected advances in conventional hardware and software to produce paradigm shifts in computational science. For underwater threat source localization using information provided by a dynamical sensor network, one of the most promising computational advances builds upon the emergence of digital optical-core devices. In this article, we present initial results of sensor network calculations that focus on the concept of signal wavefront time-difference-of-arrival (TDOA). The corresponding algorithms are implemented on the EnLight processing platform recently introduced by Lenslet Laboratories. This tera-scale digital optical core processor is optimized for array operations, which it performs in a fixed-point-arithmetic architecture. Our results (i) illustrate the ability to reach the required accuracy in the TDOA computation, and (ii) demonstrate that a considerable speed-up can be achieved when using the EnLight 64a prototype processor as compared to a dual Intel Xeon processor.
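
Independent of the optical-core hardware, the underlying TDOA localization problem can be sketched in a few lines: simulate time differences of arrival from a known source and recover the source by least squares over a search grid. All geometry and the sound speed below are invented for illustration; this is not the EnLight implementation.

```python
import numpy as np

# 2-D TDOA localization sketch: four sensors measure arrival-time
# differences relative to sensor 0; a grid search finds the position whose
# predicted TDOAs best match the measurements.
C = 1500.0  # nominal sound speed in water, m/s

sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
source = np.array([37.0, 62.0])  # hypothetical threat location

dist = np.linalg.norm(sensors - source, axis=1)
tdoa = (dist - dist[0]) / C      # TDOAs relative to the first sensor

# Coarse deterministic grid search minimizing the squared TDOA residual.
xs = np.linspace(0.0, 100.0, 201)
ys = np.linspace(0.0, 100.0, 201)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        err = np.sum(((d - d[0]) / C - tdoa) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print(best)  # grid point nearest the true source
```

In practice the residual would be minimized with a closed-form or Gauss-Newton solver rather than a grid, but the grid form makes explicit the dense array arithmetic that maps naturally onto an array-oriented processor.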

  13. Three-dimensional single-molecule localization with nanometer accuracy using Metal-Induced Energy Transfer (MIET) imaging

    NASA Astrophysics Data System (ADS)

    Karedla, Narain; Chizhik, Anna M.; Stein, Simon C.; Ruhlandt, Daja; Gregor, Ingo; Chizhik, Alexey I.; Enderlein, Jörg

    2018-05-01

    Our paper presents the first theoretical and experimental study using single-molecule Metal-Induced Energy Transfer (smMIET) for localizing single fluorescent molecules in three dimensions. Metal-Induced Energy Transfer describes the resonant energy transfer from the excited state of a fluorescent emitter to surface plasmons in a metal nanostructure. This energy transfer is strongly distance-dependent and can be used to localize an emitter along one dimension. We have used Metal-Induced Energy Transfer in the past for localizing fluorescent emitters with nanometer accuracy along the optical axis of a microscope. The combination of smMIET with single-molecule localization based super-resolution microscopy that provides nanometer lateral localization accuracy offers the prospect of achieving isotropic nanometer localization accuracy in all three spatial dimensions. We give a thorough theoretical explanation and analysis of smMIET, describe its experimental requirements, also in its combination with lateral single-molecule localization techniques, and present first proof-of-principle experiments using dye molecules immobilized on top of a silica spacer, and of dye molecules embedded in thin polymer films.
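
The operating principle of MIET axial localization is that the measured fluorescence lifetime varies monotonically with emitter height above the metal film, so a precomputed calibration curve can be inverted numerically. The curve below is a made-up monotonic stand-in, not a real MIET calculation (which requires the full plasmonic near-field theory the paper develops); the measured lifetime is likewise hypothetical.

```python
import numpy as np

# Conceptual sketch: invert a (hypothetical) lifetime-vs-height calibration
# curve to obtain the axial position of a single emitter.
z = np.linspace(0.0, 200.0, 401)            # height above the film, nm
tau = 3.0 * (1.0 - np.exp(-z / 60.0))       # hypothetical lifetime curve, ns

measured_tau = 1.9                          # ns, a hypothetical measurement
height = np.interp(measured_tau, tau, z)    # invert the calibration curve
print(round(float(height), 1))
```

Combining such an axial estimate with the lateral (x, y) position from standard single-molecule localization is what yields the isotropic three-dimensional accuracy discussed above.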

  14. Joint source based analysis of multiple brain structures in studying major depressive disorder

    NASA Astrophysics Data System (ADS)

    Ramezani, Mahdi; Rasoulian, Abtin; Hollenstein, Tom; Harkness, Kate; Johnsrude, Ingrid; Abolmaesumi, Purang

    2014-03-01

    We propose a joint Source-Based Analysis (jSBA) framework to identify brain structural variations in patients with Major Depressive Disorder (MDD). In this framework, features representing position, orientation and size (i.e. pose), shape, and local tissue composition are extracted. Subsequently, simultaneous analysis of these features within a joint analysis method is performed to generate the basis sources that show significant differences between subjects with MDD and those in the healthy control group. Moreover, in a leave-one-out cross-validation experiment, we use a Fisher Linear Discriminant (FLD) classifier to identify individuals within the MDD group. Results show that we can classify the MDD subjects with an accuracy of 76% solely based on the information gathered from the joint analysis of pose, shape, and tissue composition in multiple brain structures.
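
The classification protocol, an FLD classifier evaluated with leave-one-out cross-validation, can be sketched on synthetic features. Gaussian clusters below stand in for the extracted pose/shape/composition sources; the 76% figure in the abstract comes from real data, not from this toy.

```python
import numpy as np

# Toy Fisher Linear Discriminant with leave-one-out evaluation.
rng = np.random.default_rng(7)
X0 = rng.normal(0.0, 1.0, (30, 4))   # "healthy control" feature vectors
X1 = rng.normal(1.2, 1.0, (30, 4))   # "MDD" feature vectors
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

def fld_predict(Xtr, ytr, x):
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    Sw = np.cov(Xtr[ytr == 0].T) + np.cov(Xtr[ytr == 1].T)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)                        # Fisher direction
    return int(w @ x > w @ (m0 + m1) / 2.0)                 # midpoint threshold

correct = sum(fld_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
              for i in range(len(y)))
accuracy = correct / len(y)
print(accuracy)  # leave-one-out accuracy on the synthetic clusters
```

Leave-one-out keeps the held-out subject entirely outside the training fold at each step, which is what makes the reported accuracy an (approximately) unbiased estimate despite the small sample.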

  15. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
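
The two-stage strategy, deterministic global exploration followed by local refinement, can be sketched generically. In the toy below a coarse grid stands in for DIRECT's dividing-rectangles scheme and a shrinking-step coordinate search stands in for Powell's method, so it illustrates only the control flow, not the actual algorithms, and the 1-D function is a stand-in for an image-similarity metric.

```python
import numpy as np

# Global exploration + local refinement on a multimodal 1-D test function.
def similarity(x):
    return (x - 3.0) ** 2 + 2.0 * np.sin(5.0 * x)

# Stage 1: deterministic global exploration over the capture range.
grid = np.linspace(-10.0, 10.0, 41)
x = grid[np.argmin([similarity(g) for g in grid])]

# Stage 2: local refinement from the best global sample.
step = 0.5
while step > 1e-6:
    for cand in (x - step, x + step):
        if similarity(cand) < similarity(x):
            x = cand
            break
    else:
        step *= 0.5  # no improvement in either direction: shrink the step

print(round(float(x), 3))  # near the global minimum around x = 3.4
```

A purely local search started far from the basin would typically be captured by a nearer local minimum; the global stage is what widens the capture range, mirroring the registration result described above.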

  16. Adaptive Local Realignment of Protein Sequences.

    PubMed

    DeBlasio, Dan; Kececioglu, John

    2018-06-11

    While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.

  17. Calorimetric method for determination of {sup 51}Cr neutrino source activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veretenkin, E. P., E-mail: veretenk@inr.ru; Gavrin, V. N.; Danshin, S. N.

    Experimental study of nonstandard neutrino properties using high-intensity artificial neutrino sources requires the activity of the sources to be determined with high accuracy. In the BEST project, a calorimetric system for measurement of the activity of high-intensity (a few MCi) neutrino sources based on {sup 51}Cr with an accuracy of 0.5–1% is created. In the paper, the main factors affecting the accuracy of determining the neutrino source activity are discussed. The calorimetric system design and the calibration results using a thermal simulator of the source are presented.

  18. Determinants of the accuracy of nursing diagnoses: influence of ready knowledge, knowledge sources, disposition toward critical thinking, and reasoning skills.

    PubMed

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos; van der Schans, Cees

    2010-01-01

    The purpose of this study was to determine how knowledge sources, ready knowledge, and disposition toward critical thinking and reasoning skills influence the accuracy of student nurses' diagnoses. A randomized controlled trial was conducted to determine the influence of knowledge sources. We used the following questionnaires: (a) knowledge inventory, (b) California Critical Thinking Disposition Inventory, and (c) Health Science Reasoning Test (HSRT). The use of knowledge sources had very little influence on the accuracy of nursing diagnoses. Accuracy was significantly related to the analysis domain of the HSRT. Students were unable to operationalize knowledge sources to derive accurate diagnoses and did not effectively use reasoning skills. Copyright 2010 Elsevier Inc. All rights reserved.

  19. Reputation-Based Secure Sensor Localization in Wireless Sensor Networks

    PubMed Central

    He, Jingsha; Xu, Jing; Zhu, Xingye; Zhang, Yuqiang; Zhang, Ting; Fu, Wanqing

    2014-01-01

    Location information of sensor nodes in wireless sensor networks (WSNs) is very important, for it makes information that is collected and reported by the sensor nodes spatially meaningful for applications. Since most current sensor localization schemes rely on location information that is provided by beacon nodes for the regular sensor nodes to locate themselves, the accuracy of localization depends on the accuracy of location information from the beacon nodes. Therefore, the security and reliability of the beacon nodes become critical in the localization of regular sensor nodes. In this paper, we propose a reputation-based security scheme for sensor localization to improve the security and the accuracy of sensor localization in hostile or untrusted environments. In our proposed scheme, the reputation of each beacon node is evaluated based on a reputation evaluation model so that regular sensor nodes can get credible location information from highly reputable beacon nodes to accomplish localization. We also perform a set of simulation experiments to demonstrate the effectiveness of the proposed reputation-based security scheme. And our simulation results show that the proposed security scheme can enhance the security and, hence, improve the accuracy of sensor localization in hostile or untrusted environments. PMID:24982940
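
The effect of reputation weighting on localization can be sketched with a weighted range-residual fit: a beacon with low reputation contributes almost nothing to the objective, so its falsified range barely displaces the estimate. This is our own illustration, not the paper's evaluation model; geometry, weights, and the attack are all invented.

```python
import numpy as np

# Reputation-weighted localization of a regular node from beacon ranges.
beacons = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
reputation = np.array([1.0, 1.0, 1.0, 0.05])   # last beacon has low reputation
node = np.array([20.0, 30.0])                  # true position of the node

ranges = np.linalg.norm(beacons - node, axis=1)
ranges[3] += 40.0                              # compromised beacon lies

# Grid search for the position minimizing reputation-weighted residuals.
xs = np.linspace(0.0, 50.0, 101)
best, best_err = None, np.inf
for x in xs:
    for y in xs:
        d = np.linalg.norm(beacons - np.array([x, y]), axis=1)
        err = np.sum(reputation * (d - ranges) ** 2)
        if err < best_err:
            best, best_err = np.array([x, y]), err

print(best)  # stays close to (20, 30) despite the falsified range
```

With uniform weights the falsified range would drag the fix far off; downweighting the distrusted beacon is the core of the security-to-accuracy link the abstract describes.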

  20. A high resolution liquid xenon imaging telescope for 0.3-10 MeV gamma-ray astrophysics: Construction and initial balloon flights

    NASA Technical Reports Server (NTRS)

    Aprile, Elena

    1994-01-01

    An instrument is described which will provide a direct image of gamma-ray line or continuum sources in the energy range 300 keV to 10 MeV. The use of this instrument to study the celestial distribution of the (exp 26)Al isotope by observing the 1.809 MeV deexcitation gamma-ray line is illustrated. The source location accuracy is 2' or better. The imaging telescope is a liquid xenon time projection chamber coupled with a coded aperture mask (LXe-CAT). This instrument will confirm and extend the COMPTEL observations from the Compton Gamma-Ray Observatory (CGRO) with an improved capability for identifying the actual Galactic source or sources of (exp 26)Al, which are currently not known with certainty. Sources currently under consideration include red giants on the asymptotic giant branch (AGB), novae, Type Ib or Type II supernovae, Wolf-Rayet stars and cosmic rays interacting in molecular clouds. The instrument could also identify a local source of the celestial 1.809 MeV gamma-ray line, such as a recent nearby supernova.

  1. Local Sampling of the Wigner Function at Telecom Wavelength with Loss-Tolerant Detection of Photon Statistics.

    PubMed

    Harder, G; Silberhorn, Ch; Rehacek, J; Hradil, Z; Motka, L; Stoklasa, B; Sánchez-Soto, L L

    2016-04-01

    We report the experimental point-by-point sampling of the Wigner function for nonclassical states created in an ultrafast pulsed type-II parametric down-conversion source. We use a loss-tolerant time-multiplexed detector based on a fiber-optical setup and a pair of photon-number-resolving avalanche photodiodes. By capitalizing on an expedient data-pattern tomography, we assess the properties of the light states with outstanding accuracy. The method allows us to reliably infer the squeezing of genuine two-mode states without any phase reference.

  2. Localization of rainfall and determination its intensity in the lower layers of the troposphere from the measurements of local RF transmitter characteristics

    NASA Astrophysics Data System (ADS)

    Podhorský, Dušan; Fabo, Peter

    2016-12-01

    The article deals with a method of acquiring the temporal and spatial distribution of local precipitation from measurement of the performance characteristics of local sources of high-frequency electromagnetic radiation in the 1-3 GHz frequency range in the lower layers of the troposphere, up to 100 m. The method was experimentally proven by monitoring the GSM G2 base stations of cell phone providers in the frequency range of 920-960 MHz using methods of frequency and spatial diversity reception. A modification of the SART method for localization of precipitation was also proposed. The achieved results allow us to obtain the time course of the intensity of local precipitation in the observed area with a temporal resolution of 10 s. A spatial accuracy of 100 m in localization of precipitation is expected once a network of receivers is built. The acquired data can be used as one of the inputs for meteorological forecasting models, in agriculture and hydrology as a supplementary method to ombrograph stations and weather radar network measurements, in transportation as part of a warning system, and in many other areas.

  3. A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization

    PubMed Central

    Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan

    2017-01-01

    This paper provides a system and method for correction of relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements have an influence on the final accuracy, by attaching a measuring system to the platform, the texture image of platform base bulkhead can be collected in a real-time manner. Through the image registration, the displacement vector of the platform relative to its bulkhead can be calculated to further determine angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements can reduce the coordinate transformation errors and thus improve the localization accuracy. Even a simple kind of method can improve the localization accuracy by 14.3%. PMID:28273845

  4. A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization.

    PubMed

    Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan

    2017-03-04

    This paper provides a system and method for correction of relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements have an influence on the final accuracy, by attaching a measuring system to the platform, the texture image of platform base bulkhead can be collected in a real-time manner. Through the image registration, the displacement vector of the platform relative to its bulkhead can be calculated to further determine angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements can reduce the coordinate transformation errors and thus improve the localization accuracy. Even a simple kind of method can improve the localization accuracy by 14.3%.
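
The displacement-vector step can be illustrated with standard phase correlation, a common image-registration technique. This is our construction; the paper does not specify its registration algorithm, a synthetic texture stands in for the bulkhead image, and the pixel scale and lever arm used for the angle conversion are hypothetical.

```python
import numpy as np

# Recover the platform's displacement relative to the bulkhead by phase
# correlation of two texture images, then convert it to a small angle.
rng = np.random.default_rng(0)
texture = rng.random((64, 64))                   # reference bulkhead texture
moved = np.roll(texture, (3, 5), axis=(0, 1))    # platform shifted 3 px, 5 px

F1, F2 = np.fft.fft2(texture), np.fft.fft2(moved)
cross_power = F2 * np.conj(F1)
cross_power /= np.abs(cross_power) + 1e-12       # keep phase only
corr = np.fft.ifft2(cross_power).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)  # recovered shift: 3 5

# Pixel displacement -> angular displacement, assuming a hypothetical
# 0.1 mm/px image scale and a 200 mm lever arm to the platform axis.
angle_rad = np.arctan2(dx * 0.1, 200.0)
```

The recovered angle would then be decomposed and superposed on the UAV's three attitude angles before the coordinate transformation, which is where the reported 14.3% accuracy gain arises.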

  5. Accuracy of indexing coverage information as reported by serials sources.

    PubMed Central

    Eldredge, J D

    1993-01-01

    This article reports on the accuracy of indexing service coverage information listed in three serials sources: Ulrich's International Periodicals Directory, SERLINE, and The Serials Directory. The titles studied were randomly selected journals that began publication in either 1981 or 1986. Aggregate results reveal that these serials sources perform at 92%, 97%, and 95% levels of accuracy, respectively. When the results are analyzed by specific indexing service and by year, the performance scores range from 80% to 100%. All three serials sources tend to underreport index coverage. The author advances five recommendations for improving index coverage accuracy and four specific proposals for future research. The results suggest that, for the immediate future, librarians should treat index coverage information reported in these three serials sources with some skepticism. PMID:8251971

  6. Matters of accuracy and conventionality: prior accuracy guides children's evaluations of others' actions.

    PubMed

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-03-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clément, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and 4-year-olds were asked to endorse and imitate one of two actors performing an unfamiliar action, one actor who was unconventional but successful and one who was conventional but unsuccessful. These data demonstrated that children preferred endorsing and imitating the unconventional but successful actor. Results suggest that when the accuracy and conventionality of a source are put into conflict, children may give priority to accuracy over conventionality when estimating the source's reliability and, ultimately, when deciding who to trust.

  7. Local residue coupling strategies by neural network for InSAR phase unwrapping

    NASA Astrophysics Data System (ADS)

    Refice, Alberto; Satalino, Giuseppe; Chiaradia, Maria T.

    1997-12-01

    Phase unwrapping is one of the toughest problems in interferometric SAR processing. The main difficulties arise from the presence of point-like error sources, called residues, which occur mainly in close couples due to phase noise. We present an assessment of a local approach to the resolution of these problems by means of a neural network. Using a multi-layer perceptron trained with the back-propagation scheme on a series of simulated phase images, we fashion the best pairing strategies for close residue couples. Results show that good efficiencies and accuracies can be obtained, provided a sufficient number of training examples is supplied. The technique is also tested on real SAR ERS-1/2 tandem interferometric images of the Matera test site, showing a good reduction of the residue density. The better results obtained by the neural network when local criteria are adopted appear justified given the probabilistic nature of the noise process on SAR interferometric phase fields, and allow us to outline a specifically tailored implementation of the neural network approach as a very fast pre-processing step intended to decrease the residue density and give sufficiently clean images to be processed further by more conventional techniques.

  8. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  9. Electrical source localization by LORETA in patients with epilepsy: Confirmation by postoperative MRI

    PubMed Central

    Akdeniz, Gülsüm

    2016-01-01

Background: Few studies have been conducted that have compared electrical source localization (ESL) results obtained by analyzing ictal patterns in scalp electroencephalogram (EEG) with the brain areas that are found to be responsible for seizures using other brain imaging techniques. Additionally, adequate studies have not been performed to confirm the accuracy of ESL methods. Materials and Methods: In this study, ESL was conducted using LORETA (Low Resolution Brain Electromagnetic Tomography) in 9 patients with lesions apparent on magnetic resonance imaging (MRI) and in 6 patients who did not exhibit lesions on their MRIs. EEGs of patients who underwent surgery for epilepsy and had follow-ups for at least 1 year after operations were analyzed for ictal spike, rhythmic, paroxysmal fast, and obscured EEG activities. Epileptogenic zones identified in postoperative MRIs were then compared with the localizations obtained by the LORETA model we employed. Results: We found that brain areas determined via ESL were in concordance with resected brain areas for 13 of the 15 patients evaluated, and those 13 patients were post-operatively determined as being seizure-free. Conclusion: ESL, which is a noninvasive technique, may contribute to the correct delineation of epileptogenic zones in patients who will eventually undergo surgery to treat epilepsy (regardless of neuroimaging status). Moreover, ESL may aid in deciding on the number and localization of intracranial electrodes to be used in patients who are candidates for invasive recording. PMID:27011626

  10. Electrical source localization by LORETA in patients with epilepsy: Confirmation by postoperative MRI.

    PubMed

    Akdeniz, Gülsüm

    2016-01-01

Few studies have been conducted that have compared electrical source localization (ESL) results obtained by analyzing ictal patterns in scalp electroencephalogram (EEG) with the brain areas that are found to be responsible for seizures using other brain imaging techniques. Additionally, adequate studies have not been performed to confirm the accuracy of ESL methods. In this study, ESL was conducted using LORETA (Low Resolution Brain Electromagnetic Tomography) in 9 patients with lesions apparent on magnetic resonance imaging (MRI) and in 6 patients who did not exhibit lesions on their MRIs. EEGs of patients who underwent surgery for epilepsy and had follow-ups for at least 1 year after operations were analyzed for ictal spike, rhythmic, paroxysmal fast, and obscured EEG activities. Epileptogenic zones identified in postoperative MRIs were then compared with the localizations obtained by the LORETA model we employed. We found that brain areas determined via ESL were in concordance with resected brain areas for 13 of the 15 patients evaluated, and those 13 patients were post-operatively determined as being seizure-free. ESL, which is a noninvasive technique, may contribute to the correct delineation of epileptogenic zones in patients who will eventually undergo surgery to treat epilepsy (regardless of neuroimaging status). Moreover, ESL may aid in deciding on the number and localization of intracranial electrodes to be used in patients who are candidates for invasive recording.

  11. Spatiotemporal source tuning filter bank for multiclass EEG based brain computer interfaces.

    PubMed

    Acharya, Soumyadipta; Mollazadeh, Moshen; Murari, Kartikeya; Thakor, Nitish

    2006-01-01

Noninvasive brain-computer interfaces (BCI) allow people to communicate by modulating features of their electroencephalogram (EEG). Spatiotemporal filtering plays a vital role in multi-class, EEG-based BCI. In this study, we used a novel combination of principal component analysis, independent component analysis and dipole source localization to design a spatiotemporal multiple source tuning (SPAMSORT) filter bank, each channel of which was tuned to the activity of an underlying dipole source. Changes in the event-related spectral perturbation (ERSP) were measured and used to train a linear support vector machine to classify between four classes of motor imagery tasks (left hand, right hand, foot and tongue) for one subject. ERSP values were significantly (p<0.01) different across tasks and better (p<0.01) than conventional spatial filtering methods (large Laplacian and common average reference). Classification resulted in an average accuracy of 82.5%. This approach could lead to promising BCI applications such as control of a prosthesis with multiple degrees of freedom.
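As an illustration of just the first stage of such a pipeline, the sketch below derives PCA spatial filters as the leading eigenvectors of the channel covariance on synthetic data; the ICA and dipole-fitting stages of SPAMSORT are omitted, and all data here are made up:

```python
import numpy as np

def pca_spatial_filters(eeg, n_comp):
    """PCA spatial filters for multichannel EEG (channels x samples):
    rows of the returned matrix project onto the directions of largest
    channel-covariance eigenvalue."""
    X = eeg - eeg.mean(axis=1, keepdims=True)     # remove channel means
    cov = X @ X.T / X.shape[1]                    # channel covariance
    vals, vecs = np.linalg.eigh(cov)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_comp]       # keep the top n_comp
    return vecs[:, order].T

# Two strong synthetic sources mixed into 8 channels plus weak noise.
rng = np.random.default_rng(0)
S = rng.standard_normal((2, 2000)) * np.array([[5.0], [3.0]])
A = rng.standard_normal((8, 2))
eeg = A @ S + 0.1 * rng.standard_normal((8, 2000))
W = pca_spatial_filters(eeg, 2)
```

Each row of `W` is a spatial filter; the actual method would further rotate and tune such components toward individual dipole sources.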

  12. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.

    PubMed

    Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C

    2014-02-01

It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high quality prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, and different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
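The abstract specifies the computation closely enough to sketch: gradient magnitudes from 3x3 Prewitt filters, a pixel-wise gradient magnitude similarity map, and standard-deviation pooling. The constant c = 170 and the Prewitt kernels follow the published setup for 8-bit images, while the slicing-based convolution helper is our own:

```python
import numpy as np

def _conv3_valid(img, k):
    """Valid-mode 3x3 filtering via shifted slices (no SciPy needed)."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h, j:j + w]
    return out

def gmsd(ref, dist, c=170.0):
    """Gradient Magnitude Similarity Deviation: standard deviation of the
    pixel-wise gradient-magnitude similarity map."""
    px = np.array([[1.0, 0.0, -1.0]] * 3) / 3.0     # Prewitt, horizontal
    py = px.T                                       # Prewitt, vertical
    g_ref = np.hypot(_conv3_valid(ref, px), _conv3_valid(ref, py))
    g_dis = np.hypot(_conv3_valid(dist, px), _conv3_valid(dist, py))
    gms = (2.0 * g_ref * g_dis + c) / (g_ref ** 2 + g_dis ** 2 + c)
    return gms.std()
```

Identical images give a constant GMS map and hence GMSD = 0; larger values indicate lower predicted quality.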

  13. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor

    PubMed Central

    Rueckauer, Bodo; Delbruck, Tobi

    2016-01-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
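The gyro-based ground truth rests on the classical pinhole relation between camera angular velocity and image-plane motion: under pure rotation the flow is independent of scene depth, so the gyro alone determines the field. A sketch in our own notation (pixel coordinates relative to the principal point; the focal length f in pixels is a hypothetical value):

```python
import numpy as np

def rotational_flow(x, y, omega, f=100.0):
    """Optical flow (u, v) at image point (x, y) induced by pure camera
    rotation omega = (wx, wy, wz) in rad/s about the camera axes.
    Translation contributes nothing here, so depth drops out."""
    wx, wy, wz = omega
    u = (x * y / f) * wx - (f + x ** 2 / f) * wy + y * wz
    v = (f + y ** 2 / f) * wx - (x * y / f) * wy - x * wz
    return u, v
```

At the principal point a pure yaw of wy rad/s gives u = -f*wy, and a pure roll wz produces the expected circular flow u = y*wz, v = -x*wz.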

  14. We Can Have It All: Improved Surveillance Outcomes and Decreased Personnel Costs Associated With Electronic Reportable Disease Surveillance, North Carolina, 2010

    PubMed Central

    DiBiase, Lauren; Fangman, Mary T.; Fleischauer, Aaron T.; Waller, Anna E.; MacDonald, Pia D. M.

    2013-01-01

    Objectives. We assessed the timeliness, accuracy, and cost of a new electronic disease surveillance system at the local health department level. We describe practices associated with lower cost and better surveillance timeliness and accuracy. Methods. Interviews conducted May through August 2010 with local health department (LHD) staff at a simple random sample of 30 of 100 North Carolina counties provided information on surveillance practices and costs; we used surveillance system data to calculate timeliness and accuracy. We identified LHDs with best timeliness and accuracy and used these categories to compare surveillance practices and costs. Results. Local health departments in the top tertiles for surveillance timeliness and accuracy had a lower cost per case reported than LHDs with lower timeliness and accuracy ($71 and $124 per case reported, respectively; P = .03). Best surveillance practices fell into 2 domains: efficient use of the electronic surveillance system and use of surveillance data for local evaluation and program management. Conclusions. Timely and accurate surveillance can be achieved in the setting of restricted funding experienced by many LHDs. Adopting best surveillance practices may improve both efficiency and public health outcomes. PMID:24134385

  15. Multiscale Methods for Nuclear Reactor Analysis

    NASA Astrophysics Data System (ADS)

    Collins, Benjamin S.

The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady state and transient operation. In the research presented here, methods are developed to improve the local solution using high order methods with boundary conditions from a low order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques that use diffusion theory and pin power reconstruction (PPR). Two different multiscale methods were developed and analyzed: the post-refinement multiscale method and the embedded multiscale method. The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is "post-refinement" and thus has no impact on the global solution. The embedded multiscale method allows the local solver to change the global solution to provide an improved global and local solution. The post-refinement multiscale method is assessed using three core designs. When the local solution has more energy groups, the fixed source method has some difficulties near the interface; however, the albedo method works well for all cases. In order to remedy the issue with boundary condition errors for the fixed source method, a buffer region is used to act as a filter, which decreases the sensitivity of the solution to the boundary condition. Both the albedo and fixed source methods benefit from the use of a buffer region. Unlike the post-refinement method, the embedded multiscale method alters the global solution. The ability to change the global solution allows for refinement in areas where the errors in the few group nodal diffusion are typically large.
The embedded method is shown to improve the global solution when it is applied to a MOX/LEU assembly interface, the fuel/reflector interface, and assemblies where control rods are inserted. The embedded method also allows for multiple solution levels to be applied in a single calculation. The addition of intermediate levels to the solution improves the accuracy of the method. Both multiscale methods considered here have benefits and drawbacks, but both can provide improvements over the current PPR methodology.

  16. A line-source method for aligning on-board and other pinhole SPECT systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-15

Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. 
When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
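The step of recovering each line projection's angle α and offset ρ can be illustrated with a Hough accumulator, a discrete relative of the Radon transform used in the paper (the toy binary image and the parameterization below are ours):

```python
import numpy as np

def hough_line(img, n_theta=180):
    """Angle (degrees) and offset (pixels) of the dominant line in a
    binary image, found as the peak of a Hough accumulator over
    rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(img)
    diag = int(np.ceil(np.hypot(*img.shape)))       # max possible |rho|
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for t_idx in range(n_theta):
        t = np.deg2rad(t_idx)
        rho = np.rint(xs * np.cos(t) + ys * np.sin(t)).astype(int)
        np.add.at(acc[t_idx], rho + diag, 1)        # one vote per pixel
    t_best, r_best = np.unravel_index(acc.argmax(), acc.shape)
    return t_best, r_best - diag

# A vertical line x = 20 should be detected at angle 0 deg, offset 20 px.
img = np.zeros((64, 64), dtype=bool)
img[:, 20] = True
```

In the actual method, the (α, ρ) pairs of several such line projections feed the nonlinear least-squares fit of the alignment parameters.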

  17. A line-source method for aligning on-board and other pinhole SPECT systems

    PubMed Central

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-01-01

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. 
When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537

  18. A line-source method for aligning on-board and other pinhole SPECT systems.

    PubMed

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-01

    In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system-to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)-is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. 
When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.

  19. Conventional and reciprocal approaches to the inverse dipole localization problem for N(20)-P (20) somatosensory evoked potentials.

    PubMed

    Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre

    2013-01-01

The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are then obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.
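As a deliberately simplified stand-in for the study's setup, the sketch below localizes a single dipole by exhaustive grid search, fitting the moment by least squares at each candidate position. It uses an infinite homogeneous conductor instead of a realistic BEM head model and a coarse grid instead of simplex minimization, and every numeric value is invented:

```python
import numpy as np

def dipole_potential(electrodes, r_d, p, sigma=0.33):
    """Potential at electrode positions of a current dipole with moment p
    at r_d, in an infinite homogeneous conductor of conductivity sigma
    (simplified forward model, not the paper's realistic head model)."""
    d = electrodes - r_d
    return (d @ p) / (4 * np.pi * sigma * np.linalg.norm(d, axis=1) ** 3)

rng = np.random.default_rng(1)
electrodes = rng.uniform(-0.1, 0.1, (32, 3)) + np.array([0.0, 0.0, 0.1])
true_pos = np.array([0.01, 0.02, 0.03])                 # meters
v = dipole_potential(electrodes, true_pos, np.array([0.0, 0.0, 1e-8]))

# Single-dipole inverse: grid search over positions, with a least-squares
# moment fit at each candidate (a crude stand-in for simplex minimization).
grid = np.linspace(-0.04, 0.04, 9)
best_err, best_pos = np.inf, None
for x in grid:
    for y in grid:
        for z in grid:
            cand = np.array([x, y, z])
            L = np.column_stack([dipole_potential(electrodes, cand, e)
                                 for e in np.eye(3)])   # unit-moment lead field
            m, *_ = np.linalg.lstsq(L, v, rcond=None)
            err = np.linalg.norm(L @ m - v)
            if err < best_err:
                best_err, best_pos = err, cand
```

Because the moment enters linearly, only the three position coordinates need a nonlinear search, which is exactly why simplex minimization over position suffices in the study.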

  20. An information-theoretic approach to designing the plane spacing for multifocal plane microscopy

    PubMed Central

    Tahmasbi, Amir; Ram, Sripad; Chao, Jerry; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

    2015-01-01

    Multifocal plane microscopy (MUM) is a 3D imaging modality which enables the localization and tracking of single molecules at high spatial and temporal resolution by simultaneously imaging distinct focal planes within the sample. MUM overcomes the depth discrimination problem of conventional microscopy and allows high accuracy localization of a single molecule in 3D along the z-axis. An important question in the design of MUM experiments concerns the appropriate number of focal planes and their spacings to achieve the best possible 3D localization accuracy along the z-axis. Ideally, it is desired to obtain a 3D localization accuracy that is uniform over a large depth and has small numerical values, which guarantee that the single molecule is continuously detectable. Here, we address this concern by developing a plane spacing design strategy based on the Fisher information. In particular, we analyze the Fisher information matrix for the 3D localization problem along the z-axis and propose spacing scenarios termed the strong coupling and the weak coupling spacings, which provide appropriate 3D localization accuracies. Using these spacing scenarios, we investigate the detectability of the single molecule along the z-axis and study the effect of changing the number of focal planes on the 3D localization accuracy. We further review a software module we recently introduced, the MUMDesignTool, that helps to design the plane spacings for a MUM setup. PMID:26113764
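The design question can be illustrated with a toy one-dimensional model: a Gaussian PSF whose width grows with defocus, pixelated Poisson counts, and the per-plane Fisher information about the depth z. An in-focus plane contributes no depth information (the depth discrimination problem noted above), which is why the plane offsets matter; all parameters below are invented and the paper's full 3D PSF model is not reproduced:

```python
import numpy as np

def plane_info(z, z_plane, n_photons=500.0, s0=1.0, d=1.0, eps=1e-4):
    """Fisher information about emitter depth z contributed by one focal
    plane at z_plane: sum over pixels of (d mu/dz)^2 / mu for independent
    Poisson counts mu, with a toy defocus-dependent Gaussian PSF width."""
    x = np.linspace(-6.0, 6.0, 61)              # pixel centers
    dx = x[1] - x[0]

    def mu(zz):                                 # expected counts per pixel
        s = s0 * np.sqrt(1.0 + ((zz - z_plane) / d) ** 2)
        return n_photons * dx * np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

    dmu = (mu(z + eps) - mu(z - eps)) / (2 * eps)   # numerical d(mu)/dz
    return np.sum(dmu ** 2 / mu(z))
```

Because the planes are statistically independent, the total information is the sum of `plane_info` over planes, and the spacings can be chosen to keep that sum large and uniform over the depth range of interest, which is the design criterion described above.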

  1. Comparative study of shear wave-based elastography techniques in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Zvietcovich, Fernando; Rolland, Jannick P.; Yao, Jianing; Meemon, Panomsak; Parker, Kevin J.

    2017-03-01

We compare five optical coherence elastography techniques able to estimate the shear speed of waves generated by one and two sources of excitation. The first two techniques make use of one piezoelectric actuator in order to produce a continuous shear wave propagation or a tone-burst propagation (TBP) of 400 Hz over a gelatin tissue-mimicking phantom. The remaining techniques utilize a second actuator located on the opposite side of the region of interest in order to create three types of interference patterns: crawling waves, swept crawling waves, and standing waves, depending on the selection of the frequency difference between the two actuators. We evaluated accuracy, contrast-to-noise ratio, resolution, and acquisition time for each technique during experiments. Numerical simulations were also performed in order to support the experimental findings. Results suggest that in the presence of strong internal reflections, single-source methods are more accurate and less variable when compared to the two-actuator methods. In particular, TBP reports the best performance with an accuracy error <4.1%. Finally, the TBP was tested in a fresh chicken tibialis anterior muscle with a localized thermally ablated lesion in order to evaluate its performance in biological tissue.
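The quantity all five techniques target can be illustrated with a minimal single-frequency phase-gradient estimator: the phase lag of the vibration between two points a known distance apart gives the shear-wave speed as c = 2πf·Δx/Δφ. The lock-in demodulation and all signal parameters below are our own toy construction, not the paper's processing chain:

```python
import numpy as np

def lockin_phase(sig, freq, fs):
    """Phase of the freq component of sig, by lock-in demodulation."""
    t = np.arange(len(sig)) / fs
    return np.angle(np.sum(sig * np.exp(-2j * np.pi * freq * t)))

def shear_speed(sig_a, sig_b, dx, freq, fs):
    """Shear-wave speed from the phase lag between two positions dx apart:
    c = 2*pi*f*dx / dphi (dphi wrapped to avoid 2*pi ambiguity)."""
    dphi = lockin_phase(sig_a, freq, fs) - lockin_phase(sig_b, freq, fs)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
    return 2 * np.pi * freq * dx / dphi

# Synthetic 400 Hz shear wave traveling at 3 m/s, sampled at two points.
fs, f, c, dx = 10000.0, 400.0, 3.0, 0.001
t = np.arange(1000) / fs
sig_a = np.cos(2 * np.pi * f * t)
sig_b = np.cos(2 * np.pi * f * (t - dx / c))
```

Note the wrap limits dx to less than half a wavelength; the interference-pattern methods compared above exist partly to relax such constraints.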

  2. Nucleon matrix elements from lattice QCD with all-mode-averaging and a domain-decomposed solver: An exploratory study

    NASA Astrophysics Data System (ADS)

    von Hippel, Georg; Rae, Thomas D.; Shintani, Eigo; Wittig, Hartmut

    2017-01-01

    We study the performance of all-mode-averaging (AMA) when used in conjunction with a locally deflated SAP-preconditioned solver, determining how to optimize the local block sizes and number of deflation fields in order to minimize the computational cost for a given level of overall statistical accuracy. We find that AMA enables a reduction of the statistical error on nucleon charges by a factor of around two at the same cost when compared to the standard method. As a demonstration, we compute the axial, scalar and tensor charges of the nucleon in Nf = 2 lattice QCD with non-perturbatively O(a)-improved Wilson quarks, using O(10,000) measurements to pursue the signal out to source-sink separations of ts ∼ 1.5 fm. Our results suggest that the axial charge is suffering from a significant amount (5-10%) of excited-state contamination at source-sink separations of up to ts ∼ 1.2 fm, whereas the excited-state contamination in the scalar and tensor charges seems to be small.

  3. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
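Once a radio map exists, the online matching stage is straightforward. The sketch below performs weighted k-nearest-neighbor localization against a toy radio map built from a log-distance path-loss model, which stands in for the crowdsourced fingerprints; all names and constants are invented:

```python
import numpy as np

def knn_locate(radio_map, positions, rssi, k=3):
    """Weighted k-nearest-neighbor localization: average the positions of
    the k fingerprints closest to the observed RSSI vector, weighted by
    inverse distance in signal space."""
    dist = np.linalg.norm(radio_map - rssi, axis=1)
    idx = np.argsort(dist)[:k]
    w = 1.0 / (dist[idx] + 1e-9)
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Toy radio map: one fingerprint per grid point, three access points,
# RSSI from a simple log-distance path-loss model (dBm).
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
def fingerprint(p):
    d = np.linalg.norm(aps - p, axis=1) + 0.1
    return -40.0 - 20.0 * np.log10(d)

pts = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)
rmap = np.array([fingerprint(p) for p in pts])
est = knn_locate(rmap, pts, fingerprint(np.array([4.3, 6.2])))
```

The crowdsourced construction described above is what supplies `rmap` and `pts` without a dedicated site survey.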

  4. SHIELD: FITGALAXY -- A Software Package for Automatic Aperture Photometry of Extended Sources

    NASA Astrophysics Data System (ADS)

    Marshall, Melissa

    2013-01-01

Determining the parameters of extended sources, such as galaxies, is a common but time-consuming task. Finding a photometric aperture that encompasses the majority of the flux of a source and identifying and excluding contaminating objects is often done by hand - a lengthy and difficult-to-reproduce process. To make extracting information from large data sets both quick and repeatable, I have developed a program called FITGALAXY, written in IDL. This program uses minimal user input to automatically fit an aperture to, and perform aperture and surface photometry on, an extended source. FITGALAXY also automatically traces the outlines of surface brightness thresholds and creates surface brightness profiles, which can then be used to determine the radial properties of a source. Finally, the program performs automatic masking of contaminating sources. Masks and apertures can be applied to multiple images (regardless of the WCS solution or plate scale) in order to accurately measure the same source at different wavelengths. I present the fluxes, as measured by the program, of a selection of galaxies from the Local Volume Legacy Survey. I then compare these results with the fluxes given by Dale et al. (2009) in order to assess the accuracy of FITGALAXY.
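The core measurement FITGALAXY automates reduces to a few lines: sum the pixels inside an aperture while excluding masked contaminants. The sketch below uses a fixed circular aperture (the program itself fits the aperture shape automatically, works in IDL rather than Python, and handles WCS; the helper and data here are purely illustrative):

```python
import numpy as np

def aperture_flux(img, cx, cy, r, mask=None):
    """Sum of pixel values inside a circular aperture of radius r centered
    at (cx, cy), optionally excluding pixels flagged True in mask
    (e.g. contaminating foreground sources)."""
    yy, xx = np.indices(img.shape)
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    if mask is not None:
        inside &= ~mask
    return img[inside].sum()

# A flat 8x8 "galaxy" of unit surface brightness, fully inside the aperture.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
```

Applying the same aperture and mask to images at other wavelengths is what makes the multi-band fluxes directly comparable.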

  5. Initial Investigation of preclinical integrated SPECT and MR imaging.

    PubMed

    Hamamura, Mark J; Ha, Seunghoon; Roeck, Werner W; Wagenaar, Douglas J; Meier, Dirk; Patt, Bradley E; Nalcioglu, Orhan

    2010-02-01

    Single-photon emission computed tomography (SPECT) can provide specific functional information while magnetic resonance imaging (MRI) can provide high-spatial resolution anatomical information as well as complementary functional information. In this study, we utilized a dual modality SPECT/MRI (MRSPECT) system to investigate the integration of SPECT and MRI for improved image accuracy. The MRSPECT system consisted of a cadmium-zinc-telluride (CZT) nuclear radiation detector interfaced with a specialized radiofrequency (RF) coil that was placed within a whole-body 4 T MRI system. The importance of proper corrections for non-uniform detector sensitivity and Lorentz force effects was demonstrated. MRI data were utilized for attenuation correction (AC) of the nuclear projection data and optimized Wiener filtering of the SPECT reconstruction for improved image accuracy. Finally, simultaneous dual-imaging of a nude mouse was performed to demonstrate the utility of co-registration for accurate localization of a radioactive source.

  6. Initial Investigation of Preclinical Integrated SPECT and MR Imaging

    PubMed Central

    Hamamura, Mark J.; Ha, Seunghoon; Roeck, Werner W.; Wagenaar, Douglas J.; Meier, Dirk; Patt, Bradley E.; Nalcioglu, Orhan

    2014-01-01

    Single-photon emission computed tomography (SPECT) can provide specific functional information while magnetic resonance imaging (MRI) can provide high-spatial resolution anatomical information as well as complementary functional information. In this study, we utilized a dual modality SPECT/MRI (MRSPECT) system to investigate the integration of SPECT and MRI for improved image accuracy. The MRSPECT system consisted of a cadmium-zinc-telluride (CZT) nuclear radiation detector interfaced with a specialized radiofrequency (RF) coil that was placed within a whole-body 4 T MRI system. The importance of proper corrections for non-uniform detector sensitivity and Lorentz force effects was demonstrated. MRI data were utilized for attenuation correction (AC) of the nuclear projection data and optimized Wiener filtering of the SPECT reconstruction for improved image accuracy. Finally, simultaneous dual-imaging of a nude mouse was performed to demonstrate the utility of co-registration for accurate localization of a radioactive source. PMID:20082527

  7. Laser Micromachining Fabrication of THz Components

    NASA Technical Reports Server (NTRS)

    Drouet d'Aubigny, C.; Walker, C.; Jones, B.; Groppi, C.; Papapolymerou, J.; Tavenier, C.

    2001-01-01

    Laser micromachining techniques can be used to fabricate high-quality waveguide structures and quasi-optical components to micrometer accuracies. Successful GHz designs can be directly scaled to THz frequencies. We expect this promising technology to allow the construction of the first fully integrated THz heterodyne imaging arrays. At the University of Arizona, construction of the first laser micromachining system designed for THz waveguide component fabrication has been completed. Once tested and characterized, our system will be used to construct prototype THz 1x4 focal plane mixer arrays, magic tees, AR-coated silicon lenses, local oscillator source phase gratings, filters and more. Our system can micro-machine structures to a few microns' accuracy and up to 6 inches across in a short time. This paper discusses the design and performance of our micromachining system, and illustrates the type, range and performance of components this exciting new technology will make accessible to the THz community.

  8. Impact of Hearing Aid Technology on Outcomes in Daily Life III: Localization.

    PubMed

    Johnson, Jani A; Xu, Jingjing; Cox, Robyn M

    Compared to basic-feature hearing aids, premium-feature hearing aids have more advanced technologies and sophisticated features. The objective of this study was to explore the difference between premium-feature and basic-feature hearing aids in horizontal sound localization in both laboratory and daily-life environments. We hypothesized that premium-feature hearing aids would yield better localization performance than basic-feature hearing aids. Exemplars of premium-feature and basic-feature hearing aids from two major manufacturers were evaluated. Forty-five older adults (mean age 70.3 years) with essentially symmetrical mild to moderate sensorineural hearing loss were bilaterally fitted with each of the four pairs of hearing aids. Each pair of hearing aids was worn during a 4-week field trial and then evaluated using laboratory localization tests and a standardized questionnaire. Laboratory localization tests were conducted in a sound-treated room with a 360°, 24-loudspeaker array. Test stimuli were high-frequency and low-frequency filtered short sentences. The localization test in quiet was designed to assess the accuracy of front/back localization, while the localization test in noise was designed to assess the accuracy of locating sound sources throughout a 360° azimuth in the horizontal plane. Laboratory data showed that unaided localization was not significantly different from aided localization when all hearing aids were combined. Questionnaire data showed that aided localization was significantly better than unaided localization in everyday situations. Regarding the difference between premium-feature and basic-feature hearing aids, laboratory data showed that, overall, the premium-feature hearing aids yielded more accurate localization than the basic-feature hearing aids when high-frequency stimuli were used and the listening environment was quiet. Otherwise, the premium-feature and basic-feature hearing aids yielded essentially the same performance in the other laboratory tests and in daily life, and the findings were consistent for both manufacturers. In summary, laboratory tests for two of the six major manufacturers showed that premium-feature hearing aids yielded better localization performance than basic-feature hearing aids in one of four laboratory conditions, and there was no difference between the two feature levels in self-reported everyday localization. Effectiveness research with different hearing aid technologies is necessary, and more research with other manufacturers' products is needed. Furthermore, these results confirm previous observations that research findings obtained in laboratory conditions might not translate to everyday life.

  9. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique

    PubMed Central

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J.

    2017-01-01

    The increased potential and effectiveness of Real-time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistics and security applications. The research aims to develop an analytical method to customize UWB-based RTLS, in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established in a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system's configuration and the location sensors' relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of the RTLS' localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that Angle of Arrival localization in a 3D indoor space, applying the simple angular-based localization algorithm and the correction vector, improves localization accuracy and precision to the point where the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance the localization precision. PMID:28125056

  10. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique.

    PubMed

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J

    2017-01-25

    The increased potential and effectiveness of Real-time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistics and security applications. The research aims to develop an analytical method to customize UWB-based RTLS, in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established in a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system's configuration and the location sensors' relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of the RTLS' localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that Angle of Arrival localization in a 3D indoor space, applying the simple angular-based localization algorithm and the correction vector, improves localization accuracy and precision to the point where the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance the localization precision.
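    The two records above describe angular-based 3D localization from Angle-of-Arrival measurements without giving the algorithm itself. A common textbook formulation, sketched here purely as an illustration (the sensor geometry, function name, and least-squares criterion are assumptions, not the authors' method), estimates the point closest to all measured bearing rays:

```python
import numpy as np

def aoa_locate(sensors, azimuths, elevations):
    """Least-squares intersection of 3D bearing rays from AoA measurements.

    Each sensor at position p measures a unit bearing u toward the target;
    the estimate minimizes the sum of squared perpendicular distances to
    all rays: solve (sum_i P_i) x = sum_i P_i p_i with P = I - u u^T."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, az, el in zip(sensors, azimuths, elevations):
        u = np.array([np.cos(el) * np.cos(az),
                      np.cos(el) * np.sin(az),
                      np.sin(el)])             # bearing unit vector
        P = np.eye(3) - np.outer(u, u)         # projector orthogonal to u
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Demo: three sensors with exact bearings toward a known target.
target = np.array([2.0, 3.0, 1.0])
sensors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
az, el = [], []
for p in sensors:
    d = target - p
    az.append(np.arctan2(d[1], d[0]))
    el.append(np.arcsin(d[2] / np.linalg.norm(d)))
print(aoa_locate(sensors, az, el))  # recovers [2, 3, 1]
```

    With noisy bearings the same solve returns the least-squares compromise point, which is why sensor deployment (ray geometry) drives the precision map discussed above.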

  11. Clinical Study of Orthogonal-View Phase-Matched Digital Tomosynthesis for Lung Tumor Localization.

    PubMed

    Zhang, You; Ren, Lei; Vergalasova, Irina; Yin, Fang-Fang

    2017-01-01

    Compared to cone-beam computed tomography, digital tomosynthesis imaging has the benefits of shorter scanning time, less imaging dose, and better mechanical clearance for tumor localization in radiation therapy. However, for lung tumors, the localization accuracy of the conventional digital tomosynthesis technique is affected by the lack of depth information and the existence of lung tumor motion. This study investigates the clinical feasibility of using an orthogonal-view phase-matched digital tomosynthesis technique to improve the accuracy of lung tumor localization. The proposed orthogonal-view phase-matched digital tomosynthesis technique benefits from 2 major features: (1) it acquires orthogonal-view projections to improve the depth information in reconstructed digital tomosynthesis images and (2) it applies respiratory phase-matching to incorporate patient motion information into the synthesized reference digital tomosynthesis sets, which helps to improve the localization accuracy of moving lung tumors. A retrospective study enrolling 14 patients was performed to evaluate the accuracy of the orthogonal-view phase-matched digital tomosynthesis technique. Phantom studies were also performed using an anthropomorphic phantom to investigate the feasibility of using intratreatment aggregated kV and beam's-eye-view cine MV projections for orthogonal-view phase-matched digital tomosynthesis imaging. The localization accuracy of the orthogonal-view phase-matched digital tomosynthesis technique was compared to that of the single-view digital tomosynthesis techniques and the digital tomosynthesis techniques without phase-matching. The orthogonal-view phase-matched digital tomosynthesis technique outperforms the other digital tomosynthesis techniques in tumor localization accuracy for both the patient study and the phantom study. 
For the patient study, the orthogonal-view phase-matched digital tomosynthesis technique localizes the tumor to an average (± standard deviation) error of 1.8 (0.7) mm for a 30° total scan angle. For the phantom study using aggregated kV-MV projections, the orthogonal-view phase-matched digital tomosynthesis localizes the tumor to an average error within 1 mm for varying magnitudes of scan angles. The pilot clinical study shows that the orthogonal-view phase-matched digital tomosynthesis technique enables fast and accurate localization of moving lung tumors.

  12. Axial Colocalization of Single Molecules with Nanometer Accuracy Using Metal-Induced Energy Transfer.

    PubMed

    Isbaner, Sebastian; Karedla, Narain; Kaminska, Izabela; Ruhlandt, Daja; Raab, Mario; Bohlen, Johann; Chizhik, Alexey; Gregor, Ingo; Tinnefeld, Philip; Enderlein, Jörg; Tsukanov, Roman

    2018-04-11

    Single-molecule localization based super-resolution microscopy has revolutionized optical microscopy and routinely allows for resolving structural details down to a few nanometers. However, there exists a rather large discrepancy between lateral and axial localization accuracy, the latter typically three to five times worse than the former. Here, we use single-molecule metal-induced energy transfer (smMIET) to localize single molecules along the optical axis, and to measure their axial distance with an accuracy of 5 nm. smMIET relies only on fluorescence lifetime measurements and does not require additional complex optical setups.

  13. Matters of Accuracy and Conventionality: Prior Accuracy Guides Children's Evaluations of Others' Actions

    ERIC Educational Resources Information Center

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-01-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…

  14. Weight Multispectral Reconstruction Strategy for Enhanced Reconstruction Accuracy and Stability With Cerenkov Luminescence Tomography.

    PubMed

    Hongbo Guo; Xiaowei He; Muhan Liu; Zeyu Zhang; Zhenhua Hu; Jie Tian

    2017-06-01

    Cerenkov luminescence tomography (CLT) provides a novel technique for 3-D noninvasive detection of radiopharmaceuticals in living subjects. However, because of the severe scattering of Cerenkov light, the reconstruction accuracy and stability of CLT are still unsatisfactory. In this paper, a modified weight multispectral CLT (wmCLT) reconstruction strategy was developed, which splits the Cerenkov radiation spectrum into several sub-spectral bands and weights the sub-spectral results to obtain the final result. To better evaluate the wmCLT reconstruction strategy in terms of accuracy, stability, and practicability, several numerical simulation experiments and in vivo experiments were conducted, and the results obtained were compared with the traditional multispectral CLT (mCLT) and hybrid-spectral CLT (hCLT) reconstruction strategies. The numerical simulation results indicated that the wmCLT strategy significantly improved the accuracy of Cerenkov source localization and intensity quantitation and exhibited good stability in suppressing noise. The comparison of results from the different in vivo experiments further indicated significant improvement of the wmCLT strategy in terms of the shape recovery of the bladder and the spatial resolution of imaging xenograft tumors. Overall, the strategy reported here will facilitate the development of nuclear and optical molecular tomography in theoretical studies.

  15. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling the acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it moves behind the measurement microphones. Practical limitations of this method are discussed.
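    The abstract describes triangulation from four microphones on three mutually orthogonal axes, but not the inversion itself. The sketch below is one standard closed-form way to invert such a geometry from time differences of arrival; the spacing, speed of sound, and source position are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s (illustrative)

def tdoa_locate(taus, a):
    """Closed-form 3D localization for a reference microphone at the origin
    and three microphones at distance a along the +x, +y, +z axes.

    taus: time differences of arrival (axis mic minus origin mic), seconds.
    Writing each coordinate as alpha + beta * r0, where r0 is the source
    range to the origin mic, reduces the problem to a quadratic in r0."""
    d = C * np.asarray(taus)                  # range differences r_i - r_0
    alpha = (a * a - d * d) / (2.0 * a)
    beta = -d / a
    # r0^2 = sum_i (alpha_i + beta_i * r0)^2  ->  quadratic in r0
    coeffs = [beta @ beta - 1.0, 2.0 * (alpha @ beta), alpha @ alpha]
    r0 = max(r for r in np.roots(coeffs).real if r > 0)
    return alpha + beta * r0

# Demo: synthesize exact delays for a known source, then invert them.
a = 0.2                                       # mic spacing, m
mics = np.array([[a, 0, 0], [0, a, 0], [0, 0, a]])
src = np.array([1.0, 2.0, 0.5])
taus = (np.linalg.norm(mics - src, axis=1) - np.linalg.norm(src)) / C
print(tdoa_locate(taus, a))                   # recovers [1.0, 2.0, 0.5]
```

    Real measurements would first estimate the delays by cross-correlation of de-noised signals; this sketch only covers the geometric step.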

  16. The effect of transponder motion on the accuracy of the Calypso Electromagnetic localization system.

    PubMed

    Murphy, Martin J; Eidens, Richard; Vertatschitsch, Edward; Wright, J Nelson

    2008-09-01

    To determine position- and velocity-dependent effects on the overall accuracy of the Calypso Electromagnetic localization system, under conditions that emulate transponder motion during normal free breathing. Three localization transponders were mounted on a remote-controlled turntable that could move the transponders along a circular trajectory at speeds up to 3 cm/s. A stationary calibration established the coordinates of multiple points on each transponder's circular path. Position measurements taken while the transponders were in motion at a constant speed were then compared with the stationary coordinates. No statistically significant changes in the transponder positions in (x,y,z) were detected when the transponders were in motion. The accuracy of the localization system is unaffected by transponder motion.

  17. Development, testing, and applications of site-specific tsunami inundation models for real-time forecasting

    NASA Astrophysics Data System (ADS)

    Tang, L.; Titov, V. V.; Chamberlin, C. D.

    2009-12-01

    The study describes the development, testing and applications of site-specific tsunami inundation models (forecast models) for use in NOAA's tsunami forecast and warning system. The model development process includes sensitivity studies of tsunami wave characteristics in the nearshore and inundation, for a range of model grid setups, resolutions and parameters. To demonstrate the process, four forecast models in Hawaii, at Hilo, Kahului, Honolulu, and Nawiliwili, are described. The models were validated with fourteen historical tsunamis and compared with numerical results from reference inundation models of higher resolution. The accuracy of the modeled maximum wave height is greater than 80% when the observation is greater than 0.5 m; when the observation is below 0.5 m the error is less than 0.3 m. The error of the modeled arrival time of the first peak is within 3% of the travel time. The developed forecast models were further applied to hazard assessment from simulated magnitude 7.5, 8.2, 8.7 and 9.3 tsunamis based on subduction zone earthquakes in the Pacific. The tsunami hazard assessment study indicates that use of a seismic magnitude alone for a tsunami source assessment is inadequate to achieve such accuracy for tsunami amplitude forecasts. The forecast models apply local bathymetric and topographic information, and utilize dynamic boundary conditions from the tsunami source function database, to provide site- and event-specific coastal predictions. Only by combining a Deep-ocean Assessment and Reporting of Tsunamis (DART)-constrained tsunami magnitude with site-specific high-resolution models can the forecasts completely cover the evolution of earthquake-generated tsunami waves: generation, deep ocean propagation, and coastal inundation. Wavelet analysis of the tsunami waves suggests the coastal tsunami frequency responses at different sites are dominated by the local bathymetry, yet they can be partially related to the locations of the tsunami sources. 
The study also demonstrates the nonlinearity between offshore and nearshore maximum wave amplitudes.

  18. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Xu, Z.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Estimating remotely sensed evapotranspiration (RS_ET) with quantified certainty is necessary but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and the validation data, such as the water-balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and landuse types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from the EC and LAS, respectively, using the footprint model over three typical landscapes. 
    Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also consider the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC systems and 4 LASs were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including the empirical statistical model, the one-source and two-source models, the Penman-Monteith equation based model, the Priestley-Taylor equation based model, and the complementary relationship based model, were used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.

  19. Local staging and assessment of colon cancer with 1.5-T magnetic resonance imaging

    PubMed Central

    Blake, Helena; Jeyadevan, Nelesh; Abulafi, Muti; Swift, Ian; Toomey, Paul; Brown, Gina

    2016-01-01

    Objective: The aim of this study was to assess the accuracy of 1.5-T MRI in the pre-operative local T and N staging of colon cancer and identification of extramural vascular invasion (EMVI). Methods: Between 2010 and 2012, 60 patients with adenocarcinoma of the colon were prospectively recruited at 2 centres. 55 patients were included for final analysis. Patients received pre-operative 1.5-T MRI with high-resolution T2 weighted, gadolinium-enhanced T1 weighted and diffusion-weighted images. These were blindly assessed by two expert radiologists. Accuracy of the T-stage, N-stage and EMVI assessment was evaluated using post-operative histology as the gold standard. Results: Results are reported for two readers. Identification of T3 disease demonstrated an accuracy of 71% and 51%, sensitivity of 74% and 42% and specificity of 74% and 83%. Identification of N1 disease demonstrated an accuracy of 57% for both readers, sensitivity of 26% and 35% and specificity of 81% and 74%. Identification of EMVI demonstrated an accuracy of 74% and 69%, sensitivity 63% and 26% and specificity 80% and 91%. Conclusion: 1.5-T MRI achieved a moderate accuracy in the local evaluation of colon cancer, but cannot be recommended to replace CT on the basis of this study. Advances in knowledge: This study confirms that MRI is a viable alternative to CT for the local assessment of colon cancer, but this study does not reproduce the very high accuracy reported in the only other study to assess the accuracy of MRI in colon cancer staging. PMID:27226219

  20. The application of micro-vacuo-certo-contacting ophthalmophanto in X-ray radiosurgery for tumors in an eyeball.

    PubMed

    Li, Shuying; Wang, Yunyan; Hu, Likuan; Liang, Yingchun; Cai, Jing

    2014-11-01

    Large errors in routine localization of eyeball tumors have restricted the application of X-ray radiosurgery, simply because the eyeball can rotate. To localize the tumor site accurately, the micro-vacuo-certo-contacting ophthalmophanto (MVCCOP) method was used, and the outcome of patients with tumors in the eyeball was evaluated. In this study, computed tomography (CT) localization accuracy was measured by repeated CT scans using the MVCCOP to fix the eyeball during radiosurgery, and tumor outcome and patient survival were evaluated at follow-up. The results indicated that the accuracy of CT localization with the Brown-Roberts-Wells (BRW) head ring was 0.65 mm, with a maximum error of 1.09 mm. The accuracy of target localization of tumors in the eyeball using the MVCCOP was 0.87 mm on average, with a maximum error of 1.19 mm. The errors of eyeball fixation were 0.84 mm on average and 1.17 mm at maximum. The total accuracy was 1.34 mm, and the 95% confidence accuracy was 2.09 mm. Clinical application of this method in 14 tumor patients showed satisfactory results, and all of the tumors showed clear rims. The size of ten retinoblastomas decreased significantly. The local control interval of the tumors was 6 ∼ 24 months (median 10.5 months), and the survival of ten patients was 7 ∼ 30 months (median 16.5 months). The tumors remained stable or shrank in the other four patients, with angioma and melanoma. In conclusion, the MVCCOP is suitable and dependable for X-ray radiosurgery of eyeball tumors. Tumor control and patient survival are satisfactory, and this method can effectively postpone or avoid extirpation of the eyeball.

  1. The effects of aging on ERP correlates of source memory retrieval for self-referential information.

    PubMed

    Dulas, Michael R; Newsome, Rachel N; Duarte, Audrey

    2011-03-04

    Numerous behavioral studies have suggested that normal aging negatively affects source memory accuracy for various kinds of associations. Neuroimaging evidence suggests that less efficient retrieval processing (temporally delayed and attenuated) may contribute to these impairments. Previous aging studies have not compared source memory accuracy and corresponding neural activity for different kinds of source details; namely, those that have been encoded via a more or less effective strategy. Thus, it is not yet known whether encoding source details in a self-referential manner, a strategy suggested to promote successful memory in the young and old, may enhance source memory accuracy and reduce the commonly observed age-related changes in neural activity associated with source memory retrieval. Here, we investigated these issues by using event-related potentials (ERPs) to measure the effects of aging on the neural correlates of successful source memory retrieval ("old-new effects") for objects encoded either self-referentially or self-externally. Behavioral results showed that both young and older adults demonstrated better source memory accuracy for objects encoded self-referentially. ERP results showed that old-new effects onset earlier for self-referentially encoded items in both groups and that age-related differences in the onset latency of these effects were reduced for self-referentially, compared to self-externally, encoded items. These results suggest that the implementation of an effective encoding strategy, like self-referential processing, may lead to more efficient retrieval, which in turn may improve source memory accuracy in both young and older adults. Published by Elsevier B.V.

  2. Improved localization accuracy in stochastic super-resolution fluorescence microscopy by K-factor image deshadowing

    PubMed Central

    Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev

    2013-01-01

    Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key merit in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies of single emitters that can reach an order of magnitude lower than the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in longer acquisition times for constructing the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when implemented on raw data prior to localization, can improve the localization accuracy of standard existing methods and also enables the localization of overlapping particles, allowing the use of increased fluorophore activation density and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in the localization precision compared to single fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a 42% decrease in the collection time of super-resolution data with the same resolution. PMID:24466491
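    For readers unfamiliar with how localization microscopy turns a diffraction-limited spot into a sub-pixel position estimate, the following sketch simulates a single-emitter image and localizes it with an intensity-weighted centroid, the simplest of the fitting estimators discussed above. The K-factor decomposition itself is not reproduced here, and all parameters (PSF width, photon count, grid size) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_localize(img):
    """Sub-pixel localization by intensity-weighted centroid, the simplest
    estimator for a single-emitter image in localization microscopy."""
    img = img - img.min()                    # crude background removal
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Simulate one activated fluorophore: Gaussian PSF plus Poisson shot noise.
true_x, true_y, sigma, photons = 7.4, 8.1, 1.3, 5000
ys, xs = np.indices((16, 16))
psf = np.exp(-((xs - true_x) ** 2 + (ys - true_y) ** 2) / (2 * sigma ** 2))
img = rng.poisson(photons * psf / psf.sum()).astype(float)

x_hat, y_hat = centroid_localize(img)
print(x_hat, y_hat)  # within a small fraction of a pixel of (7.4, 8.1)
```

    The precision of such estimators scales roughly with the PSF width divided by the square root of the detected photon count, which is why dense, overlapping activations (the regime K-factor targets) degrade naive fitting.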

  3. Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †

    PubMed Central

    Gu, Yanlei; Hsu, Li-Ta; Kamijo, Shunsuke

    2015-01-01

    This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, which include Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for the localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration. The employed GNSS technique reduces the multipath and NLOS effects by using the 3D building map. In addition, the inertial sensor can describe the vehicle motion, but has a drift problem as time increases. This paper develops vision-based lane detection, which is first used to control the drift of the inertial sensor. Moreover, the lane keeping and changing behaviors are extracted from the lane detection function, and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in challenging urban scenarios. The experiments demonstrate that the proposed method achieves sub-meter accuracy with respect to mean positioning error. PMID:26633420
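
    The fusion idea — a motion prediction whose drift is corrected by absolute position fixes — can be sketched with a scalar Kalman filter. This is a toy model with assumed noise parameters, not the authors' actual GNSS/camera/inertial integration:

```python
import numpy as np

def kalman_1d(zs, q=0.01, r=1.0):
    """Scalar Kalman filter: fuse a stream of noisy position fixes.

    q models the growth of prediction uncertainty (e.g. inertial drift),
    r models the measurement noise of each absolute fix (e.g. GNSS).
    """
    x, p = zs[0], r          # initialize state and variance from the first fix
    estimates = []
    for z in zs:
        p = p + q            # predict: uncertainty grows between fixes
        k = p / (p + r)      # Kalman gain: trust in the new measurement
        x = x + k * (z - x)  # update the position estimate with the fix
        p = (1 - k) * p      # shrink the uncertainty after the update
        estimates.append(x)
    return np.array(estimates)

track = kalman_1d(np.array([0.0, 0.2, -0.1, 0.1, 0.0]))  # noisy fixes around 0
```

    In a real integration the state would be multidimensional (position, velocity, heading) and the lane-detection output would enter as an additional lateral-position measurement.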

  4. Opacity meter for monitoring exhaust emissions from non-stationary sources

    DOEpatents

    Dec, John Edward

    2000-01-01

    Method and apparatus for determining the opacity of exhaust plumes from moving emissions sources. In operation, a light source is activated at a time prior to the arrival of a diesel locomotive at a measurement point, by means of a track trigger switch or the Automatic Equipment Identification system, such that the opacity measurement is synchronized with the passage of an exhaust plume past the measurement point. A beam of light from the light source passes through the exhaust plume of the locomotive and is detected by a suitable detector, preferably a high-rate photodiode. The light beam is well-collimated and is preferably monochromatic, permitting the use of a narrowband pass filter to discriminate against background light. In order to span a double railroad track and provide a beam which is substantially stronger than background, the light source, preferably a diode laser, must provide a locally intense beam. A high intensity light source is also desirable in order to increase accuracy at the high sampling rates required. Also included is a computer control system useful for data acquisition, manipulation, storage and transmission of opacity data and the identification of the associated diesel engine to a central data collection center.

  5. Single camera photogrammetry system for EEG electrode identification and localization.

    PubMed

    Baysal, Uğur; Sengül, Gökhan

    2010-04-01

    In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating 2 MP digital camera mounted about 20 cm above the subject's head is used and the images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes have been labeled by colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested by using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine having about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
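
    The photogrammetric coordinate measurement at the heart of such a system can be illustrated with classic two-view linear (DLT) triangulation: given two calibrated camera matrices and one image point in each view, solve for the 3D point. The camera matrices and point below are hypothetical, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point to image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical camera poses viewing the head from different azimuths.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])  # rotated 90 deg about y
t = np.array([[0.], [0.], [4.]])
P2 = np.hstack([R, t])

X_true = np.array([1.0, 2.0, 5.0])                          # an "electrode" position
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    With a single rotating camera, each azimuthal stop point plays the role of one of these virtual views.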

  6. Combined electroencephalography-functional magnetic resonance imaging and electrical source imaging improves localization of pediatric focal epilepsy.

    PubMed

    Centeno, Maria; Tierney, Tim M; Perani, Suejen; Shamshiri, Elhum A; St Pier, Kelly; Wilkinson, Charlotte; Konn, Daniel; Vulliemoz, Serge; Grouiller, Frédéric; Lemieux, Louis; Pressler, Ronit M; Clark, Christopher A; Cross, J Helen; Carmichael, David W

    2017-08-01

    Surgical treatment in epilepsy is effective if the epileptogenic zone (EZ) can be correctly localized and characterized. Here we use simultaneous electroencephalography-functional magnetic resonance imaging (EEG-fMRI) data to derive EEG-fMRI and electrical source imaging (ESI) maps. Their yield and their individual and combined ability to (1) localize the EZ and (2) predict seizure outcome were then evaluated. Fifty-three children with drug-resistant epilepsy underwent EEG-fMRI. Interictal discharges were mapped using both EEG-fMRI hemodynamic responses and ESI. A single localization was derived from each individual test (EEG-fMRI global maxima [GM]/ESI maximum) and from the combination of both maps (EEG-fMRI/ESI spatial intersection). To determine the localization accuracy and its predictive performance, the individual and combined test localizations were compared to the presumed EZ and to the postsurgical outcome. Fifty-two of 53 patients had significant maps: 47 of 53 for EEG-fMRI, 44 of 53 for ESI, and 34 of 53 for both. The EZ was well characterized in 29 patients; 26 had an EEG-fMRI GM localization that was correct in 11, 22 patients had ESI localization that was correct in 17, and 12 patients had combined EEG-fMRI and ESI that was correct in 11. Seizure outcome following resection was correctly predicted by EEG-fMRI GM in 8 of 20 patients, and by the ESI maximum in 13 of 16. The combined EEG-fMRI/ESI region entirely predicted outcome in 9 of 9 patients, including 3 with no lesion visible on MRI. EEG-fMRI combined with ESI provides a simple unbiased localization that may predict surgery better than each individual test, including in MRI-negative patients. Ann Neurol 2017;82:278-287. © 2017 American Neurological Association.

  7. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    PubMed

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Local band gap measurements by VEELS of thin film solar cells.

    PubMed

    Keller, Debora; Buecheler, Stephan; Reinhard, Patrick; Pianezzi, Fabian; Pohl, Darius; Surrey, Alexander; Rellinghaus, Bernd; Erni, Rolf; Tiwari, Ayodhya N

    2014-08-01

    This work presents a systematic study that evaluates the feasibility and reliability of local band gap measurements of Cu(In,Ga)Se2 thin films by valence electron energy-loss spectroscopy (VEELS). The compositional gradients across the Cu(In,Ga)Se2 layer cause variations in the band gap energy, which are experimentally determined using a monochromated scanning transmission electron microscope (STEM). The results reveal the expected band gap variation across the Cu(In,Ga)Se2 layer and therefore confirm the feasibility of local band gap measurements of Cu(In,Ga)Se2 by VEELS. The precision and accuracy of the results are discussed based on the analysis of individual error sources, which leads to the conclusion that the precision of our measurements is most limited by the acquisition reproducibility, if the signal-to-noise ratio of the spectrum is high enough. Furthermore, we simulate the impact of radiation losses on the measured band gap value and propose a thickness-dependent correction. In future work, band gap variations will be measured at a finer length scale to investigate, e.g., the influence of chemical inhomogeneities and dopant accumulations at grain boundaries.

  9. Local earthquake interferometry of the IRIS Community Wavefield Experiment, Grant County, Oklahoma

    NASA Astrophysics Data System (ADS)

    Eddy, A. C.; Harder, S. H.

    2017-12-01

    The IRIS Community Wavefield Experiment was deployed in Grant County, located in north central Oklahoma, from June 21 to July 27, 2016. Data from all nodes were recorded at 250 samples per second between June 21 and July 20 along three lines. The main line was 12.5 km long, oriented east-west, and consisted of 129 nodes. The other two lines were 5.5 km long, oriented north-south, with 49 nodes each. During this time, approximately 150 earthquakes of magnitude 1.0 to 4.4 were recorded in the surrounding counties of Oklahoma and Kansas. Ideally, sources for local earthquake interferometry should be near-surface events that produce high frequency body waves. Unlike ambient noise seismic interferometry (ANSI), which uses days, weeks, or even months of continuously recorded seismic data, local earthquake interferometry uses only short segments (~2 min) of data. Interferometry in this case is based on the cross-correlation of body wave surface multiples where the event source is translated to a reference station in the array, which acts as a virtual source. Multiples recorded between the reference station and all other stations can be cross-correlated to produce a clear seismic trace. This process will be repeated with every node acting as the reference station for all events. The resulting shot gather will then be processed and analyzed for quality and accuracy. Successful application of local earthquake interferometry will produce a crustal image with identifiable sedimentary and basement reflectors and possibly a Moho reflection. Economically, local earthquake interferometry could lower the time and resource cost of active and passive seismic surveys while improving subsurface image quality in urban settings or areas of limited access. The applications of this method can potentially be expanded with the inclusion of seismic events with a magnitude of 1.0 or lower.
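
    The cross-correlation at the core of the interferometric processing can be sketched as follows: correlating the traces at two stations recovers the inter-station travel-time lag. The impulse "wavelet" and arrival times are assumed for illustration; real traces would be band-passed earthquake records:

```python
import numpy as np

def xcorr_lag(a, b, fs):
    """Lag (seconds) of trace b relative to trace a via full cross-correlation."""
    c = np.correlate(b, a, mode="full")
    lag_samples = np.argmax(c) - (len(a) - 1)  # shift of the correlation peak
    return lag_samples / fs

fs = 250.0                      # nodes recorded at 250 samples per second
n = 500
a = np.zeros(n); a[100] = 1.0   # arrival at the reference (virtual-source) station
b = np.zeros(n); b[130] = 1.0   # same arrival 30 samples later at a second station

lag = xcorr_lag(a, b, fs)       # recovers the 0.12 s inter-station delay
```

    Stacking such correlations over many events, with each node in turn acting as the virtual source, builds up the shot gather described above.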

  10. SU-G-IeP4-14: Prostate Brachytherapy Activity Measurement and Source Localization by Using a Dual Photon Emission Computed Tomography System: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, C; Lin, H; Chuang, K

    2016-06-15

    Purpose: To monitor the activity distribution and needle position during and after implantation in operating rooms. Methods: Simulation studies were conducted to assess the feasibility of measuring the activity distribution and localizing seeds using the DuPECT system. The system consists of a LaBr3-based probe and planar detection heads, a collimation system, and a coincidence circuit. The two heads can be manipulated independently. Simplified Yb-169 brachytherapy seeds were used. A water-filled cylindrical phantom with a 40-mm diameter and 40-mm length was used to model a simplified prostate of the Asian man. Two simplified seeds were placed at a radial distance of 10 mm and a tangential distance of 10 mm from the center of the phantom. The probe head was arranged perpendicular to the planar head. Results of various imaging durations were analyzed and the accuracy of the seed localization was assessed by calculating the centroid of the seed. Results: The reconstructed images indicate that the DuPECT can measure the activity distribution and locate seeds dwelling in different positions intraoperatively. The calculated centroid on average turned out to be accurate within the pixel size of 0.5 mm. The two sources were identified when the duration was longer than 15 s. The sensitivity measured in water was merely 0.07 cps/MBq. Conclusion: Preliminary results show that measurement of the activity distribution and seed localization are feasible using the DuPECT system intraoperatively. This indicates the DuPECT system has potential as an approach for dose distribution validation. The efficacy of activity distribution measurement and source localization using the DuPECT system will be evaluated in more realistic phantom studies (e.g., various attenuation materials and a greater number of seeds) in future investigations.

  11. Evaluation Methodology between Globalization and Localization Features Approaches for Skin Cancer Lesions Classification

    NASA Astrophysics Data System (ADS)

    Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.

    2018-05-01

    Huge efforts have been put into developing diagnostic methods for skin cancer. In this paper, two different approaches are addressed for detecting skin cancer in dermoscopy images. The first approach uses a global method that classifies skin lesions using global features, whereas the second approach uses a local method that classifies skin lesions using local features. The aim of this paper is to select the best approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The achieved results are a sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97% for the globalization approach, and a sensitivity, specificity, precision, and accuracy of about 100% for the localization approach. These results show that the localization approach achieves acceptable accuracy, better than the globalization approach, for skin cancer lesion classification.
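
    The reported metrics follow from standard confusion-matrix definitions. The counts below are illustrative, chosen only to roughly reproduce the globalization-approach numbers; the paper does not give its actual confusion matrix:

```python
def lesion_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true-positive rate (recall)
    specificity = tn / (tn + fp)            # true-negative rate
    precision = tp / (tp + fp)              # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy

# Hypothetical split of a 200-image set: 40 malignant, 160 benign lesions.
sens, spec, prec, acc = lesion_metrics(tp=38, fp=0, tn=160, fn=2)
```

    With these assumed counts the metrics come out near the globalization-approach figures (sensitivity 0.95, specificity and precision 1.0, accuracy 0.99), illustrating how all four numbers derive from the same four counts.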

  12. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
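
    The final convolution step — planar dose as in-air fluence convolved with a sum-of-Gaussians dose deposition kernel — can be sketched as follows. The kernel weights and widths, grid, and square field are all assumed for illustration; they are not the commissioned values:

```python
import numpy as np

def gaussian2d(xs, ys, sigma, w):
    """One weighted 2D Gaussian component of the dose deposition kernel."""
    return w * np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))

n = 64
ax = np.arange(n) - n // 2
xs, ys = np.meshgrid(ax, ax)

# Hypothetical DDK: sum of three 2D Gaussians, normalized to unit integral.
ddk = sum(gaussian2d(xs, ys, s, w) for s, w in [(1.0, 1.0), (3.0, 0.2), (8.0, 0.02)])
ddk /= ddk.sum()

# Idealized in-air fluence: a 20x20 pixel open square aperture.
fluence = np.zeros((n, n))
fluence[22:42, 22:42] = 1.0

# Planar dose = fluence convolved with the DDK, done via FFT (circular
# convolution; the field is kept away from the grid edges).
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                            * np.fft.fft2(np.fft.ifftshift(ddk))))
```

    Because the kernel is normalized, the convolution redistributes but conserves the total fluence-weighted dose, which is a convenient sanity check on the implementation.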

  13. Investigations of interference between electromagnetic transponders and wireless MOSFET dosimeters: a phantom study.

    PubMed

    Su, Zhong; Zhang, Lisha; Ramakrishnan, V; Hagan, Michael; Anscher, Mitchell

    2011-05-01

    To evaluate both the localization accuracy of the Calypso System (Calypso Medical Technologies, Inc., Seattle, WA) in the presence of wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of a dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC), and the dosimeters' reading accuracy in the presence of wireless electromagnetic transponders, inside a phantom. A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters and were arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with/without the presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with/without the presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate the transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets.
However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which was significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. Both orthogonal and parallel configurations had difference of polynomial-fit dose to measured dose values within 1.75%. The phantom study indicated that the Calypso System's localization accuracy was not affected clinically due to the presence of DVS wireless MOSFET dosimeters and the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from improved accuracy of radiotherapy treatments offered by conjunctional use of the two systems.
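
    The relative-dose analysis — fitting the fading trend with a second-order polynomial and examining deviations from the fit — can be sketched as follows. The fading coefficients are assumed, and the readings are noiseless, purely for illustration:

```python
import numpy as np

fractions = np.arange(1, 8)  # seven identical 100 cGy fractions

# Hypothetical quadratic fading trend of the MOSFET readings over fractions.
readings = 100 * (1 - 0.015 * fractions + 0.0008 * fractions ** 2)

# Fit the fading with a 2nd-order polynomial, then express each reading
# relative to the fit; residual deviations would expose any transponder effect.
coeffs = np.polyfit(fractions, readings, 2)
fit = np.polyval(coeffs, fractions)
relative_dose = 100 * (readings - fit) / fit  # percent deviation from the trend
```

    With noiseless quadratic data the fit is exact and the relative dose is zero everywhere; in the study, deviations within 1.75% of the fit were taken as evidence that the transponders did not perturb the readings.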

  14. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    NASA Astrophysics Data System (ADS)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.

    2015-10-01

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
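
    The inversion idea — scoring candidate source locations by how well the radiograph-derived attenuation explains the detected counts — can be sketched in one dimension with a Poisson log-likelihood grid search. The geometry, attenuation values, and source strength are all assumed; the paper's method handles full 2D/3D transport and noise:

```python
import numpy as np

# Attenuation per cell along a 1D "container", as if read off a radiograph;
# cells 10-13 form a heavily shielded region.
mu = np.array([0.05] * 10 + [1.5] * 4 + [0.05] * 6)
n = len(mu)

def expected_counts(pos, strength):
    """Expected counts at detectors on the left/right container walls,
    attenuated by the material between the source cell and each wall."""
    left = strength * np.exp(-mu[:pos].sum())
    right = strength * np.exp(-mu[pos + 1:].sum())
    return np.array([left, right])

true_pos, strength = 12, 1e4
obs = expected_counts(true_pos, strength)  # noiseless observation for illustration

# Poisson log-likelihood (up to a constant) over candidate source cells.
ll = [(obs * np.log(expected_counts(p, strength))
       - expected_counts(p, strength)).sum() for p in range(n)]
best = int(np.argmax(ll))  # most likely source cell
```

    Because the attenuation profile is asymmetric, each candidate cell predicts a distinct count pattern, so the likelihood peaks at the true cell even inside the shielded region.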

  15. Acoustic Network Localization and Interpretation of Infrasonic Pulses from Lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, R. O.; Johnson, J. B.; Badillo, E.; Michnovicz, J. C.; Thomas, R. J.; Edens, H. E.; Rison, W.

    2011-12-01

    We improve on the localization accuracy of thunder sources and identify infrasonic pulses that are correlated across a network of acoustic arrays. We attribute these pulses to electrostatic charge relaxation (collapse of the electric field) and attempt to model their spatial extent and acoustic source strength. Toward this objective we have developed a single audio range (20-15,000 Hz) acoustic array and a 4-station network of broadband (0.01-500 Hz) microphone arrays with aperture of ~45 m. The network has an aperture of 1700 m and was installed during the summers of 2009-2011 in the Magdalena mountains of New Mexico, an area that is subject to frequent lightning activity. We are exploring a new technique based on inverse theory that integrates information from the audio range and the network of broadband acoustic arrays to locate thunder sources more accurately than can be achieved with a single array. We evaluate the performance of the technique by comparing the location of thunder sources with RF sources located by the lightning mapping array (LMA) of Langmuir Laboratory at New Mexico Tech. We will show results of this technique for lightning flashes that occurred in the vicinity of our network of acoustic arrays and over the LMA. We will use acoustic network detection of infrasonic pulses together with LMA data and electric field measurements to estimate the spatial distribution of the charge (within the cloud) that is used to produce a lightning flash, and will try to quantify volumetric charges (charge magnitude) within clouds.
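
    Locating a source from arrival times across a network when the origin time is unknown can be sketched as a grid search over time-difference-of-arrival residuals. The station geometry loosely follows the ~1700 m network aperture, but the coordinates, source position, and grid are hypothetical, and the authors' inverse-theory technique is more sophisticated:

```python
import numpy as np

c = 343.0  # nominal speed of sound in air, m/s
stations = np.array([[0., 0.], [1700., 0.], [0., 1700.], [1700., 1700.]])
source = np.array([600., 900.])  # assumed thunder source location

# Noiseless arrival times at the four arrays (origin time taken as zero).
arrivals = np.linalg.norm(stations - source, axis=1) / c

# Grid search: pick the location whose predicted arrival-time *differences*
# (relative to station 0) best match the observed ones, which removes the
# unknown origin time from the problem.
xs = np.arange(0.0, 1701.0, 10.0)
best, best_err = None, np.inf
for x in xs:
    for y in xs:
        pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / c
        err = np.sum(((pred - pred[0]) - (arrivals - arrivals[0])) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
```

    With noiseless data and the source on the grid, the residual vanishes exactly at the true location; real data would add timing noise and a 3D source altitude.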

  16. The role of blood vessels in high-resolution volume conductor head modeling of EEG.

    PubMed

    Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T

    2016-03-01

    Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10⁶ nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as in the insula or in the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models.
Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Localization of insulinomas to regions of the pancreas by intraarterial calcium stimulation: the NIH experience.

    PubMed

    Guettier, Jean-Marc; Kam, Anthony; Chang, Richard; Skarulis, Monica C; Cochran, Craig; Alexander, H Richard; Libutti, Steven K; Pingpank, James F; Gorden, Phillip

    2009-04-01

    Selective intraarterial calcium injection of the major pancreatic arteries with hepatic venous sampling [calcium arterial stimulation (CaStim)] has been used as a localizing tool for insulinomas at the National Institutes of Health (NIH) since 1989. The accuracy of this technique for localizing insulinomas was reported for all cases until 1996. The aim of the study was to assess the accuracy and track record of the CaStim over time and in the context of evolving technology and to review issues related to result interpretation and procedure complications. CaStim was the only invasive preoperative localization modality used at our center. Endoscopic ultrasound (US) was not studied. We conducted a retrospective case review at a referral center. Twenty-nine women and 16 men (mean age, 47 yr; range, 13-78) were diagnosed with an insulinoma from 1996-2008. A supervised fast was conducted to confirm the diagnosis of insulinoma. US, computed tomography (CT), magnetic resonance imaging (MRI), and CaStim were used as preoperative localization studies. Localization predicted by each preoperative test was compared to surgical localization for accuracy. We measured the accuracy of US, CT, MRI, and CaStim for localization of insulinomas preoperatively. All 45 patients had surgically proven insulinomas. Thirty-eight of 45 (84%) localized to the correct anatomical region by CaStim. In five of 45 (11%) patients, the CaStim was falsely negative. Two of 45 (4%) had false-positive localizations. The CaStim has remained vastly superior to abdominal US, CT, or MRI over time as a preoperative localizing tool for insulinomas. The utility of the CaStim for this purpose and in this setting is thus validated.

  18. Using Kepler for Tool Integration in Microarray Analysis Workflows.

    PubMed

    Gan, Zhuohui; Stowe, Jennifer C; Altintas, Ilkay; McCulloch, Andrew D; Zambon, Alexander C

    Increasing numbers of genomic technologies are leading to massive amounts of genomic data, all of which require complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed using different software environments, which makes integration of these diverse bioinformatics tools difficult. Kepler provides an open source environment to integrate these disparate packages. Using Kepler, we integrated several external tools, including Bioconductor packages, AltAnalyze (a Python-based open source tool), and an R-based comparison tool, to build an automated workflow to meta-analyze both online and local microarray data. The automated workflow connects the integrated tools seamlessly, delivers data flow between the tools smoothly, and hence improves the efficiency and accuracy of complex data analyses. Our workflow exemplifies the usage of Kepler as a scientific workflow platform for bioinformatics pipelines.

  19. A Review of Distributed Control Techniques for Power Quality Improvement in Micro-grids

    NASA Astrophysics Data System (ADS)

    Zeeshan, Hafiz Muhammad Ali; Nisar, Fatima; Hassan, Ahmad

    2017-05-01

    A micro-grid is typically envisioned as a small-scale local power supply network dependent on distributed energy resources (DERs) that can operate in parallel with the grid as well as in a standalone manner. The distributed generator of a micro-grid system is usually a converter-inverter type topology acting as a non-linear load and injecting harmonics into the distribution feeder. Hence, the negative effects of distributed generation sources and components on power quality are clearly evident. In this paper, a review of distributed control approaches for power quality improvement is presented, encompassing harmonic compensation, loss mitigation and optimum power sharing in a multi-source, multi-load distributed power network. The decentralized subsystems for harmonic compensation and active-reactive power sharing accuracy have been analysed in detail. Results have been validated to be consistent with IEEE standards.

  20. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N., E-mail: nwen1@hfhs.org; Snyder, K. C.; Qin, Y.

    2016-05-15

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, 0.6 ± 0.4 mm for multitarget treatment with shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed greater than 90% pass rate for all cases using a gamma criteria of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.
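
    The gamma criterion used in the dosimetric evaluation can be sketched in one dimension: for each reference point, the gamma index is the minimum combined dose-difference/distance-to-agreement error over the measured profile. This is a simplified global-gamma form with synthetic profiles, using the paper's 3%/1 mm tolerances:

```python
import numpy as np

def gamma_1d(ref, meas, dx, dose_tol, dist_tol):
    """1D global gamma index: per reference point, the minimum combined
    normalized dose and distance error against the measured profile."""
    xs = np.arange(len(ref)) * dx
    g = np.empty(len(ref))
    for i, (x, d) in enumerate(zip(xs, ref)):
        dist2 = ((xs - x) / dist_tol) ** 2
        dose2 = ((meas - d) / (dose_tol * ref.max())) ** 2
        g[i] = np.sqrt(np.min(dist2 + dose2))
    return g

# Synthetic Gaussian dose profile; the "measurement" is shifted by 1 mm.
ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)
meas = np.roll(ref, 1)

g = gamma_1d(ref, meas, dx=1.0, dose_tol=0.03, dist_tol=1.0)
pass_rate = np.mean(g <= 1.0)  # fraction of points passing 3%/1 mm
```

    A pure 1 mm shift sits exactly at the distance tolerance, so every point passes at 3%/1 mm; clinical gamma analysis extends the same minimization to 2D film or array planes.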

  1. SU-E-T-366: Clinical Implementation of MR-Guided Vaginal Cylinder Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owrangi, A; Jolly, S; Balter, J

    2014-06-01

    Purpose: To evaluate the accuracy of MR-based vaginal brachytherapy source localization using an in-house MR-visible marker versus the alignment of an applicator model to MR images. Methods: Three consecutive patients undergoing vaginal HDR brachytherapy with a plastic cylinder were scanned with both CT and MRI (including T1- and T2-weighted images). An MR-visible source localization marker, consisting of a sealed thin catheter filled with either water (for T2 contrast) or Gd-doped water (for T1 contrast), was assembled shortly before scanning. Clinically, the applicator channel was digitized on CT with an x-ray marker. To evaluate the efficacy of MR-based applicator reconstruction, each MR image volume was aligned locally to the CT images based on the region containing the cylinder. Applicator digitization was performed on the MR images using (1) the MR-visible marker and (2) alignment of an applicator surface model from Varian's Brachytherapy Planning software to the MR images. Resulting source positions were compared with the original CT digitization. Results: Although the source path was visualized by the MR marker, the applicator tip proved difficult to identify due to challenges in achieving a watertight seal. This resulted in observed displacements of the catheter tip, at times >1 cm. Deviations between the central source positions identified via aligning the applicator surface model to MR and using the x-ray marker on CT ranged from 0.07–0.19 cm and 0.07–0.20 cm on T1-weighted and T2-weighted images, respectively. Conclusion: Based on the current study, aligning the applicator model to MRI provides a practical, current approach to perform MR-based brachytherapy planning. Further study is needed to produce catheters with reliably and reproducibly identifiable tips. Attempts are being made to improve catheter seals, as well as to increase the viscosity of the contrast material to decrease fluid mobility inside the catheter.

  2. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on the ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees of separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  3. Localization accuracy of sphere fiducials in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias

    2014-03-01

    In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms, and two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher than in MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross-correlation for localization, and interpolation of the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable for the optimization of future microstereotactic frame prototypes as well as the operative workflow.
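    The best-performing localization algorithm above is template matching by cross-correlation. The following is a minimal illustrative sketch of that idea (not the study's implementation): a synthetic Gaussian blob stands in for a sphere fiducial, and the estimated centre is the location of the best normalized cross-correlation score.

```python
import numpy as np

# Illustrative sketch: localize a sphere-like fiducial in a 2D image by
# template matching with normalized cross-correlation. The image and the
# template are synthetic stand-ins for CT data of a sphere fiducial.

def gaussian_blob(shape, center, sigma=2.0):
    y, x = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma ** 2))

def match_template(image, template):
    """Return the (row, col) centre of the best normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    # centre of the best template window = estimated fiducial centre
    return best_pos[0] + th // 2, best_pos[1] + tw // 2

rng = np.random.default_rng(0)
image = gaussian_blob((64, 64), (30, 41)) + 0.01 * rng.standard_normal((64, 64))
template = gaussian_blob((15, 15), (7, 7))
est = match_template(image, template)
print(est)  # should be at or very near the true centre (30, 41)
```

    A subpixel refinement (e.g. a parabolic fit around the correlation peak, or the factor-of-sixteen interpolation mentioned in the abstract) would be the natural next step after this integer-pixel estimate.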

  4. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  5. Groundwater source contamination mechanisms: Physicochemical profile clustering, risk factor analysis and multivariate modelling

    NASA Astrophysics Data System (ADS)

    Hynds, Paul; Misstear, Bruce D.; Gill, Laurence W.; Murphy, Heather M.

    2014-04-01

    An integrated domestic well sampling and "susceptibility assessment" programme was undertaken in the Republic of Ireland from April 2008 to November 2010. Overall, 211 domestic wells were sampled, assessed and collated with local climate data. Based upon groundwater physicochemical profile, three clusters have been identified and characterised by source type (borehole or hand-dug well) and local geological setting. Statistical analysis indicates that cluster membership is significantly associated with the prevalence of bacteria (p = 0.001), with mean Escherichia coli presence within clusters ranging from 15.4% (Cluster-1) to 47.6% (Cluster-3). Bivariate risk factor analysis shows that on-site septic tank presence was the only risk factor significantly associated (p < 0.05) with bacterial presence within all clusters. Point agriculture adjacency was significantly associated with both borehole-related clusters. Well design criteria were associated with hand-dug wells and boreholes in areas characterised by high permeability subsoils, while local geological setting was significant for hand-dug wells and boreholes in areas dominated by low/moderate permeability subsoils. Multivariate susceptibility models were developed for all clusters, with predictive accuracies of 84% (Cluster-1) to 91% (Cluster-2) achieved. Septic tank setback was a common variable within all multivariate models, while agricultural sources were also significant, albeit to a lesser degree. Furthermore, well liner clearance was a significant factor in all models, indicating that direct surface ingress is a significant well contamination mechanism. Identification and elucidation of cluster-specific contamination mechanisms may be used to develop improved overall risk management and wellhead protection strategies, while also informing future remediation and maintenance efforts.

  6. New Maximum Tsunami Inundation Maps for Use by Local Emergency Planners in the State of California, USA

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Miller, K. M.; Goltz, J. D.; Synolakis, C. E.

    2008-12-01

    A consortium of tsunami hydrodynamic modelers, geologic hazard mapping specialists, and emergency planning managers is producing maximum tsunami inundation maps for California, covering most residential and transient populated areas along the state's coastline. The new tsunami inundation maps will be an upgrade from the existing maps for the state, improving on the resolution, accuracy, and coverage of the maximum anticipated tsunami inundation line. Thirty-five separate map areas covering nearly one-half of California's coastline were selected for tsunami modeling using the MOST (Method of Splitting Tsunami) model. From preliminary evaluations of nearly fifty local and distant tsunami source scenarios, those with the maximum expected hazard for a particular area were input to MOST. The MOST model was run with a near-shore bathymetric grid resolution varying from three arc-seconds (90m) to one arc-second (30m), depending on availability. Maximum tsunami "flow depth" and inundation layers were created by combining all modeled scenarios for each area. A method was developed to better define the location of the maximum inland penetration line using higher resolution digital onshore topographic data from interferometric radar sources. The final inundation line for each map area was validated using a combination of digital stereo photography and fieldwork. Further verification of the final inundation line will include ongoing evaluation of tsunami sources (seismic and submarine landslide) as well as comparison to the location of recorded paleotsunami deposits. Local governmental agencies can use these new maximum tsunami inundation lines to assist in the development of their evacuation routes and emergency response plans.
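    The "combining all modeled scenarios" step above reduces, per map cell, to taking the maximum flow depth over all scenario runs. A minimal sketch, using synthetic grids as stand-ins for MOST model output:

```python
import numpy as np

# Cell-wise maximum over scenario flow-depth grids gives the maximum
# inundation layer. The grids and the 0.5 m wetting threshold below are
# synthetic/illustrative, not from the mapping project.

rng = np.random.default_rng(1)
scenarios = [rng.random((4, 5)) for _ in range(50)]   # flow depth (m) per scenario

max_flow_depth = np.maximum.reduce(scenarios)  # cell-wise maximum over scenarios
inundated = max_flow_depth > 0.5               # cells wetted in at least one scenario

print(max_flow_depth.shape, int(inundated.sum()))
```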

  7. Source characterization and modeling development for monoenergetic-proton radiography experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manuel, M. J.-E.; Zylstra, A. B.; Rinderknecht, H. G.

    2012-06-15

    A monoenergetic proton source has been characterized and a modeling tool developed for proton radiography experiments at the OMEGA [T. R. Boehly et al., Opt. Comm. 133, 495 (1997)] laser facility. Multiple diagnostics were fielded to measure global isotropy levels in proton fluence, and images of the proton source itself provided information on local uniformity relevant to proton radiography experiments. Global fluence uniformity was assessed by multiple yield diagnostics, and deviations were calculated to be ≈16% and ≈26% of the mean for DD and D³He fusion protons, respectively. From individual fluence images, it was found that angular frequencies ≳50 rad⁻¹ contributed less than a few percent to local nonuniformity levels. A model was constructed using the Geant4 [S. Agostinelli et al., Nuc. Inst. Meth. A 506, 250 (2003)] framework to simulate proton radiography experiments. The simulation implements realistic source parameters and various target geometries. The model was benchmarked with the radiographs of cold-matter targets to within experimental accuracy. To validate the use of this code, the cold-matter approximation for the scattering of fusion protons in plasma is discussed using a typical laser-foil experiment as an example case. It is shown that an analytic cold-matter approximation is accurate to within ≲10% of the analytic plasma model in the example scenario.

  8. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling errors and improving accuracy at the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a larger effect on the accuracy of the end-effector. Based on the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. From the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
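    The sensitivity-ranking idea can be sketched with a toy Monte Carlo: perturb one error source at a time within its tolerance and compare the spread each induces in the end-effector error. Everything below (the linear error mapping, the weights, the tolerances) is a hypothetical stand-in for the paper's actual error model, used only to illustrate the workflow.

```python
import numpy as np

# Hypothetical one-at-a-time sensitivity sketch. The weighted-sum mapping
# stands in for a real kinematic error model; indices 0-1 play the role of
# orientation-type error sources, 2-3 of position-type sources.

rng = np.random.default_rng(42)

def end_effector_error(e):
    w = np.array([5.0, 4.0, 1.0, 0.5])  # illustrative influence weights
    return float(w @ e)

tolerances = np.array([0.02, 0.02, 0.02, 0.02])
spread = []
for i in range(len(tolerances)):
    samples = []
    for _ in range(2000):
        e = np.zeros(4)
        e[i] = rng.uniform(-tolerances[i], tolerances[i])  # perturb source i only
        samples.append(end_effector_error(e))
    spread.append(float(np.std(samples)))

ranking = np.argsort(spread)[::-1]   # most influential error source first
print(ranking)  # orientation-type sources (0, 1) rank first in this toy model
```

    In a real tolerance-allocation loop, such sensitivities would feed the constrained cost-minimization step the abstract describes.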

  9. Factors affecting basket catheter detection of real and phantom rotors in the atria: A computational study.

    PubMed

    Martinez-Mateu, Laura; Romero, Lucia; Ferrer-Albero, Ana; Sebastian, Rafael; Rodríguez Matas, José F; Jalife, José; Berenfeld, Omer; Saiz, Javier

    2018-03-01

    Anatomically based procedures to ablate atrial fibrillation (AF) are often successful in terminating paroxysmal AF. However, the ability to terminate persistent AF remains disappointing. New mechanistic approaches use multiple-electrode basket catheter mapping to localize and target AF drivers in the form of rotors, but significant concerns remain about their accuracy. We aimed to evaluate how electrode-endocardium distance, far-field sources and inter-electrode distance affect the accuracy of localizing rotors. Sustained rotor activation of the atria was simulated numerically and mapped using a virtual basket catheter with varying electrode densities placed at different positions within the atrial cavity. Unipolar electrograms were calculated on the entire endocardial surface and at each of the electrodes. Rotors were tracked on the interpolated basket phase maps and compared with the respective atrial voltage and endocardial phase maps, which served as references. Rotor detection by the basket maps varied between 35-94% of the simulation time, depending on the basket's position and the electrode-to-endocardial wall distance. However, two different types of phantom rotors also appeared on the basket maps. The first type was due to the far-field sources and the second type was due to interpolation between the electrodes; increasing electrode density decreased the incidence of the second but not the first type of phantom rotors. In this simulation study, basket catheter-based phase mapping detected rotors even when the basket was not in full contact with the endocardial wall, but always generated a number of phantom rotors in the presence of only a single real rotor, which would be the desired ablation target. Phantom rotors may mislead and contribute to failure in AF ablation procedures.
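    Rotor tracking on phase maps usually means finding phase singularities: points where the phase winds by ±2π along a closed path. An illustrative sketch (not the paper's pipeline), using a synthetic rotor phase field in place of basket-derived maps:

```python
import numpy as np

# Detect a rotor core as a phase singularity via the winding number of
# wrapped phase differences around each 2x2 plaquette of the grid.
# The ideal spiral phase field below is a synthetic stand-in.

def wrap(a):
    """Wrap phase differences into (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

y, x = np.mgrid[-16:16, -16:16].astype(float)
phase = np.arctan2(y - 3.5, x + 2.5)   # rotor core between grid nodes

# winding number: sum of wrapped phase differences around each plaquette
d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])
d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])
d3 = wrap(phase[1:, :-1] - phase[1:, 1:])
d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])
winding = np.round((d1 + d2 + d3 + d4) / (2 * np.pi))

cores = np.argwhere(winding != 0)
print(cores)  # a single plaquette adjacent to the rotor core
```

    On interpolated basket maps, spurious singularities of exactly this kind are the "phantom rotors" the study quantifies.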

  10. Factors affecting basket catheter detection of real and phantom rotors in the atria: A computational study

    PubMed Central

    Romero, Lucia; Rodríguez Matas, José F.; Berenfeld, Omer; Saiz, Javier

    2018-01-01

    Anatomically based procedures to ablate atrial fibrillation (AF) are often successful in terminating paroxysmal AF. However, the ability to terminate persistent AF remains disappointing. New mechanistic approaches use multiple-electrode basket catheter mapping to localize and target AF drivers in the form of rotors, but significant concerns remain about their accuracy. We aimed to evaluate how electrode-endocardium distance, far-field sources and inter-electrode distance affect the accuracy of localizing rotors. Sustained rotor activation of the atria was simulated numerically and mapped using a virtual basket catheter with varying electrode densities placed at different positions within the atrial cavity. Unipolar electrograms were calculated on the entire endocardial surface and at each of the electrodes. Rotors were tracked on the interpolated basket phase maps and compared with the respective atrial voltage and endocardial phase maps, which served as references. Rotor detection by the basket maps varied between 35–94% of the simulation time, depending on the basket’s position and the electrode-to-endocardial wall distance. However, two different types of phantom rotors also appeared on the basket maps. The first type was due to the far-field sources and the second type was due to interpolation between the electrodes; increasing electrode density decreased the incidence of the second but not the first type of phantom rotors. In this simulation study, basket catheter-based phase mapping detected rotors even when the basket was not in full contact with the endocardial wall, but always generated a number of phantom rotors in the presence of only a single real rotor, which would be the desired ablation target. Phantom rotors may mislead and contribute to failure in AF ablation procedures. PMID:29505583

  11. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, in which acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distances provided proper meteorological specifications.
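    The core linear-algebra step of such an inversion can be sketched in a few lines: with a known Green's function g, the record is the convolution p = g * s, and the source time function s is recovered by least squares on the convolution matrix. The Green's function below is a synthetic stand-in, not a finite-difference simulation through a realistic atmosphere, and the record is noise-free for clarity.

```python
import numpy as np

# Minimal least-squares deconvolution sketch of the waveform-inversion idea:
# recover the source time function s from p = G s, where column k of G is
# the (synthetic) Green's function delayed by k samples.

n = 60
t = np.arange(n)
g = np.exp(-t / 6.0) * np.cos(t / 2.0)   # synthetic Green's function
s_true = np.zeros(n)
s_true[5:15] = np.hanning(10)            # true source time function

# convolution matrix: column k is g delayed by k samples
G = np.column_stack([np.concatenate([np.zeros(k), g[:n - k]]) for k in range(n)])
p = G @ s_true                           # noise-free synthetic record

s_est, *_ = np.linalg.lstsq(G, p, rcond=None)
print(np.abs(s_est - s_true).max())      # recovery to numerical precision
```

    With real data, noise and an imperfect Green's function would call for regularization, and the recovered source strength would then be mapped to yield via an air blast model as the abstract describes.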

  13. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy achieved during an event with modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that assimilating sea level data into the models increases accuracy by approximately 15% for the events examined.

  14. Acoustic localization at large scales: a promising method for grey wolf monitoring.

    PubMed

    Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle

    2018-01-01

    The grey wolf (Canis lupus) is naturally recolonizing its former habitats in Europe, where it was extirpated during the previous two centuries. The management of this protected species is often controversial and its monitoring is a challenge for conservation purposes. However, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with acoustic properties similar to howls. This sound was broadcast at several sites, then localization estimates and their accuracy were calculated. Finally, linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, demonstrating the potential of this tool. In addition, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error value associated with the estimated coordinates, some unreliable values were excluded and the mean accuracy improved to 167 ± 308 m. The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among different nights in each study area. Our results confirm the potential of acoustic methods to localize wolves with high accuracy, in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and will thus contribute to enhancing conservation and management plans.
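    Localizing a source from several time-synchronized recorders is classically done with time differences of arrival (TDOA). The sketch below is illustrative (the study does not specify its solver): arrival-time differences relative to one reference microphone cancel the unknown emission time, and a grid search minimizes the TDOA residual.

```python
import numpy as np

# Illustrative TDOA multilateration sketch: four microphones, positions in
# metres, speed of sound c. Geometry and grid resolution are assumptions.

c = 343.0
mics = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
src = np.array([420.0, 710.0])           # true (unknown) source position

t = np.linalg.norm(mics - src, axis=1) / c
tdoa = t - t[0]                          # differences cancel the emission time

xs = np.arange(0.0, 1000.1, 5.0)         # 5 m search grid
best, best_xy = np.inf, None
for x in xs:
    for y in xs:
        d = np.linalg.norm(mics - np.array([x, y]), axis=1) / c
        r = (d - d[0]) - tdoa            # TDOA residual at candidate (x, y)
        cost = float(r @ r)
        if cost < best:
            best, best_xy = cost, (x, y)
print(best_xy)  # recovers the true source location to within the grid step
```

    In the field, clock synchronization error and uncertain sound speed dominate the error budget, which is consistent with the hundreds-of-metres accuracies reported above.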

  15. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.

    PubMed

    Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue

    2018-05-25

    A novel multi-sensor fusion indoor localization algorithm based on ArUco markers is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online with the Grubbs criterion and K-means clustering, which avoids the map distortion caused by lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is utilized to synthesize the multi-source information from markers, optical flow, ultrasonic sensing and the inertial sensor, which yields a continuous localization result and effectively reduces the position drift caused by long-term loss of markers in pure marker localization. The proposed algorithm can be easily implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of Px4flow, and that it achieves centimeter-level mapping and positioning accuracy. The presented system not only gives satisfying localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons and lidar) to further improve localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
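    At its core, fusing estimates from several local filters comes down to inverse-variance weighting, which is the principle a federated Kalman filter applies per state component. A minimal sketch with illustrative sensor variances (not the paper's actual filter):

```python
# Inverse-variance fusion of position estimates from several sensors
# (e.g. markers, optical flow, ultrasonic). Variances are illustrative.

def fuse(estimates, variances):
    """Combine estimates; the fused variance is smaller than any input's."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    x = sum(w * e for w, e in zip(weights, estimates)) / total
    return x, 1.0 / total   # fused estimate and its variance

x, var = fuse([1.02, 0.95, 1.10], [0.01, 0.04, 0.09])
print(x, var)  # fused variance is below the best single sensor's 0.01
```

    The federated architecture adds to this a master filter that redistributes information back to the local filters, which keeps the system running when one sensor (e.g. the marker channel) drops out.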

  16. Community assessment of tropical tree biomass: challenges and opportunities for REDD.

    PubMed

    Theilade, Ida; Rutishauser, Ervan; Poulsen, Michael K

    2015-12-01

    REDD+ programs rely on accurate forest carbon monitoring. Several REDD+ projects have recently shown that local communities can monitor above ground biomass as well as external professionals, but at lower costs. However, the precision and accuracy of carbon monitoring conducted by local communities have rarely been assessed in the tropics. The aim of this study was to investigate different sources of error in tree biomass measurements conducted by community monitors and determine the effect on biomass estimates. Furthermore, we explored the potential of local ecological knowledge to assess wood density and botanical identification of trees. Community monitors were able to measure tree DBH accurately, but some large errors were found in girth measurements of large and odd-shaped trees. Monitors with experience from the logging industry performed better than monitors without previous experience. Indeed, only experienced monitors were able to discriminate trees with low wood densities. Local ecological knowledge did not allow consistent tree identification across monitors. Future REDD+ programmes may benefit from the systematic training of local monitors in tree DBH measurement, with special attention given to large and odd-shaped trees. A better understanding of traditional classification systems and concepts is required for local tree identifications and wood density estimates to become useful in monitoring of biomass and tree diversity.

  17. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    PubMed

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may better capture the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. 
Copyright © 2017 Elsevier B.V. All rights reserved.
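    The L1-regularized encoding that performed best above can be sketched with ISTA (iterative soft thresholding) on a toy problem. Everything here is a synthetic stand-in (the dictionary plays the role of spatial networks, the signal a voxel observation); it is not the study's pipeline.

```python
import numpy as np

# Toy L1-regularized sparse coding via ISTA: minimize
#   (1/2) * ||x - D w||^2 + lam * ||w||_1
# over weights w for a fixed dictionary D. Synthetic data throughout.

rng = np.random.default_rng(3)
D = rng.standard_normal((100, 20))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
w_true = np.zeros(20)
w_true[[2, 7]] = [1.5, -2.0]                    # two "active networks"
x = D @ w_true

def ista(D, x, lam=0.05, iters=500):
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ w - x)                   # gradient of the quadratic term
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

w = ista(D, x)
print(np.flatnonzero(np.abs(w) > 0.1))  # indices of the recovered active networks
```

    The soft-threshold step is what produces exact zeros, i.e. the "restricted number of possible brain networks" per voxel that the abstract attributes to sparse coding.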

  18. Beyond the double banana: improved recognition of temporal lobe seizures in long-term EEG.

    PubMed

    Rosenzweig, Ivana; Fogarasi, András; Johnsen, Birger; Alving, Jørgen; Fabricius, Martin Ejler; Scherg, Michael; Neufeld, Miri Y; Pressler, Ronit; Kjaer, Troels W; van Emde Boas, Walter; Beniczky, Sándor

    2014-02-01

    To investigate whether extending the 10-20 array with 6 electrodes in the inferior temporal chain and constructing computed montages increases the diagnostic value of ictal EEG activity originating in the temporal lobe. In addition, the accuracy of computer-assisted spectral source analysis was investigated. Forty EEG samples were reviewed by 7 EEG experts in various montages (longitudinal and transversal bipolar, common average, source derivation, source montage, current source density, and reference-free montages) using 2 electrode arrays (the 10-20 and the extended one). Spectral source analysis used the source montage to calculate the density spectral array, defining the earliest oscillatory onset. From this, phase maps were calculated for localization. The reference standard was the decision of the multidisciplinary epilepsy surgery team on the seizure onset zone. Clinical performance was compared with the double banana (longitudinal bipolar montage, 10-20 array). Adding the inferior temporal electrode chain, computed montages (reference free, common average, and source derivation), and voltage maps significantly increased the sensitivity. Phase maps had the highest sensitivity and identified ictal activity at an earlier time point than visual inspection. There was no significant difference in specificity. The findings advocate the use of these digital EEG technology-derived analysis methods in clinical practice.
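    A "computed montage" is just a re-referencing of the raw referential channels by subtraction. A minimal sketch for one longitudinal bipolar chain and the common average reference; channel names and data are illustrative:

```python
import numpy as np

# Computed montages as channel arithmetic: longitudinal bipolar pairs are
# successive differences along a chain; common average reference subtracts
# the mean of all channels. Data are random stand-ins for referential EEG.

rng = np.random.default_rng(0)
channels = ["Fp1", "F7", "T3", "T5", "O1"]   # one temporal chain, illustrative
eeg = rng.standard_normal((5, 1000))         # referential recordings (ch x samples)

# longitudinal bipolar ("double banana" style): successive differences
bipolar = eeg[:-1] - eeg[1:]                 # Fp1-F7, F7-T3, T3-T5, T5-O1
bipolar_names = [f"{a}-{b}" for a, b in zip(channels[:-1], channels[1:])]

# common average reference: subtract the mean across channels per sample
car = eeg - eeg.mean(axis=0, keepdims=True)

print(bipolar_names)
```

    Source montages and current source density involve spatial weighting beyond simple pairwise subtraction, but they are likewise linear recombinations of the recorded channels.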

  19. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, M.; Bowman, B.; Branson, J.

    The dominant error source in the force models used to predict low perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density in near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low perigee satellites.

  20. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density in near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  1. What Drives Saline Circulation Cells in Coastal Aquifers? An Energy Balance for Density-Driven Groundwater Systems

    NASA Astrophysics Data System (ADS)

    Harvey, C. F.; Michael, H. A.

    2017-12-01

    We formulate the energy balance for coastal groundwater systems and apply it to (1) explain the energy driving offshore saline circulation cells, and (2) assess the accuracy of numerical simulations of coastal groundwater systems. The flow of fresh groundwater to the ocean is driven by the loss of potential energy as groundwater drops from the elevation of the inland watertable, where recharge occurs, to discharge at sea level. This freshwater flow creates an underlying circulation cell of seawater, drawn into coastal aquifers offshore and discharging near shore, that adds to total submarine groundwater discharge. The saline water in the circulation cell enters and exits the aquifer through the sea floor at the same hydraulic potential. Existing theory explains that the saline circulation cell is driven by the mixing of fresh and saline water, without any additional source of potential or mechanical power. This explanation raises a basic thermodynamic question: what is the source of energy that drives the saline circulation cell? Here, we resolve this question by building upon Hubbert's conception of hydraulic potential to formulate an energy balance for density-dependent flow and salt transport through an aquifer. We show that, because local energy dissipation within the aquifer is proportional to the square of the groundwater velocity, more groundwater flow may be driven through an aquifer for a given energy input if local variations in velocity are smoothed. Our numerical simulations of coastal groundwater systems show that dispersion of salt across the fresh-saline interface spreads flow over larger volumes of the aquifer, smoothing the velocity field, and increasing total flow and submarine groundwater discharge without consuming more power. The energy balance also provides a criterion, in addition to conventional mass balances, for judging the accuracy of numerical solutions of non-linear density-dependent flow problems.
Our results show that some numerical simulations of saline circulation converge to excellent balances of both mass and energy, but that other simulations may poorly balance energy even after converging to a good mass balance. Thus, the energy balance can be used to identify incorrect simulations that pass conventional mass balance criteria for accuracy.

  2. Adaptive Kalman filter for indoor localization using Bluetooth Low Energy and inertial measurement unit.

    PubMed

    Yoon, Paul K; Zihajehzadeh, Shaghayegh; Bong-Soo Kang; Park, Edward J

    2015-08-01

    This paper proposes a novel indoor localization method using Bluetooth Low Energy (BLE) and an inertial measurement unit (IMU). The multipath and non-line-of-sight errors from low-power wireless localization systems commonly result in outliers, affecting the positioning accuracy. We address this problem by adaptively weighting the estimates from the IMU and BLE in our proposed cascaded Kalman filter (KF). The positioning accuracy is further improved with the Rauch-Tung-Striebel smoother. The performance of the proposed algorithm is compared against that of the standard KF experimentally. The results show that the proposed algorithm can maintain high accuracy when tracking the sensor's position in the presence of outliers.
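    A minimal sketch of the outlier-handling idea: a 1-D Kalman filter whose measurement noise is inflated whenever the innovation is implausibly large, so a multipath/NLOS spike barely moves the estimate. This is a generic adaptive-gating scheme, not the paper's cascaded BLE/IMU filter, and all constants below are assumptions:

```python
import random

def adaptive_kf(zs, q=0.01, r=1.0, gate=3.0):
    """1-D random-walk Kalman filter; the measurement variance is
    inflated when the innovation exceeds `gate` sigma, so outliers
    (e.g. multipath/NLOS spikes) receive a tiny gain."""
    x, p = zs[0], 1.0
    estimates = []
    for z in zs:
        p += q                            # predict under random-walk model
        nu = z - x                        # innovation
        s = p + r                         # nominal innovation variance
        r_eff = r * nu * nu / (gate * gate * s) if nu * nu > gate * gate * s else r
        k = p / (p + r_eff)               # down-weighted gain for outliers
        x += k * nu
        p *= 1.0 - k
        estimates.append(x)
    return estimates

random.seed(1)
truth = 5.0
zs = [truth + random.gauss(0.0, 0.3) for _ in range(50)]
zs[25] = 50.0                             # one gross NLOS outlier
est = adaptive_kf(zs)
print(abs(est[-1] - truth) < 0.5)         # True: the outlier is largely rejected
```

    A standard KF with fixed r would let the 50 m spike drag the estimate far off; the adaptive variance keeps the gain near zero for that sample.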

  3. Effect of anisoplanatism on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor

    NASA Astrophysics Data System (ADS)

    Woeger, Friedrich; Rimmele, Thomas

    2009-10-01

    We analyze the effect of anisoplanatic atmospheric turbulence on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor (HSWFS). We have numerically simulated an extended-source HSWFS, using a scene of the solar surface that is imaged through anisoplanatic atmospheric turbulence and imaging optics. Solar extended-source HSWFSs often use cross-correlation algorithms in combination with subpixel shift finding algorithms to estimate the wavefront gradient, two of which were tested for their effect on the measurement accuracy. We find that the measurement error of an extended-source HSWFS is governed mainly by the optical geometry of the HSWFS, the employed subpixel finding algorithm, and phase anisoplanatism. Our results show that effects of scintillation anisoplanatism are negligible when cross-correlation algorithms are used.
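    The cross-correlation plus subpixel-shift step such sensors rely on can be sketched as follows, using a three-point parabolic peak fit, one common subpixel finding strategy (the signals here are synthetic, not simulated solar granulation):

```python
import math

def xcorr_shift(ref, img, max_lag=3):
    """Circular cross-correlation peak with three-point parabolic
    subpixel refinement around the integer maximum."""
    n = len(ref)
    corr = {s: sum(ref[i] * img[(i + s) % n] for i in range(n))
            for s in range(-max_lag - 1, max_lag + 2)}
    best = max(range(-max_lag, max_lag + 1), key=lambda s: corr[s])
    cm, c0, cp = corr[best - 1], corr[best], corr[best + 1]
    denom = cm - 2.0 * c0 + cp
    frac = 0.5 * (cm - cp) / denom if denom != 0 else 0.0
    return best + frac

ref = [math.exp(-((i - 8) / 2.0) ** 2) for i in range(16)]
img = [ref[(i - 2) % 16] for i in range(16)]     # ref shifted right by 2
print(xcorr_shift(ref, img))                     # close to 2.0
```

    The parabolic vertex formula recovers fractional shifts between sample points; the HSWFS gradient estimate is built from such shifts per subaperture.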

  4. The Effect of Contraceptive Knowledge Source upon Knowledge Accuracy and Contraceptive Behavior.

    ERIC Educational Resources Information Center

    Pope, A. J.; And Others

    1985-01-01

    The purpose of this investigation was to determine the relationship of the source of contraceptive knowledge to contraceptive knowledge accuracy and contraceptive behavior of college freshmen. Results and implications for health educators are discussed. (MT)

  5. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing; these are to minimize contributions from directions other than the look direction and minimize the width of the main lobe. To tackle this problem a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated based on spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
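    The fixed near-field focusing baseline can be sketched as delay-and-sum weighting with spherical-wave array manifold vectors, phase-conjugated at the focus point. Geometry and wavelength below are illustrative assumptions; the adaptive procedures would replace these fixed weights with data-dependent ones:

```python
import cmath
import math

def focus_output(mics, src, focus, k):
    """Delay-and-sum output focused at `focus`, for a monopole source
    actually at `src`; weights phase-conjugate the spherical manifold."""
    total = 0j
    for m in mics:
        r_s = math.dist(m, src)
        received = cmath.exp(-1j * k * r_s) / r_s      # spherical wave at mic
        r_f = math.dist(m, focus)
        weight = r_f * cmath.exp(1j * k * r_f)         # conjugate manifold
        total += weight * received
    return abs(total)

mics = [(i * 0.05, 0.0) for i in range(16)]            # 16-mic line, 5 cm pitch
src = (0.4, 0.5)
k = 2 * math.pi / 0.05                                 # 5 cm wavelength
on_focus = focus_output(mics, src, src, k)             # 16.0: fully coherent
off_focus = focus_output(mics, src, (0.1, 0.5), k)
print(on_focus > off_focus)                            # True
```

    When the focus coincides with the source, every weighted term equals one and the 16 channels add coherently; away from the source the phases scatter and the output drops, which is what produces the source image.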

  6. Alzheimer's disease can spare local metacognition despite global anosognosia: revisiting the confidence-accuracy relationship in episodic memory.

    PubMed

    Gallo, David A; Cramer, Stefanie J; Wong, Jessica T; Bennett, David A

    2012-07-01

    Alzheimer's disease (AD) can impair metacognition in addition to more basic cognitive functions like memory. However, while global metacognitive inaccuracies are well documented (i.e., low deficit awareness, or anosognosia), the evidence is mixed regarding the effects of AD on local or task-based metacognitive judgments. Here we investigated local metacognition with respect to the confidence-accuracy relationship in episodic memory (i.e., metamemory). AD and control participants studied pictures of common objects and their verbal labels, and then took forced-choice picture recollection tests using the verbal labels as retrieval cues. We found that item-based confidence judgments discriminated between accurate and inaccurate recollection responses in both groups, implicating relatively spared metamemory in AD. By contrast, there was evidence for global metacognitive deficiencies, as AD participants underestimated the severity of their everyday problems compared to an informant's assessment. Within the AD group, individual differences in global metacognition were related to recollection accuracy, and global metacognition for everyday memory problems was related to task-based metacognitive accuracy. These findings suggest that AD can spare the confidence-accuracy relationship in recollection tasks, and that global and local metacognition measures tap overlapping neuropsychological processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. The Accuracy of Perceptions of Education Finance Information: How Well Local Leaders Understand Local Communities

    ERIC Educational Resources Information Center

    De Luca, Barbara M.; Hinshaw, Steven A.; Ziswiler, Korrin

    2013-01-01

    The purpose for this research was to determine the accuracy of the perceptions of school administrators and community leaders regarding education finance information. School administrators and community leaders in this research project included members of three groups: public school administrators, other public school leaders, and leaders in the…

  8. STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.

    PubMed

    Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F

    2015-12-01

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.

  9. A distributed transmit beamforming synchronization strategy for multi-element radar systems

    NASA Astrophysics Data System (ADS)

    Xiao, Manlin; Li, Xingwen; Xu, Jikang

    2017-02-01

    Distributed transmit beamforming has recently been discussed as an energy-effective technique in wireless communication systems. A common ground of various techniques is that the destination node transmits a beacon signal or feedback to assist source nodes to synchronize signals. However, this approach is not appropriate for a radar system since the destination is a non-cooperative target of an unknown location. In our paper, we propose a novel synchronization strategy for a distributed multiple-element beamforming radar system. Source nodes estimate parameters of beacon signals transmitted from others to get their local synchronization information. The channel information of the phase propagation delay is transmitted to nodes via the reflected beacon signals as well. Next, each node generates appropriate parameters to form a beamforming signal at the target. Transmit beamforming signals of all nodes will combine coherently at the target compensating for different propagation delay. We analyse the influence of the local oscillator accuracy and the parameter estimation errors on the performance of the proposed synchronization scheme. The results of numerical simulations illustrate that this synchronization scheme is effective to enable the transmit beamforming in a distributed multi-element radar system.
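    The core requirement, that the transmit signals combine coherently at the target after compensating for different propagation delays, can be sketched in a narrowband phase model (node-target ranges and wavelength below are hypothetical):

```python
import cmath
import math

def field_at_target(distances, wavelength, compensate=True):
    """|sum of unit-amplitude narrowband signals| at the target, with or
    without each node pre-rotating its phase by +k*d to cancel the
    propagation phase delay -k*d."""
    k = 2 * math.pi / wavelength
    total = 0j
    for d in distances:
        pre = cmath.exp(1j * k * d) if compensate else 1.0
        total += pre * cmath.exp(-1j * k * d)
    return abs(total)

dists = [103.17, 98.42, 110.05, 95.66]     # hypothetical node-target ranges (m)
aligned = field_at_target(dists, 0.03)     # 4.0: fully coherent combining
unaligned = field_at_target(dists, 0.03, compensate=False)
print(aligned > unaligned)                 # True
```

    With perfect compensation the N nodes yield an N-fold field gain; oscillator drift and range-estimation error perturb the pre-rotation phases, which is exactly the sensitivity the paper analyses.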

  10. Experimental Evaluation of the High-Speed Motion Vector Measurement by Combining Synthetic Aperture Array Processing with Constrained Least Square Method

    NASA Astrophysics Data System (ADS)

    Yokoyama, Ryouta; Yagi, Shin-ichi; Tamura, Kiyoshi; Sato, Masakazu

    2009-07-01

    Ultrahigh-speed dynamic elastography has promising potential for the clinical diagnosis and therapy of living soft tissues. In order to realize ultrahigh-speed motion tracking at over a thousand frames per second, synthetic aperture (SA) array signal processing technology must be introduced. Furthermore, the overall system performance should withstand fine quantitative evaluation of the accuracy and variance of echo phase changes distributed across a tissue medium. To evaluate the spatial distribution of local phase changes caused by pulsed excitation of a tissue phantom, the proposed SA system was investigated utilizing different virtual point sources, generated by an array transducer, to probe each component of the local tissue displacement vectors. The final results derived from the cross-correlation method (CCM) showed almost the same performance as the constrained least square method (LSM) extended to successive echo frames. These frames were reconstructed by SA processing after real-time acquisition triggered by the pulsed irradiation from a point source. The continuous behavior of the spatial motion vectors demonstrated the dynamic generation and traveling of the pulsed shear wave imaged at a thousand frames per second.

  11. Effects of land cover, topography, and built structure on seasonal water quality at multiple spatial scales.

    PubMed

    Pratt, Bethany; Chang, Heejun

    2012-03-30

    The relationship among land cover, topography, built structure and stream water quality in the Portland Metro region of Oregon and Clark County, Washington areas, USA, is analyzed using ordinary least squares (OLS) and geographically weighted (GWR) multiple regression models. Two scales of analysis, a sectional watershed and a buffer, offered a local and a global investigation of the sources of stream pollutants. Model accuracy, measured by R(2) values, fluctuated according to the scale, season, and regression method used. While most wet season water quality parameters are associated with urban land covers, most dry season water quality parameters are related to topographic features such as elevation and slope. GWR models, which take into consideration local relations of spatial autocorrelation, had stronger results than OLS regression models. In the multiple regression models, sectioned watershed results were consistently better than the sectioned buffer results, except for dry season pH and stream temperature parameters. This suggests that while riparian land cover does have an effect on water quality, a wider contributing area needs to be included in order to account for distant sources of pollutants. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Localization and Quantification of Trace-gas Fugitive Emissions Using a Portable Optical Spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Eric; Teng, Chu; van Kessel, Theodore

    We present a portable optical spectrometer for fugitive emissions monitoring of methane (CH4). The sensor operation is based on tunable diode laser absorption spectroscopy (TDLAS), using a 5 cm open path design, and targets the 2ν3 R(4) CH4 transition at 6057.1 cm-1 (1651 nm) to avoid cross-talk with common interfering atmospheric constituents. Sensitivity analysis indicates a normalized precision of 2.0 ppmv∙Hz-1/2, corresponding to a noise-equivalent absorption (NEA) of 4.4×10-6 Hz-1/2 and minimum detectable absorption (MDA) coefficient of αmin = 8.8×10-7 cm-1∙Hz-1/2. Our TDLAS sensor is deployed at the Methane Emissions Technology Evaluation Center (METEC) at Colorado State University (CSU) for initial demonstration of single-sensor based source localization and quantification of CH4 fugitive emissions. The TDLAS sensor is concurrently deployed with a customized chemi-resistive metal-oxide (MOX) sensor for accuracy benchmarking, demonstrating good visual correlation of the concentration time-series. Initial angle-of-arrival (AOA) results will be shown, and development towards source magnitude estimation will be described.
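    The quoted sensitivity figures are mutually consistent through the open-path length: dividing the noise-equivalent absorption by the 5 cm path gives the minimum detectable absorption coefficient.

```python
# Consistency check on the quoted figures: alpha_min = NEA / L for a
# single-pass open path of length L.
path_cm = 5.0
nea = 4.4e-6                      # noise-equivalent absorption, Hz^-1/2
alpha_min = nea / path_cm         # cm^-1 Hz^-1/2
print(f"{alpha_min:.1e}")         # 8.8e-07, matching the quoted MDA
```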

  13. Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum

    NASA Astrophysics Data System (ADS)

    Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-11-01

    The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and for three-dimensional Earth structure. Although similar methods have been proposed previously, they have not yet been applied to data to the best of our knowledge. We apply this method to image the seasonally varying sources of Earth's hum during Northern and Southern Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during Northern Hemisphere winter, as well as South Pacific coasts and several distinct locations in the Southern Ocean in Southern Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.

  14. Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).

    PubMed

    Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M

    2013-07-01

    The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.
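    The basic bookkeeping behind a source level estimate is to add the transmission loss back onto the received level. The sketch below assumes textbook spherical spreading (TL = 20 log10 r) and a hypothetical received level; the study's strength is precisely that it measured TL for the site and localized the whales with a hydrophone array rather than assuming a spreading law:

```python
import math

def source_level(received_db, range_m, spreading=20.0):
    """SL (dB re 1 uPa @ 1 m) = received level + N*log10(r) spreading loss."""
    return received_db + spreading * math.log10(range_m)

# A hypothetical 98 dB re 1 uPa received at 1 km under spherical spreading:
print(round(source_level(98.0, 1000.0), 1))   # 158.0, the reported median vocal SL
```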

  15. Classification of driver fatigue in an electroencephalography-based countermeasure system with source separation module.

    PubMed

    Rifai Chai; Naik, Ganesh R; Tran, Yvonne; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T

    2015-08-01

    An electroencephalography (EEG)-based countermeasure device could be used for fatigue detection during driving. This paper explores the classification of fatigue and alert states using power spectral density (PSD) as a feature extractor and a fuzzy swarm-based artificial neural network (ANN) as a classifier. Independent component analysis by entropy rate bound minimization (ICA-ERBM) is investigated as a novel source separation technique for fatigue classification using EEG analysis. A comparison of the classification accuracy with and without the source separator is presented. Classification performance based on 43 participants without the inclusion of the source separator resulted in an overall sensitivity of 71.67%, a specificity of 75.63% and an accuracy of 73.65%. However, these results were improved after the inclusion of a source separator module, resulting in an overall sensitivity of 78.16%, a specificity of 79.60% and an accuracy of 78.88% (p < 0.05).

  16. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available from 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
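    Model selection with BIC, as used here, penalizes extra parameters against the gain in log-likelihood. A toy comparison (log-likelihoods and parameter counts are invented, not the paper's values):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of two Galactic dynamical models to 500 sources:
spiral = bic(log_likelihood=-1200.0, n_params=8, n_obs=500)
bar = bic(log_likelihood=-1195.0, n_params=14, n_obs=500)
print(spiral < bar)   # True: the extra parameters don't pay for the small gain
```

    The ln(n) penalty grows with sample size, so with 500 sources a more complex model must improve the fit substantially to be preferred.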

  17. A new method of measuring gravitational acceleration in an undergraduate laboratory program

    NASA Astrophysics Data System (ADS)

    Wang, Qiaochu; Wang, Chang; Xiao, Yunhuan; Schulte, Jurgen; Shi, Qingfan

    2018-01-01

    This paper presents a high accuracy method to measure gravitational acceleration in an undergraduate laboratory program. The experiment is based on water in a cylindrical vessel rotating about its vertical axis at a constant speed. The water surface forms a paraboloid whose focal length is related to rotational period and gravitational acceleration. This experimental setup avoids classical source errors in determining the local value of gravitational acceleration, so prevalent in the common simple pendulum and inclined plane experiments. The presented method combines multiple physics concepts such as kinematics, classical mechanics and geometric optics, offering the opportunity for lateral as well as project-based learning.
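    The relation the experiment exploits: a liquid rotating at angular rate ω forms the paraboloid z = ω²r²/(2g), which matches z = r²/(4f), so the focal length is f = g/(2ω²) and hence g = 2fω² = 8π²f/T². A worked sketch with illustrative numbers:

```python
import math

def g_from_focal_length(f_m, period_s):
    """g = 2*f*omega^2: the rotating surface z = omega^2 r^2/(2g) is a
    parabola z = r^2/(4f), giving f = g/(2 omega^2)."""
    omega = 2.0 * math.pi / period_s
    return 2.0 * f_m * omega ** 2

# A hypothetical surface with a 12.4 cm focal length at one turn per second:
print(round(g_from_focal_length(0.124, 1.0), 2))   # 9.79 m/s^2
```

    Measuring f optically (e.g. by locating the focus of the reflecting surface) and T with a timer then yields g without the small-angle and friction assumptions of a pendulum.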

  18. RSS Fingerprint Based Indoor Localization Using Sparse Representation with Spatio-Temporal Constraint

    PubMed Central

    Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun

    2016-01-01

    The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
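    Fingerprint methods build on matching a measured RSS vector against a surveyed radio map; the paper's sparse-representation model with spatio-temporal constraints generalizes this baseline. A minimal nearest-fingerprint sketch with invented dBm values:

```python
import math

def nearest_fingerprint(rss, radio_map):
    """Return the surveyed location whose stored RSS vector is closest
    (Euclidean) to the measured vector."""
    return min(radio_map, key=lambda loc: math.dist(rss, radio_map[loc]))

# RSS (dBm) from three access points at three surveyed points (invented):
radio_map = {"lobby": [-40.0, -70.0, -80.0],
             "hall": [-65.0, -45.0, -75.0],
             "lab": [-80.0, -72.0, -42.0]}
print(nearest_fingerprint([-63.0, -48.0, -77.0], radio_map))   # hall
```

    The sparse-representation formulation instead expresses the measurement as a sparse combination of fingerprint vectors, which is what lets spatial correlation and temporal continuity be imposed as constraints.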

  19. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  20. Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures

    NASA Astrophysics Data System (ADS)

    Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar

    2017-02-01

    A common method to assess the performance of (super resolution) microscopes is to use the localization precision of emitters as an estimate for the achieved resolution. Naturally, this is widely used in super resolution methods based on single molecule stochastic switching. This concept suffers from the fact that it is hard to calibrate measures against a real sample (a phantom), because true absolute positions of emitters are almost always unknown. For this reason, resolution estimates are potentially biased in an image since one is blind to true position accuracy, i.e. deviation in position measurement from true positions. We have solved this issue by imaging nanorods fabricated with DNA-origami. The nanorods used are designed to have emitters attached at each end in a well-defined and highly conserved distance. These structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super resolution microscope methods STED and STORM.
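    The precision/accuracy distinction the DNA-origami nanorods make measurable can be sketched numerically: repeated localizations may cluster tightly (high precision) around a position that is systematically offset from the known truth (poor accuracy). The values below are illustrative, not measured:

```python
import statistics

true_x = 10.0                                # designed emitter position (nm)
fits = [13.1, 12.9, 13.3, 12.8, 13.0, 13.2]  # repeated localizations (nm)

precision = statistics.stdev(fits)           # spread of the fits
accuracy = statistics.mean(fits) - true_x    # systematic offset from truth
print(precision < 0.5 and accuracy > 2.5)    # True: precise yet inaccurate
```

    Without a phantom of known geometry, only the first number is observable, which is why precision alone can overstate the achieved resolution.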

  1. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot.
Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795

  2. Accuracy of off-line bioluminescence imaging to localize targets in preclinical radiation research.

    PubMed

    Tuli, Richard; Armour, Michael; Surmak, Andrew; Reyes, Juvenal; Iordachita, Iulian; Patterson, Michael; Wong, John

    2013-04-01

    In this study, we investigated the accuracy of using off-line bioluminescence imaging (BLI) and tomography (BLT) to guide irradiation of small soft tissue targets on a small animal radiation research platform (SARRP) with on-board cone beam CT (CBCT) capability. A small glass bulb containing BL cells was implanted as a BL source in the abdomen of 11 mouse carcasses. Bioluminescence imaging and tomography were acquired for each carcass. Six carcasses were set up visually without immobilization and 5 were restrained in position with tape. All carcasses were set up in treatment position on the SARRP where the centroid position of the bulb on CBCT was taken as "truth". In the 2D visual setup, the carcass was set up by aligning the point of brightest luminescence with the vertical beam axis. In the CBCT assisted setup, the pose of the carcass on CBCT was aligned with that on the 2D BL image for setup. For both 2D setup methods, the offset of the bulb centroid on CBCT from the vertical beam axis was measured. In the BLT-CBCT fusion method, the 3D torso on BLT and CBCT was registered and the 3D offset of the respective source centroids was calculated. The setup results were independent of the carcass being immobilized or not due to the onset of rigor mortis. The 2D offset of the perceived BL source position from the CBCT bulb position was 2.3 mm ± 1.3 mm. The 3D offset between BLT and CBCT was 1.5 mm ± 0.9 mm. Given the rigidity of the carcasses, the setup results represent the best that can be achieved with off-line 2D BLI and 3D BLT. The setup uncertainty would require the use of an undesirably large margin of 4-5 mm. The results compel the implementation of on-board BLT capability on the SARRP to eliminate setup error and to improve BLT accuracy.
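    The reported 2D/3D offsets are Euclidean distances between the perceived source position and the CBCT bulb centroid; a sketch with hypothetical coordinates:

```python
import math

# Hypothetical source centroids (mm) from BLT and from CBCT ("truth"):
blt_centroid = (12.4, -3.1, 25.0)
cbct_centroid = (13.3, -2.4, 24.0)
offset_3d = math.dist(blt_centroid, cbct_centroid)
print(round(offset_3d, 2))   # 1.52 mm, of the order the study reports
```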

  3. Accuracy of Off-Line Bioluminescence Imaging to Localize Targets in Preclinical Radiation Research

    PubMed Central

    Tuli, Richard; Armour, Michael; Surmak, Andrew; Reyes, Juvenal; Iordachita, Iulian; Patterson, Michael; Wong, John

    2013-01-01

In this study, we investigated the accuracy of using off-line bioluminescence imaging (BLI) and tomography (BLT) to guide irradiation of small soft tissue targets on a small animal radiation research platform (SARRP) with on-board cone beam CT (CBCT) capability. A small glass bulb containing BL cells was implanted as a BL source in the abdomen of 11 mouse carcasses. Bioluminescence imaging and tomography were acquired for each carcass. Six carcasses were set up visually without immobilization and 5 were restrained in position with tape. All carcasses were set up in treatment position on the SARRP, where the centroid position of the bulb on CBCT was taken as “truth”. In the 2D visual setup, the carcass was set up by aligning the point of brightest luminescence with the vertical beam axis. In the CBCT assisted setup, the pose of the carcass on CBCT was aligned with that on the 2D BL image for setup. For both 2D setup methods, the offset of the bulb centroid on CBCT from the vertical beam axis was measured. In the BLT-CBCT fusion method, the 3D torso on BLT and CBCT was registered and the 3D offset of the respective source centroids was calculated. The setup results were independent of whether the carcass was immobilized, due to the onset of rigor mortis. The 2D offset of the perceived BL source position from the CBCT bulb position was 2.3 mm ± 1.3 mm. The 3D offset between BLT and CBCT was 1.5 mm ± 0.9 mm. Given the rigidity of the carcasses, the setup results represent the best that can be achieved with off-line 2D BLI and 3D BLT. The setup uncertainty would require the use of an undesirably large margin of 4–5 mm. The results compel the implementation of on-board BLT capability on the SARRP to eliminate setup error and to improve BLT accuracy. PMID:23578189

  4. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

In this paper, a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
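The coarse localization phase described above can be illustrated with a minimal sketch. Everything below — the grid size, corner sensor positions, log-distance path-loss constants, the single noise-free target, and the one-iteration orthogonal-matching-pursuit recovery — is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def rss_dictionary(grid, sensors, p0=-40.0, n_pl=3.0):
    """Column j: modelled RSS (dBm) at every sensor for a target in grid cell j
    (log-distance path-loss model; p0 and n_pl are illustrative constants)."""
    d = np.linalg.norm(sensors[:, None, :] - grid[None, :, :], axis=2)
    return p0 - 10.0 * n_pl * np.log10(np.maximum(d, 0.1))

def omp(A, y, k=1):
    """Orthogonal matching pursuit on column-centred, unit-norm atoms."""
    Ac = A - A.mean(axis=0)
    Ac = Ac / np.linalg.norm(Ac, axis=0)
    yc = y - y.mean()
    r, support = yc.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Ac.T @ r))))
        coef, *_ = np.linalg.lstsq(Ac[:, support], yc, rcond=None)
        r = yc - Ac[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Coarse phase: 10 x 10 grid over a 10 m x 10 m area, four corner sensors,
# one target sitting exactly in grid cell 37 (noise-free for the sketch).
xs = np.linspace(0.5, 9.5, 10)
grid = np.array([(gx, gy) for gx in xs for gy in xs])
sensors = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
A = rss_dictionary(grid, sensors)
true_cell = 37
y = A[:, true_cell]
x_hat = omp(A, y, k=1)
# Threshold-based detection of candidate target grids.
candidates = np.flatnonzero(np.abs(x_hat) > 0.5 * np.abs(x_hat).max())
```

The fine phase — partitioning each candidate cell, refining by least squares, and re-estimating the recovery vector as the measurement matrix is updated — is omitted here.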

  5. Accuracy of colonoscopy in localizing colonic cancer.

    PubMed

    Stanciu, C; Trifan, Anca; Khder, Saad Alla

    2007-01-01

It is important to establish the precise localization of colonic cancer preoperatively; while colonoscopy is regarded as the diagnostic gold standard for colorectal cancer, its ability to localize the tumor is less reliable. The aim was to define the accuracy of colonoscopy in identifying the location of colonic cancer. All patients who had colorectal cancer diagnosed by colonoscopy at the Institute of Gastroenterology and Hepatology, Iaşi and subsequently received a surgical intervention at three teaching hospitals in Iaşi, between January 2001 and December 2005, were included in this study. Endoscopic records and operative notes were carefully reviewed, and tumor localization was recorded. There were 161 patients (89 men, 72 women, aged 61.3 +/- 12.8 years) who underwent conventional surgery for colon cancer detected by colonoscopy during the study period. Twenty-two patients (13.66%) had erroneous colonoscopic localization of the tumors. The overall accuracy of preoperative colonoscopic localization was 87.58%. Colonoscopy is an accurate, reliable method for locating colon cancer, although additional techniques (i.e., endoscopic tattooing) should be performed at least for small lesions.

  6. Validation of luminescent source reconstruction using spectrally resolved bioluminescence images

    NASA Astrophysics Data System (ADS)

    Virostko, John M.; Powers, Alvin C.; Jansen, E. D.

    2008-02-01

    This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.

  7. On the angular error of intensity vector based direction of arrival estimation in reverberant sound fields.

    PubMed

    Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon

    2010-10-01

An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
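A worked example of the basic estimator discussed here: the DOA is read off the time-averaged acoustic intensity vector, the product of pressure and particle velocity. The plane-wave signal, sampling rate, and impedance value below are illustrative assumptions, and no noise or reverberation is modelled.

```python
import numpy as np

rho_c = 415.0                        # approx. characteristic impedance of air (Pa*s/m)
fs, f = 16000, 1000                  # sample rate and tone frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)
az_true = np.deg2rad(40.0)           # source azimuth
u = np.array([np.cos(az_true), np.sin(az_true)])  # unit vector toward the source
p = np.cos(2 * np.pi * f * t)        # pressure of the incident plane wave
v = -np.outer(u, p) / rho_c          # particle velocity points away from the source
I = -(p * v).mean(axis=1)            # time-averaged intensity, flipped toward the source
az_est = np.arctan2(I[1], I[0])      # DOA estimate from the intensity vector
```

In an ideal free field the estimate is exact; the paper's contribution is quantifying how reverberant reflections, which add intensity components from other directions, bias this estimate.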

  8. Comparative study of landslides susceptibility mapping methods: Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN)

    NASA Astrophysics Data System (ADS)

    Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan

    2018-02-01

As different approaches produce different results, it is crucial to determine which methods are accurate in order to perform analysis of the event. The aim of this research is to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining susceptible zones of landslide hazard. The study is based on data obtained from various sources such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggest that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably owing to its ability to learn from the environment and thus portray realistic and accurate results.

  9. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.

  10. Relationship between strong-motion array parameters and the accuracy of source inversion and physical waves

    USGS Publications Warehouse

    Iida, M.; Miyatake, T.; Shimazaki, K.

    1990-01-01

    We develop general rules for a strong-motion array layout on the basis of our method of applying a prediction analysis to a source inversion scheme. A systematic analysis is done to obtain a relationship between fault-array parameters and the accuracy of a source inversion. Our study of the effects of various physical waves indicates that surface waves at distant stations contribute significantly to the inversion accuracy for the inclined fault plane, whereas only far-field body waves at both small and large distances contribute to the inversion accuracy for the vertical fault, which produces more phase interference. These observations imply the adequacy of the half-space approximation used throughout our present study and suggest rules for actual array designs. -from Authors

  11. Investigations of interference between electromagnetic transponders and wireless MOSFET dosimeters: A phantom study

    PubMed Central

    Su, Zhong; Zhang, Lisha; Ramakrishnan, V.; Hagan, Michael; Anscher, Mitchell

    2011-01-01

Purpose: To evaluate both the Calypso System’s (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of wireless metal–oxide–semiconductor field-effect transistor (MOSFET) dosimeters of a dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC) and the dosimeters’ reading accuracy in the presence of wireless electromagnetic transponders inside a phantom. Methods: A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters and were arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with/without presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with/without presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate transponder impact on dose-reading accuracy after dose-fading effect was removed by a second-order polynomial fit. Results: Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets. However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which was significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. Both orthogonal and parallel configurations had difference of polynomial-fit dose to measured dose values within 1.75%. Conclusions: The phantom study indicated that the Calypso System’s localization accuracy was not affected clinically due to the presence of DVS wireless MOSFET dosimeters and the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from improved accuracy of radiotherapy treatments offered by conjunctional use of the two systems. PMID:21776780

  12. Local classifier weighting by quadratic programming.

    PubMed

    Cevikalp, Hakan; Polikar, Robi

    2008-10-01

    It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- have been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
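A sketch of the local-accuracy weighting idea described above: estimate each classifier's accuracy on the query's nearest validation neighbors and derive non-negative weights from those estimates. For brevity this normalizes the local accuracies directly rather than solving the brief's convex quadratic program; the data, names, and the k value below are illustrative, not from the paper.

```python
import numpy as np

def local_weights(query, X_val, correct, k=5):
    """Weight each classifier by its accuracy on the query's k nearest
    validation samples. `correct` is (n_val, n_clf): 1.0 where classifier j
    was right on validation sample i. A simple stand-in for the QP solution;
    both yield non-negative weights favoring locally accurate classifiers."""
    nn = np.argsort(np.linalg.norm(X_val - query, axis=1))[:k]
    acc = correct[nn].mean(axis=0)       # local accuracy per classifier
    if acc.sum() == 0:                   # no local evidence: fall back to uniform
        return np.full(correct.shape[1], 1.0 / correct.shape[1])
    return acc / acc.sum()

# Toy set-up: classifier 0 is right on the left half of the line,
# classifier 1 on the right half.
X_val = np.linspace(-1, 1, 20).reshape(-1, 1)
correct = np.column_stack([X_val[:, 0] < 0, X_val[:, 0] >= 0]).astype(float)
w = local_weights(np.array([-0.8]), X_val, correct, k=5)
```

For a query deep in the left half, all five neighbors favor classifier 0, so it receives the full weight; the QP formulation additionally optimizes the weights jointly against a chosen objective rather than using the raw accuracy ratios.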

  13. Developing Local Oral Reading Fluency Cut Scores for Predicting High-Stakes Test Performance

    ERIC Educational Resources Information Center

    Grapin, Sally L.; Kranzler, John H.; Waldron, Nancy; Joyce-Beaulieu, Diana; Algina, James

    2017-01-01

    This study evaluated the classification accuracy of a second grade oral reading fluency curriculum-based measure (R-CBM) in predicting third grade state test performance. It also compared the long-term classification accuracy of local and publisher-recommended R-CBM cut scores. Participants were 266 students who were divided into a calibration…

  14. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
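The supervised training step can be sketched with scikit-learn's RandomForestClassifier. The synthetic features below merely stand in for the real catalog features (e.g. hardness ratios and variability measures, which are not reproduced here), and the sample sizes and accuracy threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a manually classified training sample:
# 3 source classes, 8 features, 5 of them informative.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Train the Random Forest and measure held-out classification accuracy,
# analogous to validating against the manually classified 2XMMi-DR2 sample.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Outlier flagging, as used for the three unusual sources, would be a separate step (e.g. low maximum class probability from `clf.predict_proba`); it is not shown here.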

  15. Photonics walking up a human hair

    NASA Astrophysics Data System (ADS)

    Zeng, Hao; Parmeggiani, Camilla; Martella, Daniele; Wasylczyk, Piotr; Burresi, Matteo; Wiersma, Diederik S.

    2016-03-01

While animals have access to sugars as an energy source, this option is generally not available to artificial machines and robots. Energy delivery is thus the bottleneck for creating independent robots and machines, especially on micro- and nanometer length scales. We have found a way to produce polymeric nanostructures with local control over the molecular alignment, which allowed us to solve the above issue. By using a combination of polymers, of which part is optically sensitive, we can create complex functional structures with nanometer accuracy, responsive to light. In particular, this allowed us to realize a structure that can move autonomously over surfaces (it can "walk") using the environmental light as its energy source. The robot is only 60 μm in total length, making it smaller than any known terrestrial walking species, and it is capable of random, directional walking and rotating on different dry surfaces.

  16. Proof of age required--estimating age in adults without birth records.

    PubMed

    Phillips, Christine; Narayanasamy, Shanti

    2010-07-01

    Many adults from refugee source countries do not have documents of birth, either because they have been lost in flight, or because the civil infrastructure is too fragile to support routine recording of birth. In Western countries, date of birth is used as a basic identifier, and access to services and support tends to be age regulated. Doctors are not infrequently asked to write formal reports estimating the true age of adult refugees; however, there are no existing guidelines to assist in this task. To provide an overview of methods to estimate age in living adults, and outline recommendations for best practice. Age should be estimated through physical examination; life history, matching local or national events with personal milestones; and existing nonformal documents. Accuracy of age estimation should be subject to three tests: biological plausibility, historical plausibility, and corroboration from reputable sources.

  17. The effects of inter-cavity separation on optical coupling in dielectric bispheres.

    PubMed

    Ashili, Shashanka P; Astratov, Vasily N; Sykes, E Charles H

    2006-10-02

The optical coupling between two size-mismatched spheres was studied by using one sphere as a local source of light with whispering gallery modes (WGMs) and detecting the intensity of the light scattered by a second sphere playing the part of a receiver of electromagnetic energy. We developed techniques to control inter-cavity gap sizes between microspheres with ~30 nm accuracy. We demonstrate high efficiencies (up to 0.2-0.3) of coupling between two separated cavities with strongly detuned eigenstates. At small separations (<1 μm) between the spheres, the mechanism of coupling is interpreted in terms of the Fano resonance between a discrete level (true WGMs excited in a source sphere) and a continuum of "quasi"-WGMs with distorted shape that can be induced in the receiving sphere. At larger separations, the spectra detected from the receiving sphere originate from scattering of the radiative modes.

  18. Hg-201(+) Co-Magnetometer for Hg-199(+) Trapped Ion Space Atomic Clocks

    NASA Technical Reports Server (NTRS)

    Burt, Eric A. (Inventor); Taghavi, Shervin (Inventor); Tjoelker, Robert L. (Inventor)

    2011-01-01

    Local magnetic field strength in a trapped ion atomic clock is measured in real time, with high accuracy and without degrading clock performance, and the measurement is used to compensate for ambient magnetic field perturbations. First and second isotopes of an element are co-located within the linear ion trap. The first isotope has a resonant microwave transition between two hyperfine energy states, and the second isotope has a resonant Zeeman transition. Optical sources emit ultraviolet light that optically pump both isotopes. A microwave radiation source simultaneously emits microwave fields resonant with the first isotope's clock transition and the second isotope's Zeeman transition, and an optical detector measures the fluorescence from optically pumping both isotopes. The second isotope's Zeeman transition provides the measure of magnetic field strength, and the measurement is used to compensate the first isotope's clock transition or to adjust the applied C-field to reduce the effects of ambient magnetic field perturbations.

  19. Swept-frequency feedback interferometry using terahertz frequency QCLs: a method for imaging and materials analysis.

    PubMed

    Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles

    2013-09-23

    The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.

  20. An FBG acoustic emission source locating system based on PHAT and GA

    NASA Astrophysics Data System (ADS)

    Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun

    2017-09-01

Acoustic emission source location is important for monitoring the health of complex engineering structures and large mechanical equipment, helping to ensure their continuous, reliable operation. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT)-weighted generalized cross-correlation provides the arrival-time differences required for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. Twenty points were tested on a marble plate surface, and the results show that the absolute locating error is within 10 mm, demonstrating the accuracy of this locating method.
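The PHAT-weighted generalized cross-correlation step can be sketched as follows. The signals, sampling rate, and delay are synthetic stand-ins, and the GA solution of the nonlinear locating equations (which consumes these time differences) is not shown.

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Delay of y relative to x (seconds) via PHAT-weighted generalized
    cross-correlation; positive when y lags x."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = Y * np.conj(X)
    R /= np.maximum(np.abs(R), 1e-12)        # PHAT: keep phase, discard magnitude
    cc = np.fft.irfft(R, n)
    max_lag = n // 2
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))  # lags -max_lag..max_lag
    return (np.argmax(np.abs(cc)) - max_lag) / fs

fs, d = 8000, 25
rng = np.random.default_rng(0)
x = rng.standard_normal(2048)
y = np.concatenate((np.zeros(d), x))[:2048]   # y = x delayed by 25 samples
tau = gcc_phat(x, y, fs)
```

The PHAT weighting whitens the cross-spectrum so the correlation peak stays sharp for broadband transients such as acoustic emission bursts; each sensor-pair delay found this way becomes one constraint in the locating equations.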

  1. Calibration Of An Active Mammosite Using A Low Activity Sr-90 Radioactive Source

    NASA Astrophysics Data System (ADS)

    Winston, Jacquelyn

    2007-03-01

    The latest involvement of the Brachytherapy research group of the medical physics program at Hampton University is in the development of a scintillating fiber based detector for the breast cancer specific Mammosite (balloon device) from Cytyc Inc. Recent data were acquired at a local hospital to evaluate the possibility of measuring the dose distribution during breast Brachytherapy cancer treatments with this device. Since sub-millimeter accuracy in position is required, precision of the device relies on the accurate calibration of the scintillating fiber element. As part of a collaboration work, data were acquired for that purpose at Hampton University and subsequently analyzed at Morgan State University. An 8 mm diameter strontium-90 radioactive field source with a low activity of 25 μCi was used along with a dedicated LabView data acquisition system. We will discuss the data collected and address some of the features of this novel system.

  2. Calibration Of An Active Mammosite Using A Low Activity Sr-90 Radioactive Source

    NASA Astrophysics Data System (ADS)

    Winston, Jacquelyn

    2006-03-01

    The latest involvement of the Brachytherapy research group of the medical physics program at Hampton University is in the development of a scintillator fiber based detector for the breast cancer specific Mammosite (balloon device) from Cytyc Inc. Recent data were acquired at a local hospital to evaluate the possibility of measuring the dose distribution during breast Brachytherapy cancer treatments with this device. Since sub-millimeter accuracy in position is required, precision of the device relies on the accurate calibration of the scintillating fiber element. As part of a collaboration work, data were acquired for that purpose at Hampton University and subsequently analyzed at Morgan State University. An 8 mm diameter strontium-90 radioactive field source with a low activity of 25 μCi was used along with a dedicated LabView data acquisition system. We will discuss the data collected and address some of the features of this novel system.

  3. Linking Deep Astrometric Standards to the ICRF

    NASA Astrophysics Data System (ADS)

    Frey, S.; Platais, I.; Fey, A. L.

    2007-07-01

The next-generation large-aperture and large field-of-view telescopes will address fundamental questions of astrophysics and cosmology, such as the nature of dark matter and dark energy. For a variety of applications, the CCD mosaic detectors in the focal plane arrays require astrometric calibration at the milli-arcsecond (mas) level. The existing optical reference frames are insufficient to support such calibrations. To address this problem, deep optical astrometric fields are being established near the Galactic plane. In order to achieve a 5-10 mas or better positional accuracy for the Deep Astrometric Standards (DAS), and to obtain absolute stellar proper motions for the study of Galactic structure, it is crucial to link these fields to the International Celestial Reference Frame (ICRF). To this end, we selected 15 candidate compact extragalactic radio sources in the Gemini-Orion-Taurus (GOT) field. These sources were observed with the European VLBI Network (EVN) at 5 GHz in phase-reference mode. The bright compact calibrator source J0603+2159 and seven other sources were detected and imaged at an angular resolution of ~1.5-8 mas. Relative astrometric positions were derived for these sources at a milli-arcsecond accuracy level. The detection of the optical counterparts of these extragalactic radio sources will allow us to establish a direct link to the ICRF locally in the GOT field.

  4. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Determining hypocentral parameters with existing algorithms is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
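A toy version of the idea can be sketched as below. Unlike the paper, this assumes a uniform half-space with straight rays (no two-point ray tracing through layers), and a simple elitist evolutionary search stands in for the GA; all station positions, velocities, population sizes, and tolerances are illustrative assumptions.

```python
import numpy as np

def travel_times(hypo, t0, stations, v=6.0):
    """Straight-ray P travel times in a uniform half-space (km, km/s, s)."""
    return t0 + np.linalg.norm(stations - hypo, axis=1) / v

def locate(stations, t_obs, lo, hi, v=6.0, pop=200, gens=120, seed=1):
    """Toy real-coded evolutionary search (elitism + shrinking Gaussian
    mutation) standing in for the paper's GA; parameters are (x, y, z, t0)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, 4))
    for g in range(gens):
        mis = np.array([np.sum((travel_times(p[:3], p[3], stations, v)
                                - t_obs) ** 2) for p in P])
        elite = P[np.argsort(mis)[:20]]             # keep the 20 best candidates
        sigma = 0.2 * (hi - lo) * 0.95 ** g         # shrinking mutation scale
        children = (elite[rng.integers(0, 20, pop - 20)]
                    + rng.normal(0.0, 1.0, (pop - 20, 4)) * sigma)
        P = np.clip(np.vstack([elite, children]), lo, hi)
    return P[0]                                     # best of last evaluated generation

# Synthetic test: six surface stations, known "true" hypocenter and origin time.
stations = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [40, 40, 0],
                     [20, 0, 0], [0, 20, 0]], dtype=float)
true = np.array([22.0, 27.0, 9.0, 0.5])             # x, y, z (km), origin time (s)
t_obs = travel_times(true[:3], true[3], stations)
lo = np.array([0.0, 0.0, 0.0, -5.0])
hi = np.array([50.0, 50.0, 25.0, 5.0])
best = locate(stations, t_obs, lo, hi)
```

As in the paper's experiments, error-free synthetic arrival times generated from a known model let the recovered parameters be checked directly against the truth; the population-based search needs no initial hypocenter guess.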

  5. Effect of eye position on saccades and neuronal responses to acoustic stimuli in the superior colliculus of the behaving cat.

    PubMed

    Populin, Luis C; Tollin, Daniel J; Yin, Tom C T

    2004-10-01

    We examined the motor error hypothesis of visual and auditory interaction in the superior colliculus (SC), first tested by Jay and Sparks in the monkey. We trained cats to direct their eyes to the location of acoustic sources and studied the effects of eye position on both the ability of cats to localize sounds and the auditory responses of SC neurons with the head restrained. Sound localization accuracy was generally not affected by initial eye position, i.e., accuracy was not proportionally affected by the deviation of the eyes from the primary position at the time of stimulus presentation, showing that eye position is taken into account when orienting to acoustic targets. The responses of most single SC neurons to acoustic stimuli in the intact cat were modulated by eye position in the direction consistent with the predictions of the "motor error" hypothesis, but the shift accounted for only two-thirds of the initial deviation of the eyes. However, when the average horizontal sound localization error, which was approximately 35% of the target amplitude, was taken into account, the magnitude of the horizontal shifts in the SC auditory receptive fields matched the observed behavior. The modulation by eye position was not due to concomitant movements of the external ears, as confirmed by recordings carried out after immobilizing the pinnae of one cat. However, the pattern of modulation after pinnae immobilization was inconsistent with the observations in the intact cat, suggesting that, in the intact animal, information about the position of the pinnae may be taken into account.

  6. STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.

    PubMed

    Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F

    2015-12-01

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies. © 2015 American Association for Clinical Chemistry.

  7. Do knowledge, knowledge sources and reasoning skills affect the accuracy of nursing diagnoses? a randomised study.

    PubMed

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos Mb; Krijnen, Wim P; van der Schans, Cees P

    2012-08-01

    This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse's disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. A randomised factorial design was used in 2008-2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. The use of a predefined record structure resulted in significantly higher accuracy of nursing diagnoses. A regression analysis revealed that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse's age and the reasoning skills of 'deduction' and 'analysis'. Improving nurses' dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves the accuracy of nursing diagnoses.

  8. Do knowledge, knowledge sources and reasoning skills affect the accuracy of nursing diagnoses? a randomised study

    PubMed Central

    2012-01-01

    Background This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse’s disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. Method A randomised factorial design was used in 2008–2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. Results The use of a predefined record structure resulted in a significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse’s age and the reasoning skills of 'deduction' and 'analysis'. Conclusions Improving nurses’ dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves accuracy of nursing diagnoses. PMID:22852577

  9. Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity

    PubMed Central

    Lee, Hui Sun; Im, Wonpil

    2013-01-01

    Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results of template-based BS prediction methods using global structure alignment (GSA), there is room to improve the performance by properly incorporating local structure alignment (LSA), because BS are local structures and often similar for proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark set validation shows that G-LoSA predicts drug-like ligands’ positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient for accurate detection of local structures conserved across proteins with diverse global topologies. Recognizing the performance complementarity of G-LoSA to TM-align and a non-template geometry-based method, fpocket, a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), is developed and shows improvement on prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286

  10. Identification and Classification of Mass Transport Complexes in Offshore Trinidad/Venezuela and Their Potential Anthropogenic Impact as Tsunamigenic Hazards

    NASA Astrophysics Data System (ADS)

    Moscardelli, L.; Wood, L. J.

    2006-12-01

    Several late Pleistocene-age seafloor destabilization events have been identified in the continental margin of eastern offshore Trinidad, of sufficient scale to produce tsunamigenic forces. This area, situated along the obliquely converging boundary of the Caribbean/South American plates and proximal to the Orinoco Delta, is characterized by catastrophic shelf-margin processes, intrusive-extrusive mobile shales, and active tectonism. A mega-merged, 10,000 km2, 3D seismic survey reveals several mass transport complexes that range in area from 11.3 km2 to 2017 km2. Historical records indicate that this region has experienced submarine landslide-generated tsunamigenic events, including tsunamis that affected Venezuela during the 1700s-1900s. This work concentrates on defining those ancient deep-marine mass transport complexes whose occurrence could potentially have triggered tsunamis. Three types of failures are identified: 1) source-attached failures that are fed by shelf-edge deltas whose sediment input is controlled by sea-level fluctuations and sedimentation rates, 2) source-detached systems, which occur when upper slope sediments catastrophically fail due to gas hydrate disruptions and/or earthquakes, and 3) locally sourced failures, formed when local instabilities in the sea floor trigger relatively smaller collapses. Such classification of the relationship between slope mass failures and the sourcing regions enables a better understanding of the nature of initiation, length of development history and petrography of such mass transport deposits. Source-detached systems, generated by sudden sediment remobilizations, are more likely to disrupt the overlying water column, raising the tsunamigenic risk. Unlike 2D seismic, 3D seismic enables scientists to calculate more accurate deposit volumes and improve deposit imaging, and thus increase the accuracy of physical and computer simulations of mass failure processes.

  11. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.

  12. Verification and classification bias interactions in diagnostic test accuracy studies for fine-needle aspiration biopsy.

    PubMed

    Schmidt, Robert L; Walker, Brandon S; Cohen, Michael B

    2015-03-01

    Reliable estimates of accuracy are important for any diagnostic test. Diagnostic accuracy studies are subject to unique sources of bias. Verification bias and classification bias are 2 sources of bias that commonly occur in diagnostic accuracy studies. Statistical methods are available to estimate the impact of these sources of bias when they occur alone. The impact of interactions when these types of bias occur together has not been investigated. We developed mathematical relationships to show the combined effect of verification bias and classification bias. A wide range of case scenarios were generated to assess the impact of bias components and interactions on total bias. Interactions between verification bias and classification bias caused overestimation of sensitivity and underestimation of specificity. Interactions had more effect on sensitivity than specificity. Sensitivity was overestimated by at least 7% in approximately 6% of the tested scenarios. Specificity was underestimated by at least 7% in less than 0.1% of the scenarios. Interactions between verification bias and classification bias create distortions in accuracy estimates that are greater than would be predicted from each source of bias acting independently. © 2014 American Cancer Society.
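
    The direction of the interaction can be reproduced with a small numerical model (all parameter values below are invented for illustration, not taken from the paper): verification bias is modeled as result-dependent verification probabilities and classification bias as an imperfect reference standard, and apparent sensitivity and specificity are computed from the expected 2x2 cell probabilities.

```python
def apparent_accuracy(se, sp, prev, v_pos=1.0, v_neg=1.0, ref_se=1.0, ref_sp=1.0):
    """Apparent sensitivity/specificity under verification bias
    (v_pos/v_neg: verification probabilities for test-positives/-negatives)
    and classification bias (ref_se/ref_sp: imperfect reference standard,
    with reference errors assumed independent of the index test)."""
    # joint probabilities P(test result, reference result)
    tp = prev * se * ref_se + (1 - prev) * (1 - sp) * (1 - ref_sp)
    fn = prev * (1 - se) * ref_se + (1 - prev) * sp * (1 - ref_sp)
    tn = prev * (1 - se) * (1 - ref_se) + (1 - prev) * sp * ref_sp
    fp = prev * se * (1 - ref_se) + (1 - prev) * (1 - sp) * ref_sp
    # verification re-weights cells by the test result before estimation
    app_se = v_pos * tp / (v_pos * tp + v_neg * fn)
    app_sp = v_neg * tn / (v_neg * tn + v_pos * fp)
    return app_se, app_sp

se, sp, prev = 0.80, 0.90, 0.30
ver = apparent_accuracy(se, sp, prev, v_neg=0.3)                 # verification only
cls = apparent_accuracy(se, sp, prev, ref_se=0.95, ref_sp=0.95)  # classification only
both = apparent_accuracy(se, sp, prev, v_neg=0.3, ref_se=0.95, ref_sp=0.95)

# interaction = combined distortion minus the sum of the individual distortions
interaction_se = (both[0] - se) - ((ver[0] - se) + (cls[0] - se))
```

    With these example values the combined biases overestimate sensitivity and underestimate specificity, and the sensitivity distortion exceeds what the two biases would predict if they acted additively.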

  13. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works of other authors, the accuracy of trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken as a quantitative measure of accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h and 0.5-0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
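
    Of the three TSMs, the PSCF is the simplest to sketch: each grid cell's score is the fraction of trajectory endpoints in that cell belonging to trajectories that arrive at the receptor with high concentrations. The toy setup below (grid size, trajectory model, and source cell are all invented) shows a known virtual source standing out against the background.

```python
import numpy as np

rng = np.random.default_rng(7)
G = 10                        # G x G grid of potential source cells
SOURCE = (3, 6)               # hypothetical emitting cell (ground truth)

n = np.zeros((G, G))          # all trajectory endpoints per cell
m = np.zeros((G, G))          # endpoints from high-concentration trajectories

for _ in range(2000):
    # a back-trajectory modeled as 20 random grid cells
    cells = list(zip(rng.integers(0, G, 20), rng.integers(0, G, 20)))
    # receptor concentration is high iff the trajectory crossed the source
    polluted = SOURCE in cells
    for c in cells:
        n[c] += 1
        if polluted:
            m[c] += 1

# PSCF_ij = m_ij / n_ij (conditional probability of a "polluted" arrival)
pscf = np.divide(m, n, out=np.zeros_like(m), where=n > 0)
```

    By construction, every trajectory passing the source cell is polluted, so the source cell scores 1.0 while other cells sit near the overall pollution rate.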

  14. A Context-Recognition-Aided PDR Localization Method Based on the Hidden Markov Model

    PubMed Central

    Lu, Yi; Wei, Dongyan; Lai, Qifeng; Li, Wen; Yuan, Hong

    2016-01-01

    Indoor positioning has recently become an important field of interest because global navigation satellite systems (GNSS) are usually unavailable in indoor environments. Pedestrian dead reckoning (PDR) is a promising localization technique for indoor environments since it can be implemented on widely used smartphones equipped with low cost inertial sensors. However, PDR localization severely suffers from the accumulation of positioning errors, so external calibration sources should be used. In this paper, a context-recognition-aided PDR localization model is proposed to calibrate PDR. The context is detected by employing particular human actions or characteristic objects, and it is matched to the context pre-stored offline in the database to obtain the pedestrian’s location. The Hidden Markov Model (HMM) and a recursive Viterbi algorithm are used to perform the matching, which reduces both the time complexity and the storage requirements. In addition, the authors design a turn detection algorithm and take the context of a corner as an example to illustrate and verify the proposed model. The experimental results show that the proposed localization method can fix the pedestrian’s starting point quickly and improves the positioning accuracy of PDR by up to 40.56%, while maintaining stability and robustness. PMID:27916922
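
    The HMM matching step can be sketched with a textbook Viterbi decoder in log space. The landmark states, transition, and emission probabilities below are invented stand-ins for the paper's pre-stored context database: hidden states are map landmarks, and observations are detected context events.

```python
import math

# Hypothetical toy model: hidden states are map landmarks A..C,
# observations are detected context events ("turn" / "straight").
states = ["A", "B", "C"]
start = {"A": 0.6, "B": 0.3, "C": 0.1}
trans = {"A": {"A": 0.1, "B": 0.8, "C": 0.1},
         "B": {"A": 0.1, "B": 0.1, "C": 0.8},
         "C": {"A": 0.8, "B": 0.1, "C": 0.1}}
emit = {"A": {"turn": 0.9, "straight": 0.1},
        "B": {"turn": 0.2, "straight": 0.8},
        "C": {"turn": 0.7, "straight": 0.3}}

def viterbi(obs):
    """Most likely landmark sequence for an observation sequence."""
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            # best predecessor for state s at this step
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][prev] + math.log(trans[prev][s]) + math.log(emit[s][o])
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):          # follow back-pointers to the start
        path.append(ptr[path[-1]])
    return path[::-1]
```

    For instance, the event sequence "turn, straight, turn" decodes to the landmark sequence A, B, C under these probabilities.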

  15. Combined Loadings and Cross-Dimensional Loadings Timeliness of Presentation of Financial Statements of Local Government

    NASA Astrophysics Data System (ADS)

    Muda, I.; Dharsuky, A.; Siregar, H. S.; Sadalia, I.

    2017-03-01

    This study examines patterns in the timeliness of presentation of local government financial statements in North Sumatra, comparing a routine pattern of two (2) months after the fiscal year ends with a pattern of at least three (3) months after the fiscal year ends. The research is an explanatory survey using quantitative methods. The population and sample consist of the local government officials who prepare local government financial reports. Combined loadings and cross-dimensional loadings analyses were performed with the WarpPLS statistical tool. The results showed that the timeliness of local government financial statements in North Sumatra follows varying patterns across the dimensions examined.

  16. Phenological records as a complement to aerobiological data

    NASA Astrophysics Data System (ADS)

    Tormo, Rafael; Silva, Inmaculada; Gonzalo, Ángela; Moreno, Alfonsa; Pérez, Remedios; Fernández, Santiago

    2011-01-01

    Phenological studies in combination with aerobiological studies enable one to observe the relationship between the release of pollen and its presence in the atmosphere. To obtain a suitable comparison between the daily variation of airborne pollen concentrations and flowering, it is necessary for the level of accuracy of both sets of data to be as similar as possible. To analyse the correlation between locally observed flowering data and pollen counts in pollen traps in order to set pollen information forecasts, pollen was sampled using a Burkard volumetric pollen trap working continuously from May 1993. For the phenological study we selected the main pollen sources of the six pollen types most abundant in our area: Cupressaceae, Platanus, Quercus, Plantago, Olea, and Poaceae, with a total of 35 species. We selected seven sites to register flowering or pollination, two with semi-natural vegetation, the rest being urban sites. The sites were visited weekly from March to June in 2007, and from January to June in 2008 and 2009. Pollen shedding was checked at each visit, and recorded as the percentage of flowers or microsporangia in that state. There was an association between flowering phenology and airborne pollen records for some of the pollen types (Platanus, Quercus, Olea and Plantago). Nevertheless, for the other types (Cupressaceae and Poaceae) the flowering and airborne pollen peaks did not coincide, with up to 1 week difference in phase. Some arguments are put forward in explanation of this phenomenon. Phenological studies have shown that airborne pollen results from both local and distant sources, although the pollen peaks usually appear when local sources are shedding the greatest amounts of pollen. Resuspension phenomena are probably more important than long-distance transport in explaining the presence of airborne pollen outside the flowering period. This information could be used to improve pollen forecasts.

  17. Short-Period Surface Wave Based Seismic Event Relocation

    NASA Astrophysics Data System (ADS)

    White-Gaynor, A.; Cleveland, M.; Nyblade, A.; Kintner, J. A.; Homman, K.; Ammon, C. J.

    2017-12-01

    Accurate and precise seismic event locations are essential for a broad range of geophysical investigations. Superior location accuracy generally requires calibration with ground truth information, but superb relative location precision is often achievable independently. In explosion seismology, low-yield explosion monitoring relies on near-source observations, which limits the number of observations available for estimating locations. Incorporating more distant observations means relying on data with lower signal-to-noise ratios. For small, shallow events, the short-period (roughly 1/2 to 8 s period) fundamental-mode and higher-mode Rayleigh waves (including Rg) are often the most stable and visible portion of the waveform at local distances. Cleveland and Ammon [2013] have shown that teleseismic surface waves are valuable observations for constructing precise, relative event relocations. We extend the teleseismic surface wave relocation method and apply it at near-source distances using Rg observations from the Bighorn Arch Seismic Experiment (BASE) and EarthScope USArray Transportable Array (TA) seismic stations. Specifically, we present relocation results using short-period fundamental- and higher-mode Rayleigh waves (Rg) in a double-difference relative event relocation for 45 delay-fired mine blasts and 21 borehole chemical explosions. Our preliminary efforts explore the sensitivity of the short-period surface waves to local geologic structure, source depth, explosion magnitude (yield), and explosion characteristics (single-shot vs. distributed source, etc.). Our results show that Rg and the first few higher-mode Rayleigh wave observations can be used to constrain the relative locations of shallow low-yield events.

  18. The optical afterglow of the short gamma-ray burst GRB 050709.

    PubMed

    Hjorth, Jens; Watson, Darach; Fynbo, Johan P U; Price, Paul A; Jensen, Brian L; Jørgensen, Uffe G; Kubas, Daniel; Gorosabel, Javier; Jakobsson, Páll; Sollerman, Jesper; Pedersen, Kristian; Kouveliotou, Chryssa

    2005-10-06

    It has long been known that there are two classes of gamma-ray bursts (GRBs), mainly distinguished by their durations. The breakthrough in our understanding of long-duration GRBs (those lasting more than approximately 2 s), which ultimately linked them with energetic type Ic supernovae, came from the discovery of their long-lived X-ray and optical 'afterglows', when precise and rapid localizations of the sources could finally be obtained. X-ray localizations have recently become available for short (duration <2 s) GRBs, which have evaded optical detection for more than 30 years. Here we report the first discovery of transient optical emission (R-band magnitude approximately 23) associated with a short burst: GRB 050709. The optical afterglow was localized with subarcsecond accuracy, and lies in the outskirts of a blue dwarf galaxy. The optical and X-ray afterglow properties 34 h after the GRB are reminiscent of the afterglows of long GRBs, which are attributable to synchrotron emission from ultrarelativistic ejecta. We did not, however, detect a supernova, as found in most nearby long GRB afterglows, which suggests a different origin for the short GRBs.

  19. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with high spatial resolution and accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
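
    The Boolean visibility at the core of such analyses reduces, along a single profile line, to comparing vertical angles from the observer: a target is visible when no intermediate sample subtends a larger angle. This is a minimal 1-D sketch (production viewsheds trace many such profiles across the raster); the terrain samples and eye height below are invented.

```python
def visible(profile, observer, target, eye_height=1.7):
    """True if `target` cell is visible from `observer` cell along a
    1-D terrain profile (one height per cell, unit cell spacing)."""
    z0 = profile[observer] + eye_height

    def angle(i):                     # tangent of the elevation angle to cell i
        return (profile[i] - z0) / abs(i - observer)

    lo, hi = sorted((observer, target))
    # visible iff every intermediate cell stays below the sight line
    return all(angle(i) < angle(target) for i in range(lo + 1, hi))

terrain = [2, 2, 5, 2, 2, 2, 8, 2]    # hypothetical profile with two bumps
```

    From cell 0, the bump at cell 2 is itself visible but hides the lower ground behind it, while the higher peak at cell 6 re-emerges above that local horizon; ground behind the peak is hidden again. This is exactly the behaviour that the global horizon viewshed and the angle-above-horizon extension quantify.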

  20. Ray propagation in oblate atmospheres. [for Jupiter

    NASA Technical Reports Server (NTRS)

    Hubbard, W. B.

    1976-01-01

    Phinney and Anderson's (1968) exact theory for the inversion of radio-occultation data for planetary atmospheres breaks down seriously when applied to occultations by oblate atmospheres because of departures from Bouguer's law. It has been proposed that this breakdown can be overcome by transforming the theory to a local spherical symmetry which osculates a ray's point of closest approach. The accuracy of this transformation procedure is assessed by evaluating the size of terms which are intrinsic to an oblate atmosphere and which are not eliminated by a local spherical approximation. The departures from Bouguer's law are analyzed, and it is shown that in the lowest-order deviation from that law, the plane of refraction is defined by the normal to the atmosphere at closest approach. In the next order, it is found that the oblateness of the atmosphere 'warps' the ray path out of a single plane, but the effect appears to be negligible for most purposes. It is concluded that there seems to be no source of serious error in making an approximation of local spherical symmetry with the refraction plane defined by the normal at closest approach.

  1. Discretizing singular point sources in hyperbolic wave propagation problems

    DOE PAGES

    Petersson, N. Anders; O'Reilly, Ossian; Sjogreen, Bjorn; ...

    2016-06-01

    Here, we develop high order accurate source discretizations for hyperbolic wave propagation problems in first order formulation that are discretized by finite difference schemes. By studying the Fourier series expansions of the source discretization and the finite difference operator, we derive sufficient conditions for achieving design accuracy in the numerical solution. Only half of the conditions in Fourier space can be satisfied through moment conditions on the source discretization, and we develop smoothness conditions for satisfying the remaining accuracy conditions. The resulting source discretization has compact support in physical space, and is spread over as many grid points as the number of moment and smoothness conditions. In numerical experiments we demonstrate high order of accuracy in the numerical solution of the 1-D advection equation (both in the interior and near a boundary), the 3-D elastic wave equation, and the 3-D linearized Euler equations.
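
    The moment conditions alone can be illustrated on a toy 1-D grid: weights on a few points near the (off-grid) source location x0 are chosen so that the k-th discrete moment matches the k-th moment of the delta function, which makes integration against the discrete source exact for polynomials up to the stencil order. This sketch ignores the paper's additional smoothness conditions; the grid, stencil width, and test function below are invented.

```python
import numpy as np

h = 0.1
x = np.arange(-1, 1 + h / 2, h)          # uniform grid
x0 = 0.137                               # source location, off-grid

# four grid points nearest x0 carry the discrete delta
j0 = int(np.floor((x0 - x[0]) / h)) - 1
idx = np.arange(j0, j0 + 4)

# moment conditions: sum_j w_j (x_j - x0)^k = delta_{k,0} for k = 0..3
V = np.vander(x[idx] - x0, 4, increasing=True).T
w = np.linalg.solve(V, np.array([1.0, 0.0, 0.0, 0.0]))

d = np.zeros_like(x)
d[idx] = w / h                           # discrete delta as a grid density

f = np.sin(3 * x)                        # smooth test function
approx = h * np.sum(d * f)               # quadrature of f against the delta
```

    With four moment conditions the quadrature reproduces f(x0) to fourth order in h, while the zeroth moment (total source strength) is matched exactly.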

  2. Coordinate alignment of combined measurement systems using a modified common points method

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Zhang, P.; Xiao, W.

    2018-03-01

    Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. Alignment errors accumulate and significantly reduce the global accuracy, and thus need to be minimized. In this paper, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, make it possible to reduce the local measuring uncertainty and thereby enhance the global measuring certainty. A simulation system was developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup was constructed to verify the feasibility and efficiency of the proposed method with laser tracker and indoor iGPS systems. Experimental results show that MCPM can significantly improve the alignment accuracy.
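
    The core common-points step, estimating the rigid transform that maps one instrument's coordinate frame onto another from mutually measured points, can be sketched with the standard SVD (Kabsch) least-squares solution on noise-free synthetic points. The point set and transform below are invented; MCPM layers error weighting and mutual geometric constraints on top of this basic alignment.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical common points measured in system A (rows of P)
P = rng.uniform(-1, 1, (6, 3))

# "true" rotation (about z) and translation taking A-coordinates to B
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true              # same points measured in system B

# least-squares rigid alignment (Kabsch): find R, t with Q ≈ P R^T + t
Pc, Qc = P - P.mean(0), Q - Q.mean(0)  # center both point sets
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
R = Vt.T @ D @ U.T
t = Q.mean(0) - P.mean(0) @ R.T
```

    With noise-free common points the recovered transform matches the true one to machine precision; with real measurements, the residuals at the common points quantify the alignment uncertainty that MCPM seeks to minimize.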

  3. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals

    PubMed Central

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz

    2016-01-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. PMID:27169504

  4. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals.

    PubMed

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz; MacNeilage, Paul R

    2016-08-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. Copyright © 2016 the American Physiological Society.
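
    The weighting model in the study's final analysis can be sketched as an ordinary least-squares fit: each condition supplies a known mix of vestibular and proprioceptive/efference-copy rotation signals, and the fitted weights indicate each signal's contribution to perceived head rotation. All numbers below are invented for illustration, not the study's data.

```python
import numpy as np

# design matrix: [vestibular signal, proprioceptive/efference signal] in deg.
# active: both present; passive: vestibular only; cancellation: proprioceptive
# only (head effectively fixed in space while the body rotates).
X = np.array([[35.0, 35.0],    # active
              [35.0,  0.0],    # passive
              [ 0.0, 35.0]])   # cancellation
perceived = np.array([38.0, 33.0, 8.0])   # made-up perceived rotations (deg)

# perceived ≈ w_vest * vestibular + w_prop * proprioceptive
w, *_ = np.linalg.lstsq(X, perceived, rcond=None)
```

    With these made-up numbers the vestibular weight dominates while the proprioceptive/efference weight remains clearly nonzero, mirroring the qualitative pattern the abstract reports.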

  5. Correction to Method of Establishing the Absolute Radiometric Accuracy of Remote Sensing Systems While On-orbit Using Characterized Stellar Sources

    NASA Technical Reports Server (NTRS)

    Bowen, Howard S.; Cunningham, Douglas M.

    2007-01-01

    The contents include: 1) Brief history of related events; 2) Overview of original method used to establish absolute radiometric accuracy of remote sensing instruments using stellar sources; and 3) Considerations to improve the stellar calibration approach.

  6. Localization and cooperative communication methods for cognitive radio

    NASA Astrophysics Data System (ADS)

    Duval, Olivier

We study localization of nearby nodes and cooperative communication for cognitive radios. Cognitive radios sensing their environment to estimate the channel gain between nodes can cooperate and adapt their transmission power to maximize the capacity of the communication between two nodes. We study the end-to-end capacity of a cooperative relaying scheme using orthogonal frequency-division multiplexing (OFDM), under power constraints for both the base station and the relay station. The relay uses amplify-and-forward and decode-and-forward cooperative relaying techniques to retransmit messages on a subset of the available subcarriers. The power used in the base station and the relay station transmitters is allocated to maximize the overall system capacity. The subcarrier selection and power allocation are obtained from convex optimization formulations and an iterative algorithm. Additionally, decode-and-forward relaying schemes are allowed to pair source and relayed subcarriers to further increase the capacity of the system. The proposed techniques outperform non-selective relaying schemes over a range of relay power budgets. Cognitive radios can be used for opportunistic access of the radio spectrum by detecting spectrum holes left unused by licensed primary users. We introduce a spectrum hole detection approach that combines blind modulation classification, angle-of-arrival estimation and number-of-sources detection. We perform eigenspace analysis to determine the number of sources, and estimate their angles of arrival (AOA). In addition, we classify detected sources as primary or secondary users by their distinct second-order one-conjugate cyclostationarity features. Extensive simulations indicate that the proposed system identifies and locates individual sources correctly, even at -4 dB signal-to-noise ratio (SNR).
In environments with a high density of scatterers, many wireless channels experience non-line-of-sight (NLOS) conditions, increasing the localization error even when the AOA estimate is accurate. We present a real-time localization solver (RTLS) for time-of-arrival (TOA) estimates that uses ray-tracing methods on a map of the wall geometry, and compare its performance with classical TOA trilateration methods. Extensive simulations and field trials in indoor environments show that our method increases the coverage area from 1.9% of the floor to 82.3% and improves accuracy 10-fold compared with trilateration. We implemented our ray-tracing model in C++ using the CGAL computational geometry algorithm library. Time and space complexity analyses and profiling of our software illustrate the real-time property of the RTLS, which performs most ray-tracing tasks in a preprocessing phase.
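The classical TOA trilateration baseline mentioned above converts arrival times into ranges (d = c·t) and solves the range equations for position. A minimal 2-D sketch, with hypothetical anchor positions and noise-free line-of-sight ranges (illustrative values, not from the thesis):

```python
import math

SPEED_OF_LIGHT = 3e8  # m/s: a TOA t maps to a range d = SPEED_OF_LIGHT * t

def trilaterate(anchors, dists):
    """Position from 3 anchors by linearizing the range equations."""
    (x1, y1), d1 = anchors[0], dists[0]
    rows, rhs = [], []
    # Subtract the first range equation from the others to remove x^2 + y^2.
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(d1**2 - di**2 + xi**2 + yi**2 - x1**2 - y1**2)
    # With exactly 3 anchors this is a 2x2 linear system.
    (a1, b1), (a2, b2) = rows
    det = a1 * b2 - a2 * b1
    x = (rhs[0] * b2 - rhs[1] * b1) / det
    y = (a1 * rhs[1] - a2 * rhs[0]) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical base stations
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]  # ideal TOA-derived ranges
print(trilaterate(anchors, dists))  # ≈ (3.0, 4.0)
```

Under NLOS conditions the measured ranges are biased long, which is precisely why the thesis replaces plain trilateration with map-based ray tracing.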

  7. Evaluation of antibiotic resistance analysis and ribotyping for identification of faecal pollution sources in an urban watershed.

    PubMed

    Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M

    2005-01-01

    The accuracy of ribotyping and antibiotic resistance analysis (ARA) for prediction of sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657) were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct category. Addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most microbial source tracking (MST) studies published have demonstrated library accuracy solely by the internal ARCC measurement. Low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing of bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability. 
The library-based MST methods used in this study may not be suited for determination of the source(s) of faecal pollution in large, urban watersheds.

  8. Echolocation versus echo suppression in humans

    PubMed Central

    Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz

    2013-01-01

Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing both to the direct sound of the vocalization that precedes the echoes and to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105

  9. Ensemble classification for identifying neighbourhood sources of fugitive dust and associations with observed PM10

    NASA Astrophysics Data System (ADS)

    Khuluse-Makhanya, Sibusisiwe; Stein, Alfred; Breytenbach, André; Gxumisa, Athi; Dudeni-Tlhone, Nontembeko; Debba, Pravesh

    2017-10-01

    In urban areas the deterioration of air quality as a result of fugitive dust receives less attention than the more prominent traffic and industrial emissions. We assessed whether fugitive dust emission sources in the neighbourhood of an air quality monitor are predictors of ambient PM10 concentrations on days characterized by strong local winds. An ensemble maximum likelihood method is developed for land cover mapping in the vicinity of an air quality station using SPOT 6 multi-spectral images. The ensemble maximum likelihood classifier is developed through multiple training iterations for improved accuracy of the bare soil class. Five primary land cover classes are considered, namely built-up areas, vegetation, bare soil, water and 'mixed bare soil' which denotes areas where soil is mixed with either vegetation or synthetic materials. Preliminary validation of the ensemble classifier for the bare soil class results in an accuracy range of 65-98%. Final validation of all classes results in an overall accuracy of 78%. Next, cluster analysis and a varying intercepts regression model are used to assess the statistical association between land cover, a fugitive dust emissions proxy and observed PM10. We found that land cover patterns in the neighbourhood of an air quality station are significant predictors of observed average PM10 concentrations on days when wind speeds are conducive for dust emissions. This study concludes that in the absence of an emissions inventory for ambient particulate matter, PM10 emitted from dust reservoirs can be statistically accounted for by land cover characteristics. This supports the use of land cover data for improved prediction of PM10 at locations without air quality monitoring stations.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wehrschuetz, M., E-mail: martin.wehrschuetz@klinikum-graz.at; Aschauer, M.; Portugaller, H.

The purpose of this study was to assess interobserver variability and accuracy in the evaluation of renal artery stenosis (RAS) with gadolinium-enhanced MR angiography (MRA) and digital subtraction angiography (DSA) in patients with hypertension. The authors found that source images are more accurate than maximum intensity projection (MIP) for depicting renal artery stenosis. Two independent radiologists reviewed MRA and DSA from 38 patients with hypertension. Studies were postprocessed to display images in MIP and source images. DSA was the standard for comparison in each patient. For each main renal artery, percentage stenosis was estimated for any stenosis detected by the two radiologists. To calculate sensitivity, specificity and accuracy, MRA studies and stenoses were categorized as normal, mild (1-39%), moderate (40-69%), severe (≥70%), or occluded. DSA stenosis estimates of 70% or greater were considered hemodynamically significant. Analysis of variance demonstrated that MIP estimates of stenosis were greater than source image estimates for both readers. Differences in estimates for MIP versus DSA reached significance in one reader. The interobserver agreement for MIP, source images and DSA was excellent (0.80 < κ ≤ 0.90). The specificity of source images was high (97%) but lower for MIP (87%); average accuracy was 92% for MIP and 98% for source images. In this study, source images were significantly more accurate than MIP images in one reader, with a similar trend observed in the second reader. The interobserver variability was excellent. When renal artery stenosis is a consideration, high accuracy can only be obtained when source images are examined.

  11. Comparison of pelvic phased-array versus endorectal coil magnetic resonance imaging at 3 Tesla for local staging of prostate cancer.

    PubMed

    Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun; Yoo, Eun Sang

    2012-05-01

    Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coils and 88 pelvic phased-array coils). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were statistically no differences in specificity, sensitivity and accuracy between the two groups. Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI.

  12. Establishment of a high accuracy geoid correction model and geodata edge match

    NASA Astrophysics Data System (ADS)

    Xi, Ruifeng

This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method and a procedure for a large GPS network like the state level HARN project. The research also developed an efficient and accurate methodology to join soil coverages in GIS ARC/INFO. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with a 0.5 ppm average relative accuracy at all stations. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from a sub-decimeter 10-20 cm level to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.

  13. Into the deep: Evaluation of SourceTracker for assessment of faecal contamination of coastal waters.

    PubMed

    Henry, Rebekah; Schang, Christelle; Coutts, Scott; Kolotelo, Peter; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O'Brien, Peter; Deletic, Ana; McCarthy, David

    2016-04-15

Faecal contamination of recreational waters is an increasing global health concern. Tracing the source of the contaminant is a vital step towards mitigation and disease prevention. Total 16S rRNA amplicon data for a specific environment (faeces, water, soil) and computational tools such as the Markov chain Monte Carlo based SourceTracker can be applied to microbial source tracking (MST) and attribution studies. The current study applied artificial and laboratory-derived bacterial communities to define the potential and limitations associated with the use of SourceTracker, prior to its application for faecal source tracking at three recreational beaches near Port Phillip Bay (Victoria, Australia). The results demonstrated that, at minimum, multiple runs of the SourceTracker model (i.e., technical replicates) were required to identify potential false positive predictions. The calculation of relative standard deviations (RSDs) for each attributed source improved overall predictive confidence in the results. In general, default parameter settings provided high sensitivity, specificity, accuracy and precision. Application of SourceTracker to recreational beach samples identified treated effluent as a major source of human-derived faecal contamination, present in 69% of samples. Site-specific sources, such as raw sewage, stormwater and bacterial populations associated with the Yarra River estuary, were also identified. Rainfall and associated sand resuspension at each location correlated with observed human faecal indicators. The results of the optimised SourceTracker analysis suggest that local sources of contamination have the greatest effect on recreational coastal water quality. Copyright © 2016 Elsevier Ltd. All rights reserved.
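The replicate-screening step described above amounts to computing a relative standard deviation over repeated SourceTracker runs and treating unstable attributions as potential false positives. A sketch with hypothetical replicate proportions and an illustrative cutoff (the numbers are invented for the example, not the study's data):

```python
import statistics

# Hypothetical source proportions from 5 technical replicates (repeated
# SourceTracker runs on the same water sample).
replicates = {
    "treated_effluent": [0.62, 0.65, 0.60, 0.66, 0.63],  # stable attribution
    "stormwater":       [0.02, 0.09, 0.00, 0.15, 0.01],  # unstable attribution
}

def relative_std_dev(values):
    """RSD (%) = 100 * sample standard deviation / mean."""
    mean = statistics.mean(values)
    return float("inf") if mean == 0 else 100 * statistics.stdev(values) / mean

for source, props in replicates.items():
    rsd = relative_std_dev(props)
    flag = "keep" if rsd < 30 else "possible false positive"  # illustrative cutoff
    print(f"{source}: RSD = {rsd:.0f}% ({flag})")
```

A stable source attribution yields a low RSD across replicates, while an attribution dominated by run-to-run noise yields a high one, which is the confidence signal the study exploits.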

  14. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive in using a random sequence is to solve real-world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand function, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.
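The idea of using consecutive digit blocks as a random-sequence source can be sketched as follows. The string holds the first 60 decimal digits of π; the two-digit block size and the test integral ∫₀¹ x² dx = 1/3 are choices made for this illustration, not the paper's setup.

```python
# First 60 decimal digits of pi, used as a deterministic "random" source.
PI_DIGITS = "141592653589793238462643383279502884197169399375105820974944"

# Consecutive non-overlapping 2-digit blocks -> 30 samples on [0, 1).
samples = [int(PI_DIGITS[i:i + 2]) / 100 for i in range(0, len(PI_DIGITS), 2)]

# Monte Carlo estimate of the integral of x^2 over [0, 1] (exact value 1/3).
estimate = sum(x * x for x in samples) / len(samples)
print(round(estimate, 4))  # → 0.3274
```

Even with only 30 samples the digit-block estimate lands close to 1/3, consistent with the paper's finding that such blocks behave like a usable random sequence.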

  15. Analytical magmatic source modelling from a joint inversion of ground deformation and focal mechanisms data

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla

    2014-05-01

Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stresses and deforms the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the wall of the magmatic bodies. Although advances in space-based, geodetic and seismic networks have significantly improved volcano monitoring in recent decades on an increasing number of volcanoes worldwide, quantitative models relating deformation and seismicity are not common. The observation of several episodes of volcanic unrest throughout the world, where the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimate of the parameters of a deformation source. In particular, during these episodes of volcanic unrest a radial pattern of P-axes of the focal mechanism solutions, similar to that of ground deformation, has been observed. Therefore, taking into account additional information from focal mechanism data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the May 13th 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. The statistical tests indicate that the source depth is the parameter with the highest increase in accuracy. In addition, a sensitivity analysis confirms that displacement data are more useful for constraining the pressure and the horizontal location of the source than its depth, while the P-axes better constrain the depth estimate.

  16. A Low Complexity System Based on Multiple Weighted Decision Trees for Indoor Localization

    PubMed Central

    Sánchez-Rodríguez, David; Hernández-Morera, Pablo; Quinteiro, José Ma.; Alonso-González, Itziar

    2015-01-01

Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, satisfactory solutions that consider both accuracy and system complexity have not been found. Both characteristics are extremely important for lightweight mobile devices, because processor power and energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it has high accuracy and low computational complexity. The localization system is built using a dataset from sensor fusion, which combines the strength of radio signals from different wireless local area network access points and device orientation information from a digital compass built into the mobile device, so that no extra sensors are necessary. Experimental results indicate that the proposed system leads to substantial improvements in computational complexity over the widely-used traditional fingerprinting methods, and it achieves better accuracy than they do. PMID:26110413
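The boosting idea the abstract describes (multiple weighted decision trees voting on the location) can be sketched with AdaBoost over one-level trees (decision stumps). The RSSI fingerprints and two-room labels below are hypothetical toy data invented for this illustration, not the paper's dataset or its exact algorithm.

```python
import math

# Toy fingerprints: (RSSI from AP1, RSSI from AP2) -> room (+1 = A, -1 = B).
X = [(-40, -70), (-45, -68), (-42, -72), (-70, -40), (-68, -45), (-72, -42)]
y = [1, 1, 1, -1, -1, -1]

def stump_predict(feat, thresh, sign, x):
    return sign if x[feat] <= thresh else -sign

def train_adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n          # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        # Exhaustively search stumps (feature, threshold, polarity).
        for feat in (0, 1):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for wi, xi, yi in zip(w, X, y)
                              if stump_predict(feat, thresh, sign, xi) != yi)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # weight of this tree
        ensemble.append((alpha, feat, thresh, sign))
        # Re-weight samples: boost the misclassified ones.
        w = [wi * math.exp(-alpha * yi * stump_predict(feat, thresh, sign, xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(f, t, s, x) for a, f, t, s in ensemble)
    return 1 if score >= 0 else -1

model = train_adaboost(X, y)
print(predict(model, (-41, -69)))  # query near the room-A fingerprints → 1
```

At query time the work is a handful of threshold comparisons per tree, which is the low-complexity property the paper targets for battery-constrained mobile devices.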

  17. Ultrasound modulation of bioluminescence generated inside a turbid medium

    NASA Astrophysics Data System (ADS)

    Ahmad, Junaid; Jayet, Baptiste; Hill, Philip J.; Mather, Melissa L.; Dehghani, Hamid; Morgan, Stephen P.

    2017-03-01

In vivo bioluminescence imaging (BLI) has poor spatial resolution owing to strong light scattering by tissue, which also affects quantitative accuracy. This paper proposes a hybrid acousto-optic imaging platform that images bioluminescence modulated at the ultrasound (US) frequency inside an optically scattering medium. This produces US-modulated light within the tissue that reduces the effects of light scattering and improves the spatial resolution. The system consists of a continuously excited 3.5 MHz US transducer applied to a tissue-like phantom of known optical properties embedded with bio- or chemiluminescent sources that are used to mimic in vivo experiments. Scanning US over the turbid medium modulates the luminescent sources deep inside tissue at several US scan points. These modulated signals are recorded by a photomultiplier tube with lock-in detection to generate a 1D profile. High-frequency US enables a small focal volume that improves spatial resolution, but this leads to a lower signal-to-noise ratio. First experimental results show that US enables localization of a small luminescent source (around 2 mm wide) deep (∼20 mm) inside a tissue phantom having a scattering coefficient of 80 cm⁻¹. Two sources separated by 10 mm could be resolved 20 mm inside a chicken breast.

  18. Poster — Thur Eve — 40: Automated Quality Assurance for Remote-Afterloading High Dose Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Anthony; Ravi, Ananth

    2014-08-15

High dose rate (HDR) remote afterloading brachytherapy involves sending a small, high-activity radioactive source attached to a cable to different positions within a hollow applicator implanted in the patient. It is critical that the source position within the applicator and the dwell time of the source are accurate. Daily quality assurance (QA) tests of the positional and dwell time accuracy are essential to ensure that the accuracy of the remote afterloader is not compromised prior to patient treatment. Our centre has developed an automated, video-based QA system for HDR brachytherapy that is dramatically superior to existing diode or film QA solutions in terms of cost, objectivity, and positional accuracy, with additional functionality such as the ability to determine the source dwell time and transit time. In our system, a video is taken of the brachytherapy source as it is sent out through a position check ruler, with the source visible through a clear window. Using a proprietary image analysis algorithm, the source position is determined with respect to time as it moves to different positions along the check ruler. The total material cost of the video-based system was under $20, consisting of a commercial webcam and an adjustable stand. The accuracy of the position measurement is ±0.2 mm, and the time resolution is 30 msec. Additionally, our system is capable of robustly verifying the source transit time and velocity (a test required by the AAPM and CPQR recommendations), which is currently difficult to perform accurately.

  19. Radio Follow-up on All Unassociated Gamma-Ray Sources from the Third Fermi Large Area Telescope Source Catalog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schinzel, Frank K.; Petrov, Leonid; Taylor, Gregory B.

The third Fermi Large Area Telescope γ-ray source catalog (3FGL) contains over 1000 objects for which there is no known counterpart at other wavelengths. The physical origin of the γ-ray emission from those objects is unknown. Such objects are commonly referred to as unassociated and mostly do not exhibit significant γ-ray flux variability. We performed a survey of all unassociated γ-ray sources found in 3FGL using the Australia Telescope Compact Array and the Very Large Array in the range 4.0–10.0 GHz. We found 2097 radio candidates for association with γ-ray sources. Follow-up with very long baseline interferometry for a subset of those candidates yielded 142 new associations with active galactic nuclei that are γ-ray sources, provided alternative associations for seven objects, and improved positions for another 144 known associations to the milliarcsecond level of accuracy. In addition, for 245 unassociated γ-ray sources we did not find a single compact radio source above 2 mJy within 3σ of their γ-ray localization. A significant fraction of these empty fields, 39%, are located away from the Galactic plane. We also found 36 extended radio sources that are candidates for association with a corresponding γ-ray object, 19 of which are most likely supernova remnants or H II regions, whereas 17 could be radio galaxies.

  20. Millennial-scale variability in the local radiocarbon reservoir age of the Florida Keys reef tract during the Holocene

    NASA Astrophysics Data System (ADS)

    Ashe, E.; Toth, L. T.; Cheng, H.; Edwards, R. L.; Richey, J. N.

    2016-12-01

    The oceanic passage between the Florida Keys and Cuba, known as the Straits of Florida, provides a critical connection between the tropics and northern Atlantic. Changes in the character of water masses transported through this region may ultimately have important impacts on high-latitude climate variability. Although recent studies have documented significant changes in the density of regional surface waters over millennial timescales, little is known about the contribution of local- to regional-scale changes in circulation to surface-water variability. Local variability in the radiocarbon age, ΔR, of surface waters can be used to trace changes in local water-column mixing and/or changes in regional source water over a variety of spatial and temporal scales. We reconstructed "snapshots" of ΔR variability across the Florida Keys reef tract during the last 10,000 years by dating 68 unaltered corals collected from Holocene reef cores with both U-series and radiocarbon techniques. We combined the snapshots of ΔR into a semi-empirical model to develop a robust statistical reconstruction of millennial-scale variability in ΔR on the Florida Keys reef tract. Our model demonstrates that ΔR varied significantly during the Holocene, with relatively high values during the early Holocene and around 3000 years BP and relatively low values around 7000 years BP and at present. We compare the trends in ΔR to existing paleoceanographic reconstructions to evaluate the relative contribution of local upwelling versus changes in source water to the region as a whole in driving local radiocarbon variability, and discuss the importance of these results to our understanding of regional-scale oceanographic and climatic variability during the Holocene. We also discuss the implications of our results for radiocarbon dating of marine samples from south Florida and present a model of ΔR versus 14C age that can be used to improve the accuracy of radiocarbon calibrations from this region.

  1. Sensor-Based Electromagnetic Navigation (Mediguide®): How Accurate Is It? A Phantom Model Study.

    PubMed

    Bourier, Felix; Reents, Tilko; Ammar-Busch, Sonia; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Brkic, Amir; Semmler, Verena; Lennerz, Carsten; Kaess, Bernhard; Kottmaier, Marc; Kolb, Christof; Deisenhofer, Isabel; Hessling, Gabriele

    2015-10-01

    Data about localization reproducibility as well as spatial and visual accuracy of the new MediGuide® sensor-based electroanatomic navigation technology are scarce. We therefore sought to quantify these parameters based on phantom experiments. A realistic heart phantom was generated in a 3D-Printer. A CT scan was performed on the phantom. The phantom itself served as ground-truth reference to ensure exact and reproducible catheter placement. A MediGuide® catheter was repeatedly tagged at selected positions to assess accuracy of point localization. The catheter was also used to acquire a MediGuide®-scaled geometry in the EnSite Velocity® electroanatomic mapping system. The acquired geometries (MediGuide®-scaled and EnSite Velocity®-scaled) were compared to a CT segmentation of the phantom to quantify concordance. Distances between landmarks were measured in the EnSite Velocity®- and MediGuide®-scaled geometry and the CT dataset for Bland-Altman comparison. The visualization of virtual MediGuide® catheter tips was compared to their corresponding representation on fluoroscopic cine-loops. Point localization accuracy was 0.5 ± 0.3 mm for MediGuide® and 1.4 ± 0.7 mm for EnSite Velocity®. The 3D accuracy of the geometries was 1.1 ± 1.4 mm (MediGuide®-scaled) and 3.2 ± 1.6 mm (not MediGuide®-scaled). The offset between virtual MediGuide® catheter visualization and catheter representation on corresponding fluoroscopic cine-loops was 0.4 ± 0.1 mm. The MediGuide® system shows a very high level of accuracy regarding localization reproducibility as well as spatial and visual accuracy, which can be ascribed to the magnetic field localization technology. The observed offsets between the geometry visualization and the real phantom are below a clinically relevant threshold. © 2015 Wiley Periodicals, Inc.

  2. Time-Resolved Intrafraction Target Translations and Rotations During Stereotactic Liver Radiation Therapy: Implications for Marker-based Localization Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertholet, Jenny, E-mail: jennbe@rm.dk; Worm, Esben S.; Fledelius, Walther

    Purpose: Image guided liver stereotactic body radiation therapy (SBRT) often relies on implanted fiducial markers. The target localization accuracy decreases with increased marker-target distance. This may occur partly because of liver rotations. The aim of this study was to examine time-resolved translations and rotations of liver marker constellations and investigate if time-resolved intrafraction rotational corrections can improve localization accuracy in liver SBRT. Methods and Materials: Twenty-nine patients with 3 implanted markers received SBRT in 3 to 6 fractions. The time-resolved trajectory of each marker was estimated from the projections of 1 to 3 daily cone beam computed tomography scans and used to calculate the translation and rotation of the marker constellation. In all cone beam computed tomography projections, the time-resolved position of each marker was predicted from the position of another surrogate marker by assuming that the marker underwent either (1) the same translation as the surrogate marker; or (2) the same translation as the surrogate marker corrected by the rotation of the marker constellation. The localization accuracy was quantified as the root-mean-square error (RMSE) between the estimated and the actual marker position. For comparison, the RMSE was also calculated when the marker's position was estimated as its mean position for all the projections. Results: The mean translational and rotational range (2nd-98th percentile) was 2.0 mm/3.9° (right-left), 9.2 mm/2.9° (superior-inferior), 4.0 mm/4.0° (anterior-posterior), and 10.5 mm (3-dimensional). Rotational corrections decreased the mean 3-dimensional RMSE from 0.86 mm to 0.54 mm (P<.001) and halved the RMSE increase per millimeter increase in marker distance. Conclusions: Intrafraction rotations during liver SBRT reduce the accuracy of marker-guided target localization. Rotational correction can improve the localization accuracy by a factor of approximately 2 for large marker-target distances.
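
    The two prediction strategies in the abstract can be sketched in a few lines. The following is a minimal, hypothetical 2D illustration (the study works in 3D with cone-beam projections, and all geometry below is invented): a marker's position is predicted from a surrogate marker either by pure translation or with a rotational correction of the constellation offset, and the RMSE quantifies the residual.

```python
import math

def predict_position(surrogate_disp, marker_ref, surrogate_ref, rotation=None):
    """Predict a marker's position from a surrogate marker's displacement.

    Without rotation: the marker is assumed to undergo the same translation
    as the surrogate. With rotation: the reference offset between marker and
    surrogate is first rotated by the constellation rotation (2D here for
    simplicity).
    """
    offset = [m - s for m, s in zip(marker_ref, surrogate_ref)]
    if rotation is not None:
        c, s = math.cos(rotation), math.sin(rotation)
        offset = [c * offset[0] - s * offset[1],
                  s * offset[0] + c * offset[1]]
    surrogate_now = [r + d for r, d in zip(surrogate_ref, surrogate_disp)]
    return [p + o for p, o in zip(surrogate_now, offset)]

def rmse(predicted, actual):
    """Root-mean-square error between predicted and actual coordinates."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))
```

    With a simulated motion consisting of a translation plus a small rotation, the rotation-corrected prediction recovers the true position, while the translation-only prediction leaves a residual that grows with marker-surrogate distance, mirroring the paper's finding.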

  3. Familiarity in source memory.

    PubMed

    Mollison, Matthew V; Curran, Tim

    2012-09-01

    Familiarity and recollection are thought to be separate processes underlying recognition memory. Event-related potentials (ERPs) dissociate these processes, with an early (approximately 300-500ms) frontal effect relating to familiarity (the FN400) and a later (500-800ms) parietal old/new effect relating to recollection. It has been debated whether source information for a studied item (i.e., contextual associations from when the item was previously encountered) is only accessible through recollection, or whether familiarity can contribute to successful source recognition. It has been shown that familiarity can assist in perceptual source monitoring when the source attribute is an intrinsic property of the item (e.g., an object's surface color), but few studies have examined its contribution to recognizing extrinsic source associations. Extrinsic source associations were examined in three experiments involving memory judgments for pictures of common objects. In Experiment 1, source information was spatial and results suggested that familiarity contributed to accurate source recognition: the FN400 ERP component showed a source accuracy effect, and source accuracy was above chance for items judged to only feel familiar. Source information in Experiment 2 was an extrinsic color association; source accuracy was at chance for familiar items and the FN400 did not differ between correct and incorrect source judgments. Experiment 3 replicated the results using a within-subjects manipulation of spatial vs. color source. Overall, the results suggest that familiarity's contribution to extrinsic source monitoring depends on the type of source information being remembered. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Flood area and damage estimation in Zhejiang, China.

    PubMed

    Liu, Renyi; Liu, Nan

    2002-09-01

    A GIS-based method to estimate flood area and damage is presented in this paper, oriented to developing countries like China, where labor is readily available for GIS data collection but tools such as HEC-GeoRAS might not be, and where local authorities are often not predisposed to pay for commercial GIS platforms. To calculate flood area, two cases, non-source flood and source flood, are distinguished, and a seed-spread algorithm suitable for source flooding is described. The flood damage estimation is calculated in raster format by overlaying the flood area range with thematic maps and relating this to other socioeconomic data. Several measures used to improve the geometric accuracy and computing efficiency are presented. The management issues related to the application of this method, including the cost-effectiveness of the approximate method in practice and the complementary use of the two technical approaches (self-programming and adopting commercial GIS software), are also discussed. The applications show that this approach has practical significance for flood fighting and control in developing countries like China.
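
    The seed-spread idea for source flooding can be sketched as a breadth-first fill over an elevation raster. This is a generic illustration of the algorithm class, not the paper's implementation; the 4-connectivity and the strict elevation-below-water-level test are assumptions.

```python
from collections import deque

def source_flood(elevation, seeds, water_level):
    """Seed-spread flooding: starting from seed cells, flood every
    4-connected cell whose elevation lies below the water level.
    Returns a boolean raster of flooded cells."""
    rows, cols = len(elevation), len(elevation[0])
    flooded = [[False] * cols for _ in range(rows)]
    queue = deque(s for s in seeds if elevation[s[0]][s[1]] < water_level)
    for r, c in queue:
        flooded[r][c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not flooded[nr][nc]
                    and elevation[nr][nc] < water_level):
                flooded[nr][nc] = True
                queue.append((nr, nc))
    return flooded
```

    A ridge of high cells blocks the spread, so a basin on the far side of the ridge stays dry even if it lies below the water level, which is exactly how source flooding differs from simply thresholding the elevation raster.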

  5. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.

  6. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ~0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ~2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ~0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.

  7. Feasibility of approaches combining sensor and source features in brain-computer interface.

    PubMed

    Ahn, Minkyu; Hong, Jun Hee; Jun, Sung Chan

    2012-02-15

    Brain-computer interface (BCI) provides a new channel for communication between brain and computers through brain signals. Cost-effective EEG provides good temporal resolution, but its spatial resolution is poor and sensor information is blurred by inherent noise. To overcome these issues, spatial filtering and feature extraction techniques have been developed. Source imaging, the transformation of sensor signals into the source space through a source localizer, has gained attention as a new approach for BCI. It has been reported that source imaging yields some improvement of BCI performance. However, there exists no thorough investigation of how source imaging information overlaps with, and is complementary to, sensor information. We hypothesize that information from the source space may overlap with, as well as be exclusive to, information from the sensor space. If this hypothesis is true, we can extract more information from the sensor and source spaces, thereby contributing to more accurate BCI systems. In this work, features from each space (sensor or source) and two strategies combining sensor and source features are assessed. The information distribution among the sensor, source, and combined spaces is discussed through a Venn diagram for 18 motor imagery datasets. An additional 5 motor imagery datasets from the BCI Competition III site were examined. The results showed that the addition of source information yielded about 3.8% classification improvement for the 18 motor imagery datasets and an average accuracy of 75.56% for the BCI Competition data. Our proposed approach is promising, and improved performance may be possible with a better head model. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Locally adapted NeQuick 2 model performance in European middle latitude ionosphere under different solar, geomagnetic and seasonal conditions

    NASA Astrophysics Data System (ADS)

    Vuković, Josip; Kos, Tomislav

    2017-10-01

    The ionosphere introduces positioning error in Global Navigation Satellite Systems (GNSS). There are several approaches for minimizing the error, with various levels of accuracy and different extents of coverage area. To model the state of the ionosphere in a region containing a low number of reference GNSS stations, a locally adapted NeQuick 2 model can be used. Data ingestion updates the model with the local level of ionization, enabling it to follow the observed changes of ionization levels. The NeQuick 2 model was adapted to local reference Total Electron Content (TEC) data using a single-station approach and evaluated using calibrated TEC data derived from 41 testing GNSS stations distributed around the data ingestion point. Its performance was observed in European middle latitudes under the different ionospheric conditions of the period between 2011 and 2015. The modelling accuracy was evaluated in four azimuthal quadrants, with coverage radii calculated for three error thresholds: 12, 6 and 3 TEC Units (TECU). Diurnal change of the radii was observed for groups of days within periods of low and high solar activity and different seasons of the year. The statistical analysis was conducted on those groups of days, revealing trends in each of the groups, similarities between days within groups, and the 95th percentile radii as a practically applicable measure of model performance. In almost all cases the modelling accuracy was better than 12 TECU, with the largest radius from the data ingestion point. Modelling accuracy better than 6 TECU was achieved within a reduced radius in all observed periods, while accuracy better than 3 TECU was reached only in summer. The calculated radii and interpolated error levels were presented on maps, which was especially useful in analyzing the model performance during the strongest geomagnetic storms of the observed period, each of which had a unique development and influence on model accuracy. Although some of the storms severely degraded the model accuracy, the model remained usable during most of the disturbed periods, albeit with lower accuracy than in quiet geomagnetic conditions. The comprehensive analysis of locally adapted NeQuick 2 model performance highlighted the challenges of using single-point data ingestion applied to a large region in middle latitudes and determined the achievable radii for different error thresholds in various ionospheric conditions.
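
    The coverage-radius idea (the distance out to which the adapted model stays within an error threshold) can be sketched as follows. The station layout, the rule that the radius ends at the first violating station, and the nearest-rank percentile are illustrative assumptions, not the paper's exact procedure.

```python
import math

def coverage_radius(stations, threshold):
    """stations: list of (distance_from_ingestion_point, abs_error_tecu).
    Returns the largest radius r such that every station with
    distance <= r has error below the threshold (0 if none qualify)."""
    radius = 0.0
    for dist, err in sorted(stations):
        if err >= threshold:
            break
        radius = dist
    return radius

def percentile(values, p):
    """Nearest-rank p-th percentile, e.g. for summarizing daily radii."""
    vs = sorted(values)
    idx = max(0, math.ceil(p / 100 * len(vs)) - 1)
    return vs[idx]
```

    Applying `coverage_radius` per day and per quadrant, and then taking a percentile of the daily radii, gives a conservative, practically applicable single number of the kind the abstract describes.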

  9. MTSAT: Full Disk - NOAA GOES Geostationary Satellite Server

    Science.gov Websites


  10. Cross-beam coherence of infrasonic signals at local and regional ranges.

    PubMed

    Alberts, W C Kirkpatrick; Tenney, Stephen M

    2017-11-01

    Signals collected by infrasound arrays require continuous analysis by skilled personnel or by automatic algorithms in order to extract useable information. Typical pieces of information gained by analysis of infrasonic signals collected by multiple sensor arrays are arrival time, line of bearing, amplitude, and duration. These can all be used, often with significant accuracy, to locate sources. A very important part of this chain is associating collected signals across multiple arrays. Here, a pairwise, cross-beam coherence method of signal association is described that allows rapid signal association for high signal-to-noise ratio events captured by multiple infrasound arrays at ranges exceeding 150 km. Methods, test cases, and results are described.
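
    A pairwise association test of this kind can be illustrated with the peak normalized cross-correlation between two beam (or sensor) time series: signals from the same event correlate strongly at some lag, while unrelated signals do not. This is a generic sketch, not the authors' exact statistic; the threshold choice is an assumption.

```python
import math

def normalized_xcorr_peak(x, y, max_lag):
    """Peak of the normalized cross-correlation between two signals
    (demeaned first) over integer lags in [-max_lag, max_lag]."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    x = [v - mx for v in x]
    y = [v - my for v in y]
    nx = math.sqrt(sum(v * v for v in x))
    ny = math.sqrt(sum(v * v for v in y))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(x[i] * y[i + lag]
                for i in range(len(x))
                if 0 <= i + lag < len(y))
        best = max(best, s / (nx * ny))
    return best
```

    In an association pipeline, a pair of arrays would be declared to have captured the same high signal-to-noise event when this peak exceeds a chosen coherence threshold.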

  11. The noise wars in helio- and asteroseismology

    NASA Astrophysics Data System (ADS)

    García, R. A.

    2012-12-01

    During this conference, the latest results in helioseismology (both local and global) as well as in asteroseismology have been reviewed, the hottest questions discussed, and the future prospects of our field fully debated. A conference so rich in the variety of topics addressed is impossible to review in depth in a paper. Therefore, I present here my particular view of the field as it is today, concentrating on solar-like stars and global helioseismology. The common thread I found to do so is the constant battle in which we are all engaged against the sources of noise that hamper our studies: the noise in the data, the noise in the inversions, the precision and accuracy of our inferred models, and so on.

  12. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process firstly finds in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
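
    The coarse pre-registration step fits an affine transform to matched feature points. A minimal sketch with exactly three correspondences follows (SIFT matching itself is out of scope here; a full least-squares fit over many matches follows the same pattern, and the point values are invented):

```python
def solve3(m, v):
    """Solve a 3x3 linear system by Gaussian elimination with
    partial pivoting."""
    a = [row[:] + [rhs] for row, rhs in zip(m, v)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for k in range(col, 4):
                a[r][k] -= f * a[col][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (a[r][3] - sum(a[r][k] * x[k] for k in range(r + 1, 3))) / a[r][r]
    return x

def affine_from_points(src, dst):
    """Estimate x' = a*x + b*y + c, y' = d*x + e*y + f from three
    non-collinear correspondences (an exact solve; with more matches
    the normal equations generalize this to least squares)."""
    m = [[x, y, 1.0] for x, y in src]
    ax = solve3(m, [x for x, _ in dst])
    ay = solve3(m, [y for _, y in dst])
    return ax + ay  # [a, b, c, d, e, f]

def apply_affine(p, params):
    a, b, c, d, e, f = params
    x, y = p
    return (a * x + b * y + c, d * x + e * y + f)
```

    After this coarse alignment, the fine-scale step in the paper refines the result piecewise over the TIN facets, each facet getting its own local affine transform of exactly this form.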

  13. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...

  14. Medial prefrontal cortex supports source memory accuracy for self-referenced items.

    PubMed

    Leshikar, Eric D; Duarte, Audrey

    2012-01-01

    Previous behavioral work suggests that processing information in relation to the self enhances subsequent item recognition. Neuroimaging evidence further suggests that regions along the cortical midline, particularly those of the medial prefrontal cortex (PFC), underlie this benefit. There has been little work to date, however, on the effects of self-referential encoding on source memory accuracy or whether the medial PFC might contribute to source memory for self-referenced materials. In the current study, we used fMRI to measure neural activity while participants studied and subsequently retrieved pictures of common objects superimposed on one of two background scenes (sources) under either self-reference or self-external encoding instructions. Both item recognition and source recognition were better for objects encoded self-referentially than self-externally. Neural activity predictive of source accuracy was observed in the medial PFC (Brodmann area 10) at the time of study for self-referentially but not self-externally encoded objects. The results of this experiment suggest that processing information in relation to the self leads to a mnemonic benefit for source level features, and that activity in the medial PFC contributes to this source memory benefit. This evidence expands the purported role that the medial PFC plays in self-referencing.

  15. Long-Term Effects of Concussion on Electrophysiological Indices of Attention in Varsity College Athletes: An Event-Related Potential and Standardized Low-Resolution Brain Electromagnetic Tomography Approach.

    PubMed

    Ledwidge, Patrick S; Molfese, Dennis L

    2016-12-01

    This study investigated the effects of a past concussion on electrophysiological indices of attention in college athletes. Forty-four varsity football athletes (22 with at least one past concussion) participated in three neuropsychological tests and a two-tone auditory oddball task while undergoing high-density event-related potential (ERP) recording. Athletes previously diagnosed with a concussion experienced their most recent injury approximately 4 years before testing. Previously concussed and control athletes performed equivalently on three neuropsychological tests. Behavioral accuracy and reaction times on the oddball task were also equivalent across groups. However, athletes with a concussion history exhibited significantly larger N2 and P3b amplitudes and longer P3b latencies. Source localization using standardized low-resolution brain electromagnetic tomography indicated that athletes with a history of concussion generated larger electrical current density in the left inferior parietal gyrus compared to control athletes. These findings support the hypothesis that individuals with a past concussion recruit compensatory neural resources in order to meet executive functioning demands. High-density ERP measures combined with source localization provide an important method to detect long-term neural consequences of concussion in the absence of impaired neuropsychological performance.

  16. Long-Term Effects of Concussion on Electrophysiological Indices of Attention in Varsity College Athletes: An Event-Related Potential and Standardized Low-Resolution Brain Electromagnetic Tomography Approach

    PubMed Central

    Molfese, Dennis L.

    2016-01-01

    Abstract This study investigated the effects of a past concussion on electrophysiological indices of attention in college athletes. Forty-four varsity football athletes (22 with at least one past concussion) participated in three neuropsychological tests and a two-tone auditory oddball task while undergoing high-density event-related potential (ERP) recording. Athletes previously diagnosed with a concussion experienced their most recent injury approximately 4 years before testing. Previously concussed and control athletes performed equivalently on three neuropsychological tests. Behavioral accuracy and reaction times on the oddball task were also equivalent across groups. However, athletes with a concussion history exhibited significantly larger N2 and P3b amplitudes and longer P3b latencies. Source localization using standardized low-resolution brain electromagnetic tomography indicated that athletes with a history of concussion generated larger electrical current density in the left inferior parietal gyrus compared to control athletes. These findings support the hypothesis that individuals with a past concussion recruit compensatory neural resources in order to meet executive functioning demands. High-density ERP measures combined with source localization provide an important method to detect long-term neural consequences of concussion in the absence of impaired neuropsychological performance. PMID:27025905

  17. Simulation of Spiral Waves and Point Sources in Atrial Fibrillation with Application to Rotor Localization

    PubMed Central

    Ganesan, Prasanth; Shillieto, Kristina E.; Ghoraani, Behnaz

    2018-01-01

    Cardiac simulations play an important role in studies involving understanding and investigating the mechanisms of cardiac arrhythmias. Today, studies of arrhythmogenesis and maintenance are largely being performed by creating simulations of a particular arrhythmia with high accuracy comparable to the results of clinical experiments. Atrial fibrillation (AF), the most common arrhythmia in the United States and many other parts of the world, is one of the major fields in which simulation and modeling are used. AF simulations not only assist in understanding its mechanisms but also help to develop, evaluate and improve the computer algorithms used in electrophysiology (EP) systems for ablation therapies. In this paper, we begin with a brief overview of some common techniques used to simulate two major AF mechanisms: spiral waves (or rotors) and point (or focal) sources. We particularly focus on 2D simulations using Nygren et al.'s mathematical model of the human atrial cell. Then, we elucidate an application of the developed AF simulation to an algorithm designed for localizing AF rotors to improve current AF ablation therapies. Our simulation methods and results, along with the other discussions presented in this paper, are aimed at providing engineers and professionals with a working knowledge of application-specific simulations of spirals and foci. PMID:29629398
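
    A full Nygren-model simulation is far beyond a snippet, but the qualitative excitation/refractory dynamics that support spiral waves can be illustrated with a Greenberg-Hastings cellular automaton. This is a deliberately simplified stand-in for an excitable medium, not the paper's model; the threshold and refractory length are arbitrary.

```python
def step(grid, excited_threshold=1, refractory=3):
    """One update of a Greenberg-Hastings excitable-medium automaton.
    State 0 = rest, 1 = excited, 2..refractory+1 = refractory.
    A resting cell fires if enough 4-neighbours are excited; an excited
    or refractory cell advances through the refractory states to rest."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            s = grid[r][c]
            if s == 0:
                n = sum(1 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < rows and 0 <= c + dc < cols
                        and grid[r + dr][c + dc] == 1)
                new[r][c] = 1 if n >= excited_threshold else 0
            else:
                new[r][c] = (s + 1) % (refractory + 2)
    return new
```

    Seeding a broken wavefront in such a medium makes the free end curl into a persistent rotor, which is the spiral-wave mechanism the abstract refers to; an intact plane wave simply propagates and extinguishes at the boundary.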

  18. Monitoring the use and outcomes of new devices and procedures: how does coding affect what Hospital Episode Statistics contribute? Lessons from 12 emerging procedures 2006-10.

    PubMed

    Patrick, Hannah; Sims, Andrew; Burn, Julie; Bousfield, Derek; Colechin, Elaine; Reay, Christopher; Alderson, Neil; Goode, Stephen; Cunningham, David; Campbell, Bruce

    2013-03-01

    New devices and procedures are often introduced into health services when the evidence base for their efficacy and safety is limited. The authors sought to assess the availability and accuracy of routinely collected Hospital Episodes Statistics (HES) data in the UK and their potential contribution to the monitoring of new procedures. Four years of HES data (April 2006-March 2010) were analysed to identify episodes of hospital care involving a sample of 12 new interventional procedures. HES data were cross checked against other relevant sources including national or local registers and manufacturers' information. HES records were available for all 12 procedures during the entire study period. Comparative data sources were available from national (5), local (2) and manufacturer (2) registers. Factors found to affect comparisons were miscoding, alternative coding and inconsistent use of subsidiary codes. The analysis of provider coverage showed that HES is sensitive at detecting centres which carry out procedures, but specificity is poor in some cases. Routinely collected HES data have the potential to support quality improvements and evidence-based commissioning of devices and procedures in health services but achievement of this potential depends upon the accurate coding of procedures.

  19. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore, the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.
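
    Under the hood, both techniques maintain an occupancy grid updated from range scans. A minimal, hypothetical sketch of a single beam update follows (Bresenham ray tracing with arbitrary belief values rather than the proper log-odds update the real algorithms use):

```python
import math

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def update_grid(grid, sensor, angle, rng, resolution=1.0):
    """Mark cells crossed by one range beam as free (0.2) and the
    endpoint as occupied (0.9); grid values are occupancy beliefs."""
    x0, y0 = sensor
    x1 = x0 + int(round(rng * math.cos(angle) / resolution))
    y1 = y0 + int(round(rng * math.sin(angle) / resolution))
    ray = bresenham(x0, y0, x1, y1)
    for cx, cy in ray[:-1]:
        grid[cy][cx] = 0.2   # free space along the beam
    ex, ey = ray[-1]
    grid[ey][ex] = 0.9       # obstacle at the beam endpoint
```

    The Kinect's narrow field of view simply means fewer such beams per scan than a laser scanner provides, which is one reason featureless areas map poorly in the paper's experiments.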

  20. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore, the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  1. A Collaborative Secure Localization Algorithm Based on Trust Model in Underwater Wireless Sensor Networks

    PubMed Central

    Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.

    2016-01-01

    Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require location information of sensor nodes. Nowadays, a large number of localization algorithms have been proposed for UWSNs, and how to improve location accuracy is well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on a Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the compared related works in terms of location security, average localization accuracy and localization ratio. PMID:26891300
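
    The CSLT details are beyond the abstract, but the role of trust values in localization can be illustrated with a trust-weighted linearized trilateration, in which each anchor's range equation is weighted by its trust score so that low-trust (potentially compromised) anchors contribute less. The geometry and weighting scheme below are illustrative assumptions.

```python
def trust_weighted_position(anchors, ranges, trust):
    """Weighted linearized trilateration in 2D: subtracting the last
    anchor's range equation from the others yields linear equations
    2*(xn-xi)*x + 2*(yn-yi)*y = ri^2 - rn^2 - xi^2 + xn^2 - yi^2 + yn^2,
    solved here via trust-weighted 2x2 normal equations."""
    xn, yn = anchors[-1]
    rn = ranges[-1]
    rows, rhs, w = [], [], []
    for (xi, yi), ri, ti in zip(anchors[:-1], ranges[:-1], trust[:-1]):
        rows.append((2 * (xn - xi), 2 * (yn - yi)))
        rhs.append(ri * ri - rn * rn - xi * xi + xn * xn - yi * yi + yn * yn)
        w.append(ti)
    a11 = sum(ti * ax * ax for (ax, ay), ti in zip(rows, w))
    a12 = sum(ti * ax * ay for (ax, ay), ti in zip(rows, w))
    a22 = sum(ti * ay * ay for (ax, ay), ti in zip(rows, w))
    b1 = sum(ti * ax * b for (ax, ay), b, ti in zip(rows, rhs, w))
    b2 = sum(ti * ay * b for (ax, ay), b, ti in zip(rows, rhs, w))
    det = a11 * a22 - a12 * a12
    x = (a22 * b1 - a12 * b2) / det
    y = (a11 * b2 - a12 * b1) / det
    return x, y
```

    In a CSLT-like pipeline, the trust evaluation sub-processes would supply the per-anchor weights, and the secondary localization re-runs a solve of this kind with the selected reference nodes.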

  2. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
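
    A minimal sketch of such a scheme for the linear advection equation u_t + u_x = 0 on a periodic grid: first-order upwind fluxes corrected with minmod-limited slopes, which keeps the update second-order in smooth regions while remaining TVD. This is a generic MUSCL-type illustration rather than the paper's exact scheme.

```python
def minmod(a, b):
    """Minmod limiter: smallest-magnitude argument if signs agree, else 0."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_step(u, cfl):
    """One conservative update for u_t + u_x = 0 (unit speed, periodic
    domain): upwind interface states plus a limited slope correction."""
    n = len(u)
    slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # interface value just left of interface i+1/2 (upwind for positive speed)
    flux = [u[i] + 0.5 * (1 - cfl) * slopes[i] for i in range(n)]
    return [u[i] - cfl * (flux[i] - flux[i - 1]) for i in range(n)]

def total_variation(u):
    n = len(u)
    return sum(abs(u[i] - u[i - 1]) for i in range(n))
```

    Advecting a step profile shows the characteristic TVD behavior: the total variation never grows (no Gibbs oscillations at the discontinuity), and the conservative flux form preserves the total mass exactly.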

  3. SU-E-T-268: Proton Radiosurgery End-To-End Testing Using Lucy 3D QA Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, D; Gordon, I; Ghebremedhin, A

    2014-06-01

    Purpose: To check the overall accuracy of proton radiosurgery treatment delivery using ready-made circular collimator inserts and fixed thickness compensating boluses. Methods: A Lucy 3D QA phantom (Standard Imaging Inc., WI, USA) inserted with GaFchromic film was irradiated with laterally scattered and longitudinally spread-out 126.8 MeV proton beams. The tests followed every step in the proton radiosurgery treatment delivery process: CT scan (GE Lightspeed VCT), target contouring, treatment planning (Odyssey 5.0, Optivus, CA), portal calibration, target localization using a robotic couch with image guidance, and dose delivery at planned gantry angles. A 2 cm diameter collimator insert in a 4 cm diameter radiosurgery cone and a 1.2 cm thick compensating flat bolus were used for all beams. Film dosimetry (RIT114 v5.0, Radiological Imaging Technology, CO, USA) was used to evaluate the accuracy of target localization and relative dose distributions compared to those calculated by the treatment planning system. Results: The localization accuracy was estimated by analyzing the GaFchromic films irradiated at gantry 0, 90 and 270 degrees. We observed a 0.5 mm shift in the lateral direction (patient left), a ±0.9 mm shift in the AP direction and a ±1.0 mm shift in the vertical direction (gantry dependent). The isodose overlays showed good agreement (<2 mm, 50% isodose lines) between measured and calculated doses. Conclusion: Localization accuracy depends on gantry sag, CT resolution and distortion, DRRs from the treatment planning computer, localization accuracy of the image guidance system, and fabrication of the ready-made aperture and cone housing. The total deviation from the isocenter was 1.4 mm. Dose distribution uncertainty comes from distal end error due to bolus and CT density, in addition to localization error. The planned dose distribution was well matched (>90%) to the measured values under the 2%/2 mm criteria. Our test showed the robustness of our proton radiosurgery treatment delivery system using ready-made collimator inserts and fixed thickness compensating boluses.

  4. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    PubMed

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
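
    The reported error metric, the root mean square of systematic error over all inserts at a given energy, is simply the following (the example values are invented, not the study's measurements):

```python
import math

def rms_systematic_error(measured, true):
    """Root mean square of (measured - true) CT numbers over all inserts."""
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured, true))
                     / len(true))
```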

  5. Combining accuracy assessment of land-cover maps with environmental monitoring programs

    Treesearch

    Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu

    2000-01-01

    A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...

  6. Multi-Autonomous Ground-robotic International Challenge (MAGIC) 2010

    DTIC Science & Technology

    2010-12-14

SLAM technique since this setup, having a LIDAR with long-range high-accuracy measurement capability, allows accurate localization and mapping more ... achieve the accuracy of 25 cm due to the use of multi-dimensional information. OGM is, similarly to SLAM, carried out by using LIDAR data. The OGM ... a result of the development and implementation of the hybrid feature-based/scan-matching Simultaneous Localization and Mapping (SLAM) technique, the

  7. Modeling the uncertainty of estimating forest carbon stocks in China

    NASA Astrophysics Data System (ADS)

    Yue, T. X.; Wang, Y. F.; Du, Z. P.; Zhao, M. W.; Zhang, L. L.; Zhao, N.; Lu, M.; Larocque, G. R.; Wilson, J. P.

    2015-12-01

Earth surface systems are controlled by a combination of global and local factors, which cannot be understood without accounting for both components; the system dynamics cannot be recovered from the global or local controls alone. Ground forest inventory is able to accurately estimate forest carbon stocks at sample plots, but these sample plots are too sparse to support the spatial simulation of carbon stocks with the required accuracy. Satellite observation is an important source of global information for the simulation of carbon stocks. Satellite remote sensing can supply spatially continuous information about surface forest carbon stocks, which is impossible from ground-based investigations, but its description has considerable uncertainty. In this paper, we validated the Lund-Potsdam-Jena dynamic global vegetation model (LPJ), the Kriging method for spatial interpolation of ground sample plots, and a satellite-observation-based approach, as well as an approach for fusing the ground sample plots with satellite observations and an assimilation method for incorporating the ground sample plots into LPJ. The validation results indicated that both the data fusion and data assimilation approaches reduced the uncertainty of estimating carbon stocks. The data fusion had the lowest uncertainty, using an existing method for high accuracy surface modeling to fuse the ground sample plots with the satellite observations (HASM-SOA). The estimates produced with HASM-SOA were 26.1% and 28.4% more accurate than the satellite-based approach and spatial interpolation of the sample plots, respectively. Forest carbon stocks of 7.08 Pg were estimated for China during the period from 2004 to 2008, an increase of 2.24 Pg from 1984 to 2008, using the preferred HASM-SOA method.

  8. Northern Hemisphere observations of ICRF sources on the USNO stellar catalogue frame

    NASA Astrophysics Data System (ADS)

    Fienga, A.; Andrei, A. H.

    2004-06-01

The most recent USNO stellar catalogue, the USNO B1.0 (Monet et al. 2003), provides positions for 1 042 618 261 objects, with a published astrometric accuracy of 200 mas and five-band magnitudes with a 0.3 mag accuracy. Its completeness is believed to extend to magnitude 21 in the V band. Such a catalogue is a very good tool for astrometric reduction. This work investigates the accuracy of the USNO B1.0 link to the ICRF and gives an estimate of its internal and external accuracies by comparison with different catalogues, and by computation of ICRF source positions using USNO B1.0 star positions.

  9. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.

  10. Location estimation in wireless sensor networks using spring-relaxation technique.

    PubMed

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

Accurate and low-cost autonomous self-localization is a critical requirement for various applications of a large-scale distributed wireless sensor network (WSN). Because of the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its localization accuracy.
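The spring-relaxation idea can be sketched in a few lines: each measured range acts as a spring between the unknown node and a neighbour, and the position estimate is relaxed until the spring forces balance. The sketch below is a minimal 2-D illustration with assumed anchor positions and noise-free ranges, not the paper's distributed algorithm:

```python
def spring_relax(anchors, est, dists, iters=1000, step=0.1):
    """Iteratively refine a node's 2-D position estimate.

    anchors: dict node_id -> known (x, y) of a neighbour
    est:     initial (x, y) guess for the unknown node
    dists:   dict node_id -> measured range to that neighbour
    Each measured range acts like a spring at its rest length: the node
    is pulled along each neighbour direction until geometric distance
    matches measurement (equivalent to gradient descent on range error).
    """
    x, y = est
    for _ in range(iters):
        fx = fy = 0.0
        for nid, d_meas in dists.items():
            ax, ay = anchors[nid]
            dx, dy = x - ax, y - ay
            d_geo = max((dx * dx + dy * dy) ** 0.5, 1e-9)
            # spring force proportional to the range error, directed
            # along the line to the neighbour
            err = d_meas - d_geo
            fx += err * dx / d_geo
            fy += err * dy / d_geo
        x += step * fx
        y += step * fy
    return x, y

# hypothetical scenario: three anchors, node truly at (4, 3)
anchors = {1: (0.0, 0.0), 2: (10.0, 0.0), 3: (0.0, 10.0)}
true_pos = (4.0, 3.0)
dists = {nid: ((true_pos[0] - ax) ** 2 + (true_pos[1] - ay) ** 2) ** 0.5
         for nid, (ax, ay) in anchors.items()}
x, y = spring_relax(anchors, (8.0, 8.0), dists)
```

In the cooperative version described in the paper, every node repeats such a relaxation with its neighbours' current estimates, so position information propagates from the few nodes with known locations.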

  11. Brain-behavior relationships in source memory: Effects of age and memory ability.

    PubMed

    Meusel, Liesel-Ann; Grady, Cheryl L; Ebert, Patricia E; Anderson, Nicole D

    2017-06-01

    There is considerable evidence for age-related decrements in source memory retrieval, but the literature on the neural correlates of these impairments is mixed. In this study, we used functional magnetic resonance imaging to examine source memory retrieval-related brain activity, and the monotonic relationship between retrieval-related brain activity and source memory accuracy, as a function of both healthy aging (younger vs older) and memory ability within the older adult group (Hi-Old vs Lo-Old). Participants studied lists of word pairs, half visually, half aurally; these were re-presented visually in a scanned test phase and participants indicated if the pair was 'seen' or 'heard' in the study phase. The Lo-Old, but not the Hi-Old, showed source memory performance decrements compared to the Young. During retrieval of source memories, younger and older adults engaged lateral and medial prefrontal cortex (PFC) and medial posterior parietal (and occipital) cortices. The groups differed in how brain activity related to source memory accuracy in dorsal anterior cingulate cortex, precuneus/cuneus, and the inferior parietal cortex; in each of these areas, greater activity was associated with poorer accuracy in the Young, but with higher accuracy in the Hi-Old (anterior cingulate and precuneus/cuneus) and Lo-Old (inferior parietal lobe). Follow-up pairwise group interaction analyses revealed that greater activity in right parahippocampal gyrus was associated with better source memory in the Hi-Old, but not in the Lo-Old. We conclude that older adults recruit additional brain regions to compensate for age-related decline in source memory, but the specific regions involved differ depending on their episodic memory ability. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Variation in Microbial Identification System Accuracy for Yeast Identification Depending on Commercial Source of Sabouraud Dextrose Agar

    PubMed Central

    Kellogg, James A.; Bankert, David A.; Chaturvedi, Vishnu

    1999-01-01

    The accuracy of the Microbial Identification System (MIS; MIDI, Inc.) for identification of yeasts to the species level was compared by using 438 isolates grown on prepoured BBL Sabouraud dextrose agar (SDA) and prepoured Remel SDA. Correct identification was observed for 326 (74%) of the yeasts cultured on BBL SDA versus only 214 (49%) of yeasts grown on Remel SDA (P < 0.001). The commercial source of the SDA used in the MIS procedure significantly influences the system’s accuracy. PMID:10325387

  13. Locating very high energy gamma-ray sources with arcminute accuracy

    NASA Technical Reports Server (NTRS)

    Akerlof, C. W.; Cawley, M. F.; Chantell, M.; Harris, K.; Lawrence, M. A.; Fegan, D. J.; Lang, M. J.; Hillas, A. M.; Jennings, D. G.; Lamb, R. C.

    1991-01-01

    The angular accuracy of gamma-ray detectors is intrinsically limited by the physical processes involved in photon detection. Although a number of pointlike sources were detected by the COS B satellite, only two have been unambiguously identified by time signature with counterparts at longer wavelengths. By taking advantage of the extended longitudinal structure of VHE gamma-ray showers, measurements in the TeV energy range can pinpoint source coordinates to arcminute accuracy. This has now been demonstrated with new data analysis procedures applied to observations of the Crab Nebula using Cherenkov air shower imaging techniques. With two telescopes in coincidence, the individual event circular probable error will be 0.13 deg. The half-cone angle of the field of view is effectively 1 deg.

  14. SU-E-J-37: Feasibility of Utilizing Carbon Fiducials to Increase Localization Accuracy of Lumpectomy Cavity for Partial Breast Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y; Hieken, T; Mutter, R

    2015-06-15

    Purpose: To investigate the feasibility of utilizing carbon fiducials to increase localization accuracy of the lumpectomy cavity for partial breast irradiation (PBI). Methods: Carbon fiducials were placed intraoperatively in the lumpectomy cavity following resection of breast cancer in 11 patients. The patients were scheduled to receive whole breast irradiation (WBI) with a boost or 3D-conformal PBI. WBI patients were initially set up to skin tattoos using lasers, followed by orthogonal kV on-board-imaging (OBI) matching to bone per clinical practice. Cone beam CT (CBCT) was acquired weekly for offline review. For the boost component of WBI and for PBI, patients were set up with lasers, followed by OBI matching to fiducials, with final alignment by CBCT matching to fiducials. Using carbon fiducials as a surrogate for the lumpectomy cavity and CBCT matching to fiducials as the gold standard, setup uncertainties relative to lasers, OBI bone, OBI fiducials, and CBCT breast tissue were compared. Results: Minimal imaging artifacts were introduced by the fiducials on the planning CT and CBCT. The fiducials were sufficiently visible on OBI for online localization. The mean magnitude and standard deviation of setup errors were 8.4 mm ± 5.3 mm (n=84), 7.3 mm ± 3.7 mm (n=87), 2.2 mm ± 1.6 mm (n=40), and 4.8 mm ± 2.6 mm (n=87) for lasers, OBI bone, OBI fiducials, and CBCT breast tissue, respectively. Significant migration occurred in one of 39 implanted fiducials in a patient with a large postoperative seroma. Conclusion: OBI carbon fiducial-based setup can improve localization accuracy with minimal imaging artifacts. With increased localization accuracy, setup uncertainties can be reduced from 8 mm using OBI bone matching to 3 mm using OBI fiducial matching for PBI treatment. This work demonstrates the feasibility of utilizing carbon fiducials to increase localization accuracy to the lumpectomy cavity for PBI. This may be particularly attractive for localization in the setting of proton therapy and other scenarios in which metal clips are contraindicated.

  15. Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.

    PubMed

    Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam

    2018-05-26

    Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems suffer drift error and large bias due to low-cost inertial sensors and random human motion, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches that integrate the user's motion estimated by the dead reckoning (DR) method and the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), and particle filter (PF), have recently been proposed to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, the PF is the most popular integrating approach and can provide the best localization performance. However, since the PF uses a large number of particles to achieve this performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting using iBeacon and a machine learning scheme, and an improved KF. The core of the system is the enhanced KF called a sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study provides enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm can achieve better positioning accuracy than the KF and UKF and comparable performance to the PF, with higher computational efficiency than the PF. iBeacon in our positioning system is used for energy-efficient localization and RSS fingerprinting. We aim to design a localization scheme that achieves high positioning accuracy, computational efficiency, and energy efficiency indoors through the SKPF and iBeacon. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
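The predict/update fusion of dead reckoning and fingerprint fixes that such Bayesian-filter schemes build on can be illustrated with a minimal 1-D Kalman filter. This sketch shows only the generic fusion idea, not the proposed SKPF (which replaces the linear update with sigma-point propagation and particle-style weighting); all step and variance values are assumed:

```python
def kf_fuse(steps, fixes, q=0.04, r=4.0):
    """Fuse DR step displacements (prediction) with noisy fingerprint
    position fixes (update) using a 1-D Kalman filter.

    steps: per-interval displacements from dead reckoning
    fixes: fingerprint position estimates, one more than steps
    q, r:  assumed process and measurement noise variances
    """
    x, p = fixes[0], r              # initialise state from first fix
    track = [x]
    for step, z in zip(steps, fixes[1:]):
        # predict: apply the DR displacement, grow the uncertainty
        x += step
        p += q
        # update: blend in the fingerprint fix via the Kalman gain
        k = p / (p + r)
        x += k * (z - x)
        p *= 1.0 - k
        track.append(x)
    return track

# hypothetical walk: five 1 m steps, slightly noisy fingerprint fixes
steps = [1.0] * 5
fixes = [0.0, 1.2, 1.9, 3.1, 3.9, 5.1]
track = kf_fuse(steps, fixes)
```

The same structure carries over to 2-D positions; the gain simply weights the DR prediction against the fingerprint fix according to their relative uncertainties.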

  16. Comparison of Pelvic Phased-Array versus Endorectal Coil Magnetic Resonance Imaging at 3 Tesla for Local Staging of Prostate Cancer

    PubMed Central

    Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun

    2012-01-01

    Purpose Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Materials and Methods Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coils and 88 pelvic phased-array coils). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Results Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were statistically no differences in specificity, sensitivity and accuracy between the two groups. Conclusion Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI. PMID:22476999

  17. Spectral triangulation: a 3D method for locating single-walled carbon nanotubes in vivo

    NASA Astrophysics Data System (ADS)

    Lin, Ching-Wei; Bachilo, Sergei M.; Vu, Michael; Beckingham, Kathleen M.; Bruce Weisman, R.

    2016-05-01

    Nanomaterials with luminescence in the short-wave infrared (SWIR) region are of special interest for biological research and medical diagnostics because of favorable tissue transparency and low autofluorescence backgrounds in that region. Single-walled carbon nanotubes (SWCNTs) show well-known sharp SWIR spectral signatures and therefore have potential for noninvasive detection and imaging of cancer tumours, when linked to selective targeting agents such as antibodies. However, such applications face the challenge of sensitively detecting and localizing the source of SWIR emission from inside tissues. A new method, called spectral triangulation, is presented for three dimensional (3D) localization using sparse optical measurements made at the specimen surface. Structurally unsorted SWCNT samples emitting over a range of wavelengths are excited inside tissue phantoms by an LED matrix. The resulting SWIR emission is sampled at points on the surface by a scanning fibre optic probe leading to an InGaAs spectrometer or a spectrally filtered InGaAs avalanche photodiode detector. Because of water absorption, attenuation of the SWCNT fluorescence in tissues is strongly wavelength-dependent. We therefore gauge the SWCNT-probe distance by analysing differential changes in the measured SWCNT emission spectra. SWCNT fluorescence can be clearly detected through at least 20 mm of tissue phantom, and the 3D locations of embedded SWCNT test samples are found with sub-millimeter accuracy at depths up to 10 mm. Our method can also distinguish and locate two embedded SWCNT sources at distinct positions. Electronic supplementary information (ESI) available: Details concerning instrumental design, experimental procedures, related experiments, and triangulation computations, plus a video showing operation of the scanner. See DOI: 10.1039/c6nr01376g
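The distance-gauging step can be illustrated with a simple Beer-Lambert model: because tissue attenuation is wavelength-dependent, the ratio of intensities measured at two wavelengths encodes the path length. The attenuation coefficients below are assumed for illustration only and are not the paper's calibrated values:

```python
import math

def depth_from_ratio(i1, i2, mu1, mu2):
    """Estimate emitter depth from intensities i1, i2 measured at two
    wavelengths with assumed attenuation coefficients mu1, mu2 (1/mm),
    using the Beer-Lambert model I(lam) = I0(lam) * exp(-mu(lam) * d).
    Equal unattenuated emission at both wavelengths is assumed here;
    in practice i1, i2 would first be normalised by the emitter's
    intrinsic spectral ratio.
    """
    # ln(i1 / i2) = (mu2 - mu1) * d  =>  solve for d
    return math.log(i1 / i2) / (mu2 - mu1)

# hypothetical coefficients: water absorption is stronger at the longer
# wavelength, so the spectral ratio shifts measurably with depth
mu1, mu2 = 0.05, 0.20          # 1/mm at the two wavelengths (assumed)
d_true = 8.0                   # mm
i1 = math.exp(-mu1 * d_true)   # simulated surface intensities
i2 = math.exp(-mu2 * d_true)
d_est = depth_from_ratio(i1, i2, mu1, mu2)
```

Repeating such depth estimates at several surface points and intersecting them is the essence of the triangulation described in the abstract.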

  18. Microseismic event location by master-event waveform stacking

    NASA Astrophysics Data System (ADS)

    Grigoli, F.; Cesca, S.; Dahm, T.

    2016-12-01

    Waveform stacking location methods are nowadays extensively used to monitor induced seismicity associated with several underground industrial activities, such as mining, oil and gas production, and geothermal energy exploitation. In the last decade a significant effort has been spent to develop or improve methodologies able to perform automated seismological analysis of weak events at a local scale. This effort was accompanied by the improvement of monitoring systems, resulting in an increasing number of large microseismicity catalogs. The analysis of microseismicity is challenging because of the large number of recorded events, often characterized by a low signal-to-noise ratio. A significant limitation of traditional location approaches is that automated picking is often done on each seismogram individually, making little or no use of the coherency information between stations. In order to improve on the traditional location methods, alternative approaches have recently been proposed. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Their main advantage is robustness even when the recorded waveforms are very noisy. On the other hand, like any other location method, their performance strongly depends on the accuracy of the available velocity model: with inaccurate velocity models, location results can be affected by large errors. Here we introduce a new automated waveform stacking location method which is less dependent on knowledge of the velocity model and presents several benefits that improve the location accuracy: 1) it accounts for phase delays due to local site effects, e.g. surface topography or variable sediment thickness; 2) theoretical velocity models are only used to estimate travel times within the source volume, and not along the whole source-sensor path.
We finally compare the location results for both synthetics and real data with those obtained by using classical waveforms stacking approaches.
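The stacking principle these methods share can be sketched as a grid search: back-shift each station's characteristic function by the predicted travel time from a candidate source and keep the candidate that maximizes the stack. The example below is a hypothetical 1-D, constant-velocity illustration of that principle, not the authors' master-event method:

```python
import numpy as np

v = 2000.0            # assumed constant velocity, m/s
dt = 0.001            # sample interval, s
stations = np.array([0.0, 1500.0, 3000.0, 4500.0])  # station coords, m
true_src = 1000.0     # true source coordinate, m
n = 4000
t = np.arange(n) * dt

# synthetic "characteristic functions": a Gaussian pulse at each
# station, delayed by the true travel time (origin time 0.5 s)
traces = np.empty((len(stations), n))
for i, s in enumerate(stations):
    t_arr = abs(s - true_src) / v
    traces[i] = np.exp(-((t - 0.5 - t_arr) ** 2) / (2 * 0.01 ** 2))

# grid search: back-shift traces by predicted travel times and stack;
# the stack peaks when the shifts align all arrivals
grid = np.arange(0.0, 4000.0, 25.0)
best_x, best_val = None, -np.inf
for xc in grid:
    stack = np.zeros(n)
    for i, s in enumerate(stations):
        shift = int(round(abs(s - xc) / v / dt))
        stack[:n - shift] += traces[i, shift:]
    val = stack.max()
    if val > best_val:
        best_x, best_val = xc, val
```

Real implementations scan a 3-D grid, use STA/LTA-type characteristic functions rather than clean pulses, and compute travel times from a layered or 3-D velocity model.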

  19. Advancements in Afterbody Radiative Heating Simulations for Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Panesi, Marco; Brandis, Aaron M.

    2016-01-01

    Four advancements to the simulation of backshell radiative heating for Earth entry are presented. The first of these is the development of a flow field model that treats electronic levels of the dominant backshell radiator, N, as individual species. This is shown to allow improvements in the modeling of electron-ion recombination and two-temperature modeling, which are shown to increase backshell radiative heating by 10 to 40%. By computing the electronic state populations of N within the flow field solver, instead of through the quasi-steady state approximation in the radiation code, the coupling of radiative transition rates to the species continuity equations for the levels of N, including the impact of non-local absorption, becomes feasible. Implementation of this additional level of coupling between the flow field and radiation codes represents the second advancement presented in this work, which is shown to increase the backshell radiation by another 10 to 50%. The impact of radiative transition rates due to non-local absorption indicates the importance of accurate radiation transport in the relatively complex flow geometry of the backshell. This motivates the third advancement, which is the development of a ray-tracing radiation transport approach to compute the radiative transition rates and divergence of the radiative flux at every point for coupling to the flow field, therefore allowing the accuracy of the commonly applied tangent-slab approximation to be assessed for radiative source terms. For the sphere considered at lunar-return conditions, the tangent-slab approximation is shown to provide a sufficient level of accuracy for the radiative source terms, even for backshell cases. This is in contrast to the agreement between the two approaches for computing the radiative flux to the surface, which differ by up to 40%. 
The final advancement presented is the development of a nonequilibrium model for NO radiation, which provides significant backshell radiation at velocities below 10 km/s. The developed model reduces the nonequilibrium NO radiation by 50% relative to the previous model.

  20. Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG

    PubMed Central

    Mullen, Tim R.; Kothe, Christian A.E.; Chi, Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2015-01-01

    Goal We present and evaluate a wearable high-density dry electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods The system integrates a 64-channel dry EEG form-factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in 9 subjects using the dry EEG system. Results Simulations yielded high accuracy (AUC=0.97±0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity (sdDTF) was significantly above chance with similar performance (AUC) for cLORETA (0.74±0.09) and LCMV (0.72±0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74±0.16) but significantly better for LCMV (0.82±0.12). Conclusion We demonstrated the feasibility for real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance This paper is the first validated application of these methods to 64-channel dry EEG. The work addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes. PMID:26415149

  1. Hydroacoustic Signals Recorded by the International Monitoring System

    NASA Astrophysics Data System (ADS)

    Blackman, D.; de Groot-Hedlin, C.; Orcutt, J.; Harben, P.

    2002-12-01

    Networks of hydrophones, such as the hydroacoustic part of the International Monitoring System (IMS), and hydrophone arrays, such as the U.S. Navy operates, record many types of signals, some of which travel thousands of kilometers in the oceanic sound channel. Abyssal earthquakes generate many such individual events and occasionally occur in swarms. Here we focus on signals generated by other types of sources, illustrating their character with recent data, mostly from the Indian Ocean. Shipping generates signals in the 5-40 Hz band. Large airgun arrays can generate T-waves that travel across an ocean basin if the near-source seafloor has an appropriate depth/slope. Airgun array shots from our 2001 experiment were located with an accuracy of 25-40 km at 700-1000 km ranges, using data from a Diego Garcia tripartite sensor station. Shots at greater range (up to 4800 km) were recorded at multiple stations, but their higher background noise levels in the 5-30 Hz band resulted in location errors of ~100 km. Imploding glass spheres shattered within the sound channel produce a very impulsive arrival, even after propagating 4400 km. Recordings of the sphere signal have energy concentrated in the band above 40 Hz. Natural sources such as undersea volcanic eruptions and marine mammals also produce signals that are clearly evident in hydrophone recordings. For whales, the frequency range is 20-120 Hz, and specific patterns of vocalization characterize different species. Volcanic eruptions typically produce intense swarms of acoustic activity that last for days to weeks, and the source area can migrate tens of kilometres during this period. The utility of these types of hydroacoustic sources for research and/or monitoring purposes depends on the accuracy with which recordings can be used to locate and quantitatively characterize the source. Oceanic weather, both local and regional, affects background noise levels in key frequency bands at the recording stations.
Databases used in forward modeling of propagation and acoustic losses can be sparse in remote regions. Our Indian Ocean results suggest that when bathymetric coverage is poor, predictions for 8 Hz propagation/loss match observations better than those for propagation of 30 Hz signals over 1000-km distances.

  2. Cortical reinstatement and the confidence and accuracy of source memory.

    PubMed

    Thakral, Preston P; Wang, Tracy H; Rugg, Michael D

    2015-04-01

    Cortical reinstatement refers to the overlap between neural activity elicited during the encoding and the subsequent retrieval of an episode, and is held to reflect retrieved mnemonic content. Previous findings have demonstrated that reinstatement effects reflect the quality of retrieved episodic information as this is operationalized by the accuracy of source memory judgments. The present functional magnetic resonance imaging (fMRI) study investigated whether reinstatement-related activity also co-varies with the confidence of accurate source judgments. Participants studied pictures of objects along with their visual or spoken names. At test, they first discriminated between studied and unstudied pictures and then, for each picture judged as studied, they also judged whether it had been paired with a visual or auditory name, using a three-point confidence scale. Accuracy of source memory judgments, and hence the quality of the source-specifying information, was greater for high than for low confidence judgments. Modality-selective retrieval-related activity (reinstatement effects) also co-varied with the confidence of the corresponding source memory judgment. The findings indicate that the quality of the information supporting accurate judgments of source memory is indexed by the relative magnitude of content-selective, retrieval-related neural activity. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. MEG biomarker of Alzheimer's disease: Absence of a prefrontal generator during auditory sensory gating.

    PubMed

    Josef Golubic, Sanja; Aine, Cheryl J; Stephen, Julia M; Adair, John C; Knoefel, Janice E; Supek, Selma

    2017-10-01

    Magnetoencephalography (MEG), a direct measure of neuronal activity, is an underexplored tool in the search for biomarkers of Alzheimer's disease (AD). In this study, we used MEG source estimates of auditory gating generators, nonlinear correlations with neuropsychological results, and multivariate analyses to examine the sensitivity and specificity of gating topology modulation to detect AD. Our results demonstrated the use of MEG localization of a medial prefrontal (mPFC) gating generator as a discrete (binary) detector of AD at the individual level, and resulted in recategorizing the participants into: (1) controls with the mPFC generator localized in response to both the standard and deviant tones; (2) a possible preclinical stage of AD participants (a lower functioning group of controls) in which mPFC activation was localized to the deviant tone only; and (3) symptomatic AD in which mPFC activation was not localized to either the deviant or standard tones. This approach showed a large effect size (0.9) and high accuracy, sensitivity, and specificity (100%) in identifying symptomatic AD patients within a limited research sample. The present results demonstrate the high potential of mPFC activation as a noninvasive biomarker of AD pathology during putative preclinical and clinical stages. Hum Brain Mapp 38:5180-5194, 2017. © 2017 Wiley Periodicals, Inc.

  4. The effect of interacting dark energy on local measurements of the Hubble constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odderskov, Io; Baldi, Marco; Amendola, Luca, E-mail: isho07@phys.au.dk, E-mail: marco.baldi5@unibo.it, E-mail: l.amendola@thphys.uni-heidelberg.de

    2016-05-01

    In the current state of cosmology, where cosmological parameters are being measured to percent accuracy, it is essential to understand all sources of error to high precision. In this paper we present the results of a study of the local variations in the Hubble constant measured at the distance scale of the Coma Cluster, and test the validity of correcting for the peculiar velocities predicted by gravitational instability theory. The study is based on N-body simulations, and includes models featuring a coupling between dark energy and dark matter, as well as two ΛCDM simulations with different values of σ₈. It is found that the variance in the local flows is significantly larger in the coupled models, which increases the uncertainty in the local measurements of the Hubble constant in these scenarios. By comparing the results from the different simulations, it is found that most of the effect is caused by the higher value of σ₈ in the coupled cosmologies, though this cannot account for all of the additional variance. Given the discrepancy between different estimates of the Hubble constant in the universe today, cosmological models that cause a greater cosmic variance are something we should be aware of.
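
The cosmic-variance effect described above can be sketched with a toy mock (not the paper's N-body pipeline): galaxies follow a Hubble law plus Gaussian peculiar velocities, and a local least-squares fit of v = H·r then scatters around the fiducial value. All numbers (H0 = 70 km/s/Mpc, 300 km/s peculiar-velocity dispersion) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H0 = 70.0   # fiducial Hubble constant, km/s/Mpc (illustrative)

# Mock local-volume galaxies: distances in Mpc, peculiar velocities in km/s
r = rng.uniform(5.0, 100.0, size=500)
v_pec = rng.normal(0.0, 300.0, size=500)
v = H0 * r + v_pec            # observed radial velocities

# Local Hubble constant from a least-squares fit of v = H * r
H_est = np.sum(v * r) / np.sum(r * r)

# Peculiar velocities scatter the local estimate around the true H0;
# models with larger velocity dispersion would widen this scatter
assert abs(H_est - H0) < 2.0
```

Repeating the fit for many mock observers would give the distribution of local H0 values whose width is the cosmic variance studied in the paper.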

  5. Detection of buried magnetic objects by a SQUID gradiometer system

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Georg; Hartung, Konrad; Linzen, Sven; Schneider, Michael; Stolz, Ronny; Fried, Wolfgang; Hauspurg, Sebastian

    2009-05-01

    We present a magnetic detection system based on superconducting gradiometric sensors (SQUID gradiometers). The system provides uniquely fast mapping of large areas with a high resolution of both the magnetic field gradient and the local position. A main part of this work is the localization and classification of magnetic objects in the ground by automatic interpretation of geomagnetic field gradients measured by the SQUID system. Based on specific features, the field is decomposed into segments, which allow inferences about possible objects in the ground. The global consideration of object-describing properties and their optimization using error-minimization methods allows the reconstruction of superimposed features and the detection of buried objects. The analysis of measured geomagnetic fields is fully automatic: given a surface of area-measured gradients, the algorithm determines, within numerical limits, the absolute position of objects, including their depth, with sub-pixel accuracy, and allows arbitrary positions and attitudes of the sources. Several SQUID gradiometer data sets were used to show the applicability of the analysis algorithm.

  6. A Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization

    NASA Astrophysics Data System (ADS)

    Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.

    2017-07-01

    Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in the planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform an exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals between adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
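
The evaluation criteria mentioned above (coverage and distribution evenness of detected points) can be sketched with simple grid statistics. The definitions below are illustrative stand-ins, not the exact criteria developed in the paper:

```python
import numpy as np

def grid_metrics(points, width, height, nx=8, ny=8):
    """Coverage and evenness of detected points over an nx-by-ny grid.

    Coverage: fraction of grid cells containing at least one point.
    Evenness: 1 - (std / mean) of per-cell counts (1.0 = perfectly uniform).
    """
    cols = np.clip((points[:, 0] / width * nx).astype(int), 0, nx - 1)
    rows = np.clip((points[:, 1] / height * ny).astype(int), 0, ny - 1)
    counts = np.zeros((ny, nx))
    np.add.at(counts, (rows, cols), 1)
    coverage = np.mean(counts > 0)
    evenness = 1.0 - counts.std() / (counts.mean() + 1e-12)
    return coverage, evenness

rng = np.random.default_rng(1)
uniform_pts = rng.uniform([0, 0], [640, 480], size=(400, 2))
clustered_pts = rng.normal([320, 240], 30.0, size=(400, 2))

cov_u, ev_u = grid_metrics(uniform_pts, 640, 480)
cov_c, ev_c = grid_metrics(clustered_pts, 640, 480)
assert cov_u > cov_c     # uniform detections cover more cells
assert ev_u > ev_c       # and are more evenly distributed
```

A detector whose matched points score high on both metrics supports more stable stereo mapping across the image.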

  7. Effects of early focal brain injury on memory for visuospatial patterns: selective deficits of global-local processing.

    PubMed

    Stiles, Joan; Stern, Catherine; Appelbaum, Mark; Nass, Ruth; Trauner, Doris; Hesselink, John

    2008-01-01

    Selective deficits in visuospatial processing are present early in development among children with perinatal focal brain lesions (PL). Children with right hemisphere PL (RPL) are impaired in configural processing, while children with left hemisphere PL (LPL) are impaired in featural processing. Deficits associated with LPL are less pervasive than those observed with RPL, but this difference may reflect the structure of the tasks used for assessment. Many of the tasks used to date may place greater demands on configural processing, thus highlighting this deficit in the RPL group. This study employed a task designed to place comparable demands on configural and featural processing, providing the opportunity to obtain within-task evidence of differential deficit. Sixty-two 5- to 14-year-old children (19 RPL, 19 LPL, and 24 matched controls) reproduced from memory a series of hierarchical forms (large forms composed of small forms). Global- and local-level reproduction accuracy was scored. Controls were equally accurate on global- and local-level reproduction. Children with RPL were selectively impaired on global accuracy, and children with LPL on local accuracy, thus documenting a double dissociation in global-local processing.

  8. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor can effectively improve recognition performance. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contribution of partial features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553
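
A minimal sketch of the hybrid-descriptor idea, assuming hypothetical 5-dimensional 2D and 3D feature vectors that are concatenated, normalized, and matched by nearest neighbour (the actual features and classifier in the article differ):

```python
import numpy as np

rng = np.random.default_rng(6)

def hybrid_descriptor(f2d, f3d):
    """Concatenate 2D contour features and 3D point-cloud features into
    one scale-normalized hybrid vector (illustrative stand-in)."""
    v = np.concatenate([f2d, f3d])
    return v / (np.linalg.norm(v) + 1e-12)

# Toy gallery: 3 object categories, 20 noisy instances each, 10-dim features
protos = rng.normal(0.0, 1.0, size=(3, 10))
gallery = np.repeat(protos, 20, axis=0) + rng.normal(0.0, 0.2, (60, 10))
labels = np.repeat(np.arange(3), 20)

desc = np.array([hybrid_descriptor(g[:5], g[5:]) for g in gallery])

# A noisy query drawn from category 1, recognized by nearest neighbour
query = hybrid_descriptor(protos[1][:5] + rng.normal(0.0, 0.2, 5),
                          protos[1][5:] + rng.normal(0.0, 0.2, 5))
pred = labels[np.argmin(np.linalg.norm(desc - query, axis=1))]
assert pred == 1
```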

  9. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in the Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as the GA for this optimisation problem has been illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, fewer snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, resulting in erroneous estimates, whereas GA-based optimisation is attractive due to its global optimisation capability.
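
As a sketch of the optimisation step, the toy GA below minimises a multimodal stand-in fitness (1-D Rastrigin) in place of the cumulant-matrix fitness; it illustrates why a population-based search can escape the false local optima that trap Newton's method. All GA parameters are illustrative.

```python
import numpy as np

def rastrigin(x):
    # Multimodal toy fitness standing in for the cumulant-based DOA fitness;
    # global minimum at x = 0, local minima near every other integer
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def ga_minimize(f, lo=-5.0, hi=5.0, pop_size=60, gens=80, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(gens):
        order = np.argsort(f(pop))
        parents = pop[order[: pop_size // 2]]              # truncation selection
        mates = rng.permutation(parents)
        children = 0.5 * (parents + mates)                 # arithmetic crossover
        children += rng.normal(0.0, 0.1, children.size)    # Gaussian mutation
        pop = np.concatenate([parents, children])          # parents kept: elitism
    return pop[np.argmin(f(pop))]

best = ga_minimize(rastrigin)
# The GA settles in a near-global basin rather than a distant local optimum
assert rastrigin(best) < 2.0
```

A gradient-based method started at, say, x = 3 would descend into the nearest local minimum instead, which is the failure mode the abstract attributes to Newton's method.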

  10. A systematic evaluation of prevalence and diagnostic accuracy of sacroiliac joint interventions.

    PubMed

    Simopoulos, Thomas T; Manchikanti, Laxmaiah; Singh, Vijay; Gupta, Sanjeeva; Hameed, Haroon; Diwan, Sudhir; Cohen, Steven P

    2012-01-01

    The contributions of the sacroiliac joint to low back and lower extremity pain have been a subject of considerable debate and research. It is generally accepted that 10% to 25% of patients with persistent mechanical low back pain below L5 have pain secondary to sacroiliac joint pathology. However, no single historical, physical exam, or radiological feature can definitively establish a diagnosis of sacroiliac joint pain. Based on present knowledge, a proper diagnosis can only be made using controlled diagnostic blocks. The diagnosis and treatment of sacroiliac joint pain continue to be characterized by wide variability and a paucity of literature. To evaluate the accuracy of diagnostic sacroiliac joint interventions. A systematic review of diagnostic sacroiliac joint interventions. Methodological quality assessment of included studies was performed using the Quality Appraisal of Reliability Studies (QAREL) instrument. Only diagnostic accuracy studies meeting at least 50% of the designated inclusion criteria were utilized for analysis. Studies scoring less than 50% are presented descriptively and analyzed critically. The level of evidence was classified as good, fair, or poor based on the quality-of-evidence scheme developed by the United States Preventive Services Task Force (USPSTF). Data sources included relevant literature identified through searches of PubMed and EMBASE from 1966 to December 2011, and manual searches of the bibliographies of known primary and review articles. In this evaluation we utilized controlled local anesthetic blocks with at least 50% pain relief as the reference standard. The evidence is good for the diagnosis of sacroiliac joint pain utilizing controlled comparative local anesthetic blocks. The prevalence of sacroiliac joint pain is estimated to range between 10% and 62% based on the setting; however, the majority of analyzed studies suggest a point prevalence of around 25%, with a false-positive rate for uncontrolled blocks of approximately 20%.
The evidence for provocative testing to diagnose sacroiliac joint pain was fair. The evidence for the diagnostic accuracy of imaging is limited. The limitations of this systematic review include a paucity of literature, variations in technique, and variable criterion standards for the diagnosis of sacroiliac joint pain. Based on this systematic review, the evidence for the diagnostic accuracy of sacroiliac joint injections is good, the evidence for provocation maneuvers is fair, and evidence for imaging is limited.

  11. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks

    PubMed Central

    Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-01-01

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of the underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of the particles. Thus, the particle effectiveness is enhanced, avoiding the “particle degeneracy” problem and improving localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252

  12. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    PubMed

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of the underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of the particles. Thus, the particle effectiveness is enhanced, avoiding the "particle degeneracy" problem and improving localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
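
A minimal bootstrap particle filter for range-based target localization, with a plain Gaussian likelihood standing in for the paper's LSSVR-based observation function; node positions, noise levels, and the resampling scheme are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sensor nodes (anchors) at known positions; unknown target to localize
nodes = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
target = np.array([20.0, 35.0])

n = 2000
particles = rng.uniform(0.0, 50.0, size=(n, 2))   # initial belief
weights = np.full(n, 1.0 / n)
sigma = 1.0                                        # range-noise std

for _ in range(10):                                # measurement updates
    ranges = np.linalg.norm(nodes - target, axis=1) + rng.normal(0, sigma, 4)
    pred = np.linalg.norm(particles[:, None, :] - nodes[None, :, :], axis=2)
    # Gaussian likelihood of the observed ranges given each particle
    weights *= np.exp(-0.5 * np.sum((pred - ranges) ** 2, axis=1) / sigma**2)
    weights /= weights.sum()
    # Resampling (plus jitter) guards against particle degeneracy
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.5, size=(n, 2))
    weights = np.full(n, 1.0 / n)

estimate = particles.mean(axis=0)
assert np.linalg.norm(estimate - target) < 2.0
```

The paper's contribution sits in the likelihood line: replacing the Gaussian model with an LSSVR-learned observation function to cope with distorted range data.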

  13. EUV local CDU healing performance and modeling capability towards 5nm node

    NASA Astrophysics Data System (ADS)

    Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn

    2017-10-01

    Both local variability and optical proximity correction (OPC) errors are big contributors to the edge placement error (EPE) budget, which is closely related to device yield. Post-litho contact hole healing will be demonstrated to meet after-etch local variability specifications using a low-dose, 30 mJ/cm² dose-to-size, positive tone developed (PTD) resist with a throughput relevant for high volume manufacturing (HVM). The total local variability of node 5nm (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. The CD healing process has complex etch proximity effects, so achieving the OPC prediction accuracy needed to meet EPE requirements for N5 is challenging. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using the ASML Tachyon OPC model.
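
The local-variability metrics named above are commonly reported as 3-sigma statistics; a sketch on synthetic metrology data (the target CD and sigma values are illustrative, not N5 specifications):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic after-etch metrology for an array of contact holes
target_cd = 22.0                                 # design CD, nm (illustrative)
cds = rng.normal(target_cd, 0.8, size=500)       # measured CDs, nm
dx = rng.normal(0.0, 0.6, size=500)              # x placement offsets, nm

lcdu = 3.0 * cds.std(ddof=1)                     # local CD uniformity, 3-sigma
lpe_x = 3.0 * dx.std(ddof=1)                     # local placement error, 3-sigma
mean_to_target = cds.mean() - target_cd          # mean CD offset from design

assert 2.0 < lcdu < 2.9                          # around 3 * 0.8 nm
assert 1.5 < lpe_x < 2.1                         # around 3 * 0.6 nm
assert abs(mean_to_target) < 0.15
```

The EPE budget then combines such CD, placement, and OPC error terms; healing aims to shrink the LCDU contribution after etch.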

  14. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597

  15. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.

  16. Design of an HF-Band RFID System with Multiple Readers and Passive Tags for Indoor Mobile Robot Self-Localization

    PubMed Central

    Mi, Jian; Takahashi, Yasutake

    2016-01-01

    Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in an RFID system configuration requiring far fewer readers and tags while retaining reasonable self-localization accuracy. We verify the performance of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor, in both simulated and real environments. The results of simulations and real-environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot. PMID:27483279

  17. Design of an HF-Band RFID System with Multiple Readers and Passive Tags for Indoor Mobile Robot Self-Localization.

    PubMed

    Mi, Jian; Takahashi, Yasutake

    2016-07-29

    Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in an RFID system configuration requiring far fewer readers and tags while retaining reasonable self-localization accuracy. We verify the performance of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor, in both simulated and real environments. The results of simulations and real-environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot.
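
A sketch of the MCL measurement update for floor-tag detections: each particle is weighted by the likelihood of the observed detect/no-detect pattern under a distance-dependent detection model. The tag pitch, read range, and logistic likelihood below are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Floor tags on a 0.5 m grid; assumed read range of the reader antenna
tag_xy = np.array([[i * 0.5, j * 0.5] for i in range(9) for j in range(9)])
DETECT_RADIUS = 0.3   # m, illustrative

def detect_prob(dist):
    # Smooth likelihood: near-certain detection inside the read range,
    # decaying quickly outside (stand-in for the paper's likelihood model)
    return 1.0 / (1.0 + np.exp((dist - DETECT_RADIUS) / 0.05))

true_pose = np.array([2.1, 2.3])
detected = rng.random(len(tag_xy)) < detect_prob(
    np.linalg.norm(tag_xy - true_pose, axis=1))

particles = rng.uniform(0.0, 4.0, size=(5000, 2))
d = np.linalg.norm(particles[:, None, :] - tag_xy[None, :, :], axis=2)
p = detect_prob(d)
# Log-likelihood of the observed detection pattern for each particle
log_w = np.sum(np.where(detected, np.log(p + 1e-12),
                        np.log(1.0 - p + 1e-12)), axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

estimate = (particles * w[:, None]).sum(axis=0)
assert np.linalg.norm(estimate - true_pose) < 0.5
```

Non-detections carry information too: particles close to an undetected tag are heavily down-weighted, which is why a smooth likelihood model matters for accuracy.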

  18. Mapping human health risks from exposure to trace metal contamination of drinking water sources in Pakistan.

    PubMed

    Bhowmik, Avit Kumar; Alamdar, Ambreen; Katsoyiannis, Ioannis; Shen, Heqing; Ali, Nadeem; Ali, Syeda Maria; Bokhari, Habib; Schäfer, Ralf B; Eqani, Syed Ali Musstjab Akber Shah

    2015-12-15

    The consumption of contaminated drinking water is one of the major causes of mortality and many severe diseases in developing countries. The principal drinking water sources in Pakistan, i.e. ground and surface water, are subject to geogenic and anthropogenic trace metal contamination. However, water quality monitoring activities have been limited to a few administrative areas and a nationwide human health risk assessment from trace metal exposure is lacking. Using geographically weighted regression (GWR) and eight relevant spatial predictors, we calculated nationwide human health risk maps by predicting the concentrations of 10 trace metals in the drinking water sources of Pakistan and comparing them to guideline values. GWR incorporated local variations of trace metal concentrations into the prediction models and hence mitigated the effects of large distances between sampled districts due to data scarcity. Predicted concentrations mostly exhibited high accuracy and low uncertainty, and were in good agreement with observed concentrations. Concentrations for Central Pakistan were predicted with higher accuracy than for the North and South. A maximum 150- to 200-fold exceedance of guideline values was observed for predicted cadmium concentrations in ground water and arsenic concentrations in surface water. In more than 53% (4% and 100% for the lower and upper boundaries of the 95% confidence interval (CI)) of the total area of Pakistan, the drinking water was predicted to be at risk of contamination from arsenic, chromium, iron, nickel and lead. The area with elevated risks is inhabited by more than 74 million (8 and 172 million for the lower and upper boundaries of the 95% CI) people. Although these predictions require further validation by field monitoring, the results can inform disease mitigation and water resources management regarding potential hot spots. Copyright © 2015 Elsevier B.V. All rights reserved.
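
The core GWR building block is a weighted least-squares fit with a spatial kernel, so that regression coefficients vary with location; a minimal sketch on synthetic data (the bandwidth and Gaussian kernel are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic districts: coordinates, one predictor, and a response whose
# slope varies smoothly across space (the situation GWR is designed for)
coords = rng.uniform(0.0, 10.0, size=(200, 2))
x = rng.normal(0.0, 1.0, size=200)
local_slope = 1.0 + 0.3 * coords[:, 0]          # slope drifts west-to-east
y = local_slope * x + rng.normal(0.0, 0.1, 200)

def gwr_coef(at, coords, x, y, bandwidth=1.5):
    """Weighted least squares for y ~ b0 + b1*x at location `at`,
    with Gaussian spatial weights (a minimal GWR building block)."""
    dist = np.linalg.norm(coords - at, axis=1)
    w = np.exp(-0.5 * (dist / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

b_west = gwr_coef(np.array([1.0, 5.0]), coords, x, y)
b_east = gwr_coef(np.array([9.0, 5.0]), coords, x, y)
assert b_east[1] > b_west[1]     # recovered slope increases eastward
assert abs(b_west[1] - 1.3) < 0.6
```

Evaluating such local fits on a prediction grid yields the spatially varying concentration surfaces that the risk maps are built from.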

  19. Comparisons of thermospheric density data sets and models

    NASA Astrophysics Data System (ADS)

    Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin

    During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models, for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command, calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling applied in the conversion from drag to density measurements, which are main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels, and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.

  20. Determination of optical properties in heterogeneous turbid media using a cylindrical diffusing fiber

    NASA Astrophysics Data System (ADS)

    Dimofte, Andreea; Finlay, Jarod C.; Liang, Xing; Zhu, Timothy C.

    2012-10-01

    For interstitial photodynamic therapy (PDT), cylindrical diffusing fibers (CDFs) are often used to deliver light. This study examines the feasibility and accuracy of using CDFs to characterize the absorption (μa) and reduced scattering (μs′) coefficients of heterogeneous turbid media. Measurements were performed in tissue-simulating phantoms with μa between 0.1 and 1 cm-1 and μs′ between 3 and 10 cm-1 with CDFs 2 to 4 cm in length. Optical properties were determined by fitting the measured light fluence rate profiles at a fixed distance from the CDF axis using a heterogeneous kernel model in which the cylindrical diffusing fiber is treated as a series of point sources. The resulting optical properties were compared with independent measurements using a point-source method. In a homogeneous medium, we are able to determine the absorption coefficient μa using a value of μs′ determined a priori (uniform fit) or μs′ obtained by fitting (variable fit), with standard (maximum) deviations of 6% (18%) and 18% (44%), respectively. However, the CDF method is found to be insensitive to variations in μs′, thus requiring a complementary method, such as using a point source, for determination of μs′. The error for determining μa decreases in very heterogeneous turbid media because of the local absorption extremes. The data acquisition time for obtaining the one-dimensional optical properties distribution is less than 8 s. This method can result in dramatically improved accuracy of light fluence rate calculation for CDFs for prostate PDT in vivo when the same model and geometry are used for forward calculations using the extrapolated tissue optical properties.
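
The kernel model described above, a CDF treated as a line of diffusion-theory point sources, can be sketched as follows; the fit recovers μa from a noiseless simulated fluence profile by a simple grid search (the values of μs′, fiber length, and measurement geometry are illustrative):

```python
import numpy as np

MU_S_PRIME = 8.0   # cm^-1, reduced scattering assumed known a priori

def fluence(z_obs, h, mu_a, length=3.0, n_src=60, power=1.0):
    """Diffusion-approximation fluence at radial distance h from a CDF,
    modeled as a line of point sources (the kernel idea in the abstract)."""
    z_src = np.linspace(0.0, length, n_src)
    s = power / n_src                                 # power per point source
    D = 1.0 / (3.0 * (mu_a + MU_S_PRIME))             # diffusion coefficient
    mu_eff = np.sqrt(mu_a / D)                        # effective attenuation
    r = np.hypot(z_obs[:, None] - z_src[None, :], h)
    return np.sum(s * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r), axis=1)

# Simulated measurement at 0.5 cm from the fiber axis with mu_a = 0.4
z = np.linspace(-1.0, 4.0, 80)
measured = fluence(z, 0.5, 0.4)

# Recover mu_a by minimizing the log-profile misfit over a grid
grid = np.linspace(0.1, 1.0, 91)
errs = [np.sum((np.log(fluence(z, 0.5, m)) - np.log(measured)) ** 2)
        for m in grid]
mu_a_fit = grid[int(np.argmin(errs))]
assert abs(mu_a_fit - 0.4) < 0.02
```

The profile shape is dominated by μ_eff, which depends only weakly on μs′ when μs′ ≫ μa; that is consistent with the abstract's finding that the CDF method is insensitive to μs′.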

  1. Medial prefrontal cortex supports source memory accuracy for self-referenced items

    PubMed Central

    Leshikar, Eric D.; Duarte, Audrey

    2013-01-01

    Previous behavioral work suggests that processing information in relation to the self enhances subsequent item recognition. Neuroimaging evidence further suggests that regions along the cortical midline, particularly those of the medial prefrontal cortex, underlie this benefit. There has been little work to date, however, on the effects of self-referential encoding on source memory accuracy or whether the medial prefrontal cortex might contribute to source memory for self-referenced materials. In the current study, we used fMRI to measure neural activity while participants studied and subsequently retrieved pictures of common objects superimposed on one of two background scenes (sources) under either self-reference or self-external encoding instructions. Both item recognition and source recognition were better for objects encoded self-referentially than self-externally. Neural activity predictive of source accuracy was observed in the medial prefrontal cortex (BA 10) at the time of study for self-referentially but not self-externally encoded objects. The results of this experiment suggest that processing information in relation to the self leads to a mnemonic benefit for source level features, and that activity in the medial prefrontal cortex contributes to this source memory benefit. This evidence expands the purported role that the medial prefrontal cortex plays in self-referencing. PMID:21936739

  2. Recollection can be Weak and Familiarity can be Strong

    PubMed Central

    Ingram, Katherine M.; Mickes, Laura; Wixted, John T.

    2012-01-01

    The Remember/Know procedure is widely used to investigate recollection and familiarity in recognition memory, but almost all of the results obtained using that procedure can be readily accommodated by a unidimensional model based on signal-detection theory. The unidimensional model holds that Remember judgments reflect strong memories (associated with high confidence, high accuracy, and fast reaction times), whereas Know judgments reflect weaker memories (associated with lower confidence, lower accuracy, and slower reaction times). Although this is invariably true on average, a new two-dimensional account (the Continuous Dual-Process model) suggests that Remember judgments made with low confidence should be associated with lower old/new accuracy, but higher source accuracy, than Know judgments made with high confidence. We tested this prediction – and found evidence to support it – using a modified Remember/Know procedure in which participants were first asked to indicate a degree of recollection-based or familiarity-based confidence for each word presented on a recognition test and were then asked to recollect the color (red or blue) and screen location (top or bottom) associated with the word at study. For familiarity-based decisions, old/new accuracy increased with old/new confidence, but source accuracy did not (suggesting that stronger old/new memory was supported by higher degrees of familiarity). For recollection-based decisions, both old/new accuracy and source accuracy increased with old/new confidence (suggesting that stronger old/new memory was supported by higher degrees of recollection). These findings suggest that recollection and familiarity are continuous processes and that participants can indicate which process mainly contributed to their recognition decisions. PMID:21967320

  3. Forest carbon in lowland Papua New Guinea: Local variation and the importance of small trees

    PubMed Central

    Vincent, John B; Henning, Bridget; Saulei, Simon; Sosanika, Gibson; Weiblen, George D

    2015-01-01

    Efforts to incentivize the reduction of carbon emissions from deforestation and forest degradation require accurate carbon accounting. The extensive tropical forest of Papua New Guinea (PNG) is a target for such efforts and yet local carbon estimates are few. Previous estimates, based on models of neotropical vegetation applied to PNG forest plots, did not consider such factors as the unique species composition of New Guinea vegetation, local variation in forest biomass, or the contribution of small trees. We analysed all trees >1 cm in diameter at breast height (DBH) in Melanesia's largest forest plot (Wanang) to assess local spatial variation and the role of small trees in carbon storage. Above-ground living biomass (AGLB) of trees averaged 210.72 Mg ha−1 at Wanang. Carbon storage at Wanang was somewhat lower than in other lowland tropical forests, whereas local variation among 1-ha subplots and the contribution of small trees to total AGLB were substantially higher. We speculate that these differences may be attributed to the dynamics of Wanang forest, where erosion of a recently uplifted and unstable terrain appears to be a major source of natural disturbance. These findings emphasize the need for locally calibrated forest carbon estimates if accurate landscape-level valuation and monetization of carbon is to be achieved. Such estimates aim to situate PNG forests in the global carbon context and provide baseline information needed to improve the accuracy of PNG carbon monitoring schemes. PMID:26074730

  4. A Piecewise Local Partial Least Squares (PLS) Method for the Quantitative Analysis of Plutonium Nitrate Solutions

    DOE PAGES

    Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.

    2017-10-05

    Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems in which the analysis of chemically complicated samples can be aided by rational division of the overall range of solution conditions into simpler sub-regions.
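
    The piecewise-local idea can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: ordinary least squares stands in for the PLS regression step, a single scalar "acidity" value stands in for the full assessment of solution conditions, and all function names are hypothetical.

```python
import numpy as np

def fit_local_models(spectra, conc, acidity, edges):
    """Fit one linear calibration model per acidity sub-range.

    `edges` are the acidity values that divide the calibration range into
    simpler sub-regions; the calibration data must cover every sub-range.
    """
    models = []
    bins = np.digitize(acidity, edges)
    for b in range(len(edges) + 1):
        mask = bins == b
        X = np.c_[spectra[mask], np.ones(mask.sum())]  # add intercept column
        coef, *_ = np.linalg.lstsq(X, conc[mask], rcond=None)
        models.append(coef)
    return models

def predict(models, spectrum, acidity, edges):
    """Route a new sample to its local model, then apply that model."""
    b = int(np.digitize([acidity], edges)[0])
    x = np.append(spectrum, 1.0)
    return float(x @ models[b])
```

    Because each local model only has to explain the spectral variation present in its own sub-region, it carries fewer effective parameters than a single global model spanning all conditions, which is the parsimony argument made in the abstract.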

  6. Assessment of local GNSS baselines at co-location sites

    NASA Astrophysics Data System (ADS)

    Herrera Pinzón, Iván; Rothacher, Markus

    2018-01-01

    As one of the major contributors to the realisation of the International Terrestrial Reference System (ITRS), the Global Navigation Satellite Systems (GNSS) are prone to suffer from irregularities and discontinuities in their time series. While often associated with hardware/software changes and the influence of the local environment, these discrepancies constitute a major threat to ITRS realisations. Co-located GNSS at fundamental sites, with two or more available instruments, provide the opportunity to mitigate their influence while improving the accuracy of estimated positions, by examining data breaks, local biases, deformations, time-dependent variations and the comparison of GNSS baselines with existing local tie measurements. Using co-located GNSS data from a subset of sites of the International GNSS Service network, this paper discusses a global multi-year analysis with the aim of delivering homogeneous time series of coordinates to analyse system-specific error sources in the local baselines. Results based on the comparison of different GNSS-based solutions with the local survey ties show discrepancies of up to 10 mm, despite GNSS coordinate repeatabilities at the sub-mm level. The discrepancies are especially large for solutions that use the ionosphere-free linear combination and estimate tropospheric zenith delays, thus corresponding to the processing strategy used for global solutions. Snow on the antennas causes further problems and seasonal variations of the station coordinates. These findings demonstrate the need for permanent high-quality monitoring of the effects present in the short GNSS baselines at fundamental sites.

  7. An object-based image analysis approach for aquaculture ponds precise mapping and monitoring: a case study of Tam Giang-Cau Hai Lagoon, Vietnam.

    PubMed

    Virdis, Salvatore Gonario Pasquale

    2014-01-01

    Monitoring and mapping shrimp farms, including their impact on land cover and land use, is critical to the sustainable management and planning of coastal zones. In this work, a methodology was proposed to set up a cost-effective and reproducible procedure that made use of satellite remote sensing, an object-based classification approach, and open-source software for mapping aquaculture areas with high planimetric and thematic accuracy between 2005 and 2008. The analysis focused on two characteristic areas of interest in the Tam Giang-Cau Hai Lagoon (central Vietnam), which have farming systems similar to other coastal aquaculture worldwide: the first was primarily characterised by locally termed "low tide" shrimp ponds, which are partially submerged areas; the second by earthed shrimp ponds, locally referred to as "high tide" ponds, which are non-submerged areas on the lagoon coast. The approach was based on region-growing segmentation of high- and very high-resolution panchromatic images, SPOT5 and Worldview-1, and the unsupervised clustering classifier ISOSEG embedded in the non-commercial SPRING software. The results, the accuracy of which was tested against a field-based aquaculture inventory, showed that in favourable situations (high tide shrimp ponds), a fully automatic object-based classification provided high rates of accuracy (>95 %). In unfavourable situations (low tide shrimp ponds), performance degraded due to the low contrast between the water and the pond embankments. In these situations, the automatic results were improved by manual delineation of the embankments. As expected, Worldview-1 showed better thematic accuracy, and precise maps have been realised at scales of up to 1:2,000. However, SPOT5 provided comparable results in terms of the number of correctly classified ponds, but less accurate results in terms of the precision of mapped features.
The procedure also demonstrated high degrees of reproducibility because it was applied to images with different spatial resolutions in an area that, during the investigated period, did not experience significant land cover changes.

  8. Nuclear Forensics Attributing the Source of Spent Fuel Used in an RDD Event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Mark Robert

    2005-05-01

    An RDD attack against the U.S. is a threat that America needs to prepare against. If such an event occurs, the ability to quickly identify the source of the radiological material used in the RDD would aid investigators in identifying the perpetrators. Spent fuel is one of the most dangerous possible radiological sources for an RDD. In this work, a forensics methodology was developed and implemented to attribute spent fuel to a source reactor. The specific attributes determined are the spent fuel burnup, age from discharge, reactor type, and initial fuel enrichment. It is shown that by analyzing the post-event material, these attributes can be determined with enough accuracy to be useful for investigators. The burnup can be found to within 5%, the enrichment to within 2%, and the age to within 10%. Reactor type can be determined if specific nuclides are measured. The methodology developed was implemented in a code called NEMASYS. NEMASYS is easy to use, and it takes a minimum amount of time to learn its basic functions. It processes data within a few minutes and provides detailed information about the results and conclusions.

  9. CGDSNPdb: a database resource for error-checked and imputed mouse SNPs.

    PubMed

    Hutchins, Lucie N; Ding, Yueming; Szatkiewicz, Jin P; Von Smith, Randy; Yang, Hyuna; de Villena, Fernando Pardo-Manuel; Churchill, Gary A; Graber, Joel H

    2010-07-06

    The Center for Genome Dynamics Single Nucleotide Polymorphism Database (CGDSNPdb) is an open-source value-added database with more than nine million mouse single nucleotide polymorphisms (SNPs), drawn from multiple sources, with genotypes assigned to multiple inbred strains of laboratory mice. All SNPs are checked for accuracy and annotated for properties specific to the SNP as well as those implied by changes to overlapping protein-coding genes. CGDSNPdb serves as the primary interface to two unique data sets, the 'imputed genotype resource' in which a Hidden Markov Model was used to assess local haplotypes and the most probable base assignment at several million genomic loci in tens of strains of mice, and the Affymetrix Mouse Diversity Genotyping Array, a high density microarray with over 600,000 SNPs and over 900,000 invariant genomic probes. CGDSNPdb is accessible online through either a web-based query tool or a MySQL public login. Database URL: http://cgd.jax.org/cgdsnpdb/

  10. Accuracy improvement in the TDR-based localization of water leaks

    NASA Astrophysics Data System (ADS)

    Cataldo, Andrea; De Benedetto, Egidio; Cannazza, Giuseppe; Monti, Giuseppina; Demitri, Christian

    A time domain reflectometry (TDR)-based system for the localization of water leaks has recently been developed by the authors. This system, which employs wire-like sensing elements installed along underground pipes, has proven immune to the limitations that affect traditional acoustic leak-detection systems. Building on the positive results obtained thus far, in this work an improvement of the TDR-based system is proposed. More specifically, the possibility of placing a low-cost, water-absorbing sponge around the sensing element to enhance the accuracy of leak localization is addressed. To this end, laboratory experiments were carried out mimicking a water leakage condition, and two sensing elements (one embedded in a sponge and one without a sponge) were used comparatively to identify the position of the leak through TDR measurements. Results showed that, thanks to the water retention capability of the sponge (which keeps the leaked water more localized), the sensing element embedded in the sponge leads to higher accuracy in evaluating the position of the leak.
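
    Underlying any TDR localization scheme is a simple distance calculation: the position of the impedance discontinuity created by the wetted section follows from the round-trip delay of the reflected pulse and the propagation velocity along the sensing element. A minimal sketch with an assumed, cable-dependent velocity factor (not a value from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def leak_distance(round_trip_delay_s, velocity_factor):
    """Distance along the sensing element to the reflection point.

    The pulse travels out and back, so the one-way distance is half the
    round-trip travel at v = velocity_factor * c.
    """
    v = velocity_factor * C
    return v * round_trip_delay_s / 2.0
```

    For example, a 100 ns round-trip delay on an element with velocity factor 0.66 places the reflection roughly 9.9 m from the instrument.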

  11. Alerts of forest disturbance from MODIS imagery

    NASA Astrophysics Data System (ADS)

    Hammer, Dan; Kraft, Robin; Wheeler, David

    2014-12-01

    This paper reports the methodology and computational strategy for a forest cover disturbance alerting system. Analytical techniques from time series econometrics are applied to imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to detect temporal instability in vegetation indices. The characteristics of each MODIS pixel's spectral history are extracted and compared against historical data on forest cover loss to develop a geographically localized classification rule that can be applied across the humid tropical biome. The final output is a probability of forest disturbance for each 500 m pixel that is updated every 16 days. The primary objective is to provide high-confidence alerts of forest disturbance, while minimizing false positives. We find that the alerts serve this purpose exceedingly well in Pará, Brazil, with high-probability alerts garnering a user accuracy of 98 percent over the training period and 93 percent after the training period (2000-2005) when compared against the PRODES deforestation data set, which is used to assess spatial accuracy. Implemented in Clojure and Java on the Hadoop distributed data processing platform, the algorithm is a fast, automated, and open-source system for detecting forest disturbance. It is intended to be used in conjunction with higher-resolution imagery and data products that cannot be updated as quickly as MODIS-based products. By highlighting hotspots of change, the algorithm and its output can focus high-resolution data acquisition and aid in the enforcement of local forest conservation efforts.

  12. A technique for treating local breast cancer using a single set-up point and asymmetric collimation.

    PubMed

    Rosenow, U F; Valentine, E S; Davis, L W

    1990-07-01

    Using both pairs of asymmetric jaws of a linear accelerator, local-regional breast cancer may be treated from a single set-up point. This point is placed at the abutment of the supraclavicular fields with the medial and lateral tangential fields. Positioning the jaws to create a half-beam superiorly permits treatment of the supraclavicular field. Positioning both jaws asymmetrically at midline to define a single beam in the inferoanterior quadrant permits treatment of the breast from medial and lateral tangents. The highest possible matching accuracy between the supraclavicular and tangential fields is inherently provided by this technique. For treatment of all fields at 100 cm source-to-axis distance (SAD), the lateral placement and depth of the set-up point may be determined by simulation and simple trigonometry. We elaborate on the clinical procedure. For the technologists, treatment of all fields from a single set-up point is simple and efficient. Since the tissue at the superior border of the tangential fields is generally firmer than in mid-breast, greater accuracy in day-to-day set-up is permitted. This technique eliminates the need for table angles even when only tangential fields are planned. Because of half-beam collimation, the tangential field length is limited to 20 cm. Means are suggested to overcome this limitation in the few cases where it occurs. Another modification is suggested for linear accelerators with only one independent pair of jaws.

  13. GIS based optimal impervious surface map generation using various spatial data for urban nonpoint source management.

    PubMed

    Lee, Cholyoung; Kim, Kyehyun; Lee, Hyuk

    2018-01-15

    Impervious surfaces are mainly artificial structures such as rooftops, roads, and parking lots that are covered by impenetrable materials. These surfaces are becoming the major causes of nonpoint source (NPS) pollution in urban areas. The rapid progress of urban development is increasing the total amount of impervious surfaces and NPS pollution. Therefore, many cities worldwide have adopted a stormwater utility fee (SUF) that generates funds needed to manage NPS pollution. The amount of SUF is estimated based on the impervious ratio, which is calculated by dividing the total impervious surface area by the net area of an individual land parcel. Hence, in order to identify the exact impervious ratio, large-scale impervious surface maps (ISMs) are necessary. This study proposes and assesses various methods for generating large-scale ISMs for urban areas by using existing GIS data. Bupyeong-gu, a district in the city of Incheon, South Korea, was selected as the study area. Spatial data that were freely offered by national/local governments in S. Korea were collected. First, three types of ISMs were generated by using the land-cover map, digital topographic map, and orthophotographs, to validate three methods that had been proposed conceptually by Korea Environment Corporation. Then, to generate an ISM of higher accuracy, an integration method using all data was proposed. Error matrices were made and Kappa statistics were calculated to evaluate the accuracy. Overlay analyses were performed to examine the distribution of misclassified areas. From the results, the integration method delivered the highest accuracy (Kappa statistic of 0.99) compared to the three methods that use a single type of spatial data. However, a longer production time and higher cost were limiting factors. Among the three methods using a single type of data, the land-cover map showed the highest accuracy with a Kappa statistic of 0.91. 
Thus, it was judged that the mapping method using the land-cover map is more appropriate than the others. In conclusion, it is desirable to apply the integration method when generating the ISM with the highest accuracy. However, if time and cost are constrained, it would be effective to primarily use the land-cover map. Copyright © 2017 Elsevier Ltd. All rights reserved.
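
    The Kappa statistics used throughout this accuracy assessment can be computed directly from an error (confusion) matrix. A minimal sketch, not tied to the study's data (the function name is ours):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square error (confusion) matrix.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (trace / total) and p_e the agreement expected by chance from the
    row and column marginals.
    """
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (po - pe) / (1.0 - pe)
```

    A matrix with 90% observed agreement and balanced marginals, for instance, yields kappa = 0.8, which is why kappa values (0.91, 0.99 above) run lower than raw percent agreement.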

  14. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  15. Comparison of seven protocols to identify fecal contamination sources using Escherichia coli

    USGS Publications Warehouse

    Stoeckel, D.M.; Mathes, M.V.; Hyer, K.E.; Hagedorn, C.; Kator, H.; Lukasik, J.; O'Brien, T. L.; Fenger, T.W.; Samadpour, M.; Strickler, K.M.; Wiggins, B.A.

    2004-01-01

    Microbial source tracking (MST) uses various approaches to classify fecal-indicator microorganisms to source hosts. Reproducibility, accuracy, and robustness of seven phenotypic and genotypic MST protocols were evaluated by use of Escherichia coli from an eight-host library of known-source isolates and a separate, blinded challenge library. In reproducibility tests, measuring each protocol's ability to reclassify blinded replicates, only one (pulsed-field gel electrophoresis; PFGE) correctly classified all test replicates to host species; three protocols classified 48-62% correctly, and the remaining three classified fewer than 25% correctly. In accuracy tests, measuring each protocol's ability to correctly classify new isolates, ribotyping with EcoRI and PvuII approached 100% correct classification but only 6% of isolates were classified; four of the other six protocols (antibiotic resistance analysis, PFGE, and two repetitive-element PCR protocols) achieved better than random accuracy rates when 30-100% of challenge isolates were classified. In robustness tests, measuring each protocol's ability to recognize isolates from nonlibrary hosts, three protocols correctly classified 33-100% of isolates as "unknown origin," whereas four protocols classified all isolates to a source category. A relevance test, summarizing interpretations for a hypothetical water sample containing 30 challenge isolates, indicated that false-positive classifications would hinder interpretations for most protocols. Study results indicate that more representation in known-source libraries and better classification accuracy would be needed before field application. Thorough reliability assessment of classification results is crucial before and during application of MST protocols.

  16. A novel method for transient detection in high-cadence optical surveys. Its application for a systematic search for novae in M 31

    NASA Astrophysics Data System (ADS)

    Soraisam, Monika D.; Gilfanov, Marat; Kupfer, Thomas; Masci, Frank; Shafter, Allen W.; Prince, Thomas A.; Kulkarni, Shrinivas R.; Ofek, Eran O.; Bellm, Eric

    2017-03-01

    Context. In the present era of large-scale surveys in the time domain, the processing of data, from procurement up to the detection of sources, is generally automated. One of the main challenges in the astrophysical analysis of their output is contamination by artifacts, especially in regions of high surface brightness of unresolved emission. Aims: We present a novel method for identifying candidate variable and transient sources in the outputs of optical time-domain survey data pipelines. We use the method to conduct a systematic search for novae in the intermediate Palomar Transient Factory (iPTF) observations of the bulge part of M 31 during the second half of 2013. Methods: We demonstrate that a significant fraction of the artifacts produced by the iPTF pipeline form a locally uniform background of false detections approximately obeying Poissonian statistics, whereas genuine variable and transient sources, as well as artifacts associated with bright stars, result in clusters of detections whose spread is determined by the source localization accuracy. This makes the problem analogous to source detection in images produced by grazing-incidence X-ray telescopes, enabling one to utilize the arsenal of powerful tools developed in X-ray astronomy. In particular, we use a wavelet-based source detection algorithm from the Chandra data analysis package CIAO. Results: Starting from 2.5 × 10⁵ raw detections made by the iPTF data pipeline, we obtain approximately 4000 unique source candidates. Cross-matching these candidates with the source catalog of a deep reference image of the same field, we find counterparts for 90% of the candidates. These sources are either artifacts due to imperfect PSF matching or genuine variable sources. The remaining approximately 400 detections are transient sources. We identify novae among these candidates by applying selection cuts to their light curves based on the expected properties of novae. 
Thus, we recovered all 12 known novae (not counting one that erupted toward the end of the survey) registered during the time span of the survey and discovered three nova candidates. Our method is generic and can be applied to mining any target out of the artifacts in optical time-domain data. As it is fully automated, its incompleteness can be accurately computed and corrected for.
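
    The statistical idea, that genuine sources form clusters of detections which are implausible under a locally uniform Poissonian background of artifacts, can be illustrated with a bare-bones significance test. This standalone sketch only conveys the underlying Poisson reasoning; the study itself uses a wavelet-based detection algorithm from CIAO, and the names and threshold below are ours.

```python
import math

def poisson_sf(k, mu):
    """P(N >= k) for N ~ Poisson(mu), by summing the complementary terms."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i)
                     for i in range(k))

def is_source_candidate(n_detections, mean_false_rate, alpha=1e-6):
    """Flag a cell whose detection count is implausible under the
    locally uniform Poissonian background of false detections."""
    return poisson_sf(n_detections, mean_false_rate) < alpha
```

    With a mean false-detection rate of 0.5 per cell, a single detection is entirely consistent with the background, while twenty co-located detections are overwhelmingly significant, which is how clustered genuine sources separate from the artifact floor.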

  17. Physics-aspects of dose accuracy in high dose rate (HDR) brachytherapy: source dosimetry, treatment planning, equipment performance and in vivo verification techniques

    PubMed Central

    Palmer, Antony; Bradley, David; Nisbet, Andrew

    2012-01-01

    This study provides a review of recent publications on the physics-aspects of dosimetric accuracy in high dose rate (HDR) brachytherapy. The discussion of accuracy is primarily concerned with uncertainties, but methods to improve dose conformation to the prescribed intended dose distribution are also noted. The main aim of the paper is to review current practical techniques and methods employed for HDR brachytherapy dosimetry. This includes work on the determination of dose rate fields around brachytherapy sources, the capability of treatment planning systems, the performance of treatment units and methods to verify dose delivery. This work highlights the determinants of accuracy in HDR dosimetry and treatment delivery and presents a selection of papers, focusing on articles from the last five years, to reflect active areas of research and development. Apart from Monte Carlo modelling of source dosimetry, there is no clear consensus on the optimum techniques to be used to assure dosimetric accuracy through all the processes involved in HDR brachytherapy treatment. With the exception of the ESTRO mailed dosimetry service, there is little dosimetric audit activity reported in the literature, when compared with external beam radiotherapy verification. PMID:23349649

  19. A Spiking Neural Network Model of the Medial Superior Olive Using Spike Timing Dependent Plasticity for Sound Localization

    PubMed Central

    Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.

    2010-01-01

    Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between sound signals received by the left and right ear. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained using the Spike Timing Dependent Plasticity learning rule on experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it is demonstrated how software-based simulations of the model incur significant computation times. The paper thus also addresses a preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
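
    The Interaural Time Difference cue that the MSO model exploits can be sketched outside any neural framework: estimate the lag between the two ear signals by cross-correlation, then invert a simple spherical-head model to get an azimuth. The head width and sound speed below are assumed round numbers, not values from the paper.

```python
import math
import numpy as np

def estimate_itd(left, right, fs):
    """ITD in seconds from the cross-correlation peak. A positive ITD
    means the left-ear signal lags, i.e. the source is toward the right."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs

def itd_to_azimuth(itd, head_width=0.18, c=343.0):
    """Invert the simple model ITD = (head_width / c) * sin(azimuth);
    returns the azimuth estimate in degrees."""
    s = max(-1.0, min(1.0, itd * c / head_width))
    return math.degrees(math.asin(s))
```

    A 10-sample lag at 48 kHz (about 208 µs) corresponds, under these assumed parameters, to an azimuth of roughly 23°.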

  20. Development of an Acoustic Localization Method for Cavitation Experiments in Reverberant Environments

    NASA Astrophysics Data System (ADS)

    Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian

    2011-11-01

    Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for designs that minimize this undesired phenomenon. Cavitation onset is more accurately determined acoustically than visually. However, if other parts of the model begin to cavitate prior to the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims to develop an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located using acoustic data collected with hydrophones and analyzed with signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with a high-speed camera. Once refined, the scheme will be tested in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).

  1. The acoustical bright spot and mislocalization of tones by human listeners.

    PubMed

    Macaulay, Eric J; Hartmann, William M; Rakerd, Brad

    2010-03-01

    Listeners attempted to localize 1500-Hz sine tones presented in free field from a loudspeaker array, spanning azimuths from 0 degrees (straight ahead) to 90 degrees (extreme right). During this task, the tone levels and phases were measured in the listeners' ear canals. Because of the acoustical bright spot, measured interaural level differences (ILD) were non-monotonic functions of azimuth with a maximum near 55 degrees. In a source-identification task, listeners' localization decisions closely tracked the non-monotonic ILD, and thus became inaccurate at large azimuths. When listeners received training and feedback, their accuracy improved only slightly. In an azimuth-discrimination task, listeners decided whether a first sound was to the left or to the right of a second. The discrimination results also reflected the confusion caused by the non-monotonic ILD, and they could be predicted approximately by a listener's identification results. When the sine tones were amplitude modulated or replaced by narrow bands of noise, interaural time difference (ITD) cues greatly reduced the confusion for most listeners, but not for all. Recognizing the important role of the bright spot requires a reevaluation of the transition between the low-frequency region for localization (mainly ITD) and the high-frequency region (mainly ILD).

  2. Hippocampal activity during recognition memory co-varies with the accuracy and confidence of source memory judgments.

    PubMed

    Yu, Sarah S; Johnson, Jeffrey D; Rugg, Michael D

    2012-06-01

    It has been proposed that the hippocampus selectively supports retrieval of contextual associations, but an alternative view holds that the hippocampus supports strong memories regardless of whether they contain contextual information. We employed a memory test that combined the 'Remember/Know' and source memory procedures, which allowed test items to be segregated both by memory strength (recognition accuracy) and, separately, by the quality of the contextual information that could be retrieved (indexed by the accuracy/confidence of a source memory judgment). As measured by fMRI, retrieval-related hippocampal activity tracked the quality of retrieved contextual information and not memory strength. These findings are consistent with the proposal that the hippocampus supports contextual recollection rather than recognition memory more generally. Copyright © 2011 Wiley Periodicals, Inc.

  3. Testing the accuracy of growth and yield models for southern hardwood forests

    Treesearch

    H. Michael Rauscher; Michael J. Young; Charles D. Webb; Daniel J. Robison

    2000-01-01

    The accuracy of ten growth and yield models for Southern Appalachian upland hardwood forests and southern bottomland forests was evaluated. In technical applications, accuracy is the composite of both bias (average error) and precision. Results indicate that GHAT, NATPIS, and a locally calibrated version of NETWIGS may be regarded as being operationally valid...

  4. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple training iterations are performed until the segmentation accuracy reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.

  5. Accuracy and Availability of Egnos - Results of Observations

    NASA Astrophysics Data System (ADS)

    Felski, Andrzej; Nowak, Aleksander; Woźniak, Tomasz

    2011-01-01

    According to the SBAS concept, the user should receive timely and correct information about system integrity, together with corrections to the pseudorange measurements, which leads to better coordinate accuracy. In theory the whole system is permanently monitored by RIMS stations, so faulty information should not reach the user. The quality of the system is guaranteed within the borders of the system coverage; however, lower accuracy and availability are still observed in the eastern part of Poland. This motivated an observation campaign and analysis of the real accuracy and availability of the EGNOS service, in the context of supporting air operations at local airports and supplementing hydrographic operations in the Polish Exclusive Zone. Registrations were conducted at three PANSA stations located at the airports in Warsaw, Krakow and Rzeszow, and at a PNA station in Gdynia. Measurements at the PANSA stations were recorded continuously for each whole month up to the end of September 2011. These stations are based on Septentrio PolaRx2e receivers and are part of the EGNOS Data Collection Network operated by EUROCONTROL. The advantage of these registrations is the uniformity of the receivers. In addition, measurements in Gdynia were made with different receivers, mainly dedicated to sea navigation: CSI Wireless 1, NOVATEL OEMV, Sperry Navistar, Crescent V-100 and R110, as well as Magellan FX420. The main objects of analysis were the accuracy and availability of the EGNOS service at each point and for the different receivers. Accuracy was analyzed separately for each coordinate. Finally, the temporal and spatial correlations of the coordinates, and their availability and accuracy, were investigated. The findings show that the present accuracy of the EGNOS service is about 1.5 m (95%), but its availability is questionable. The accuracy of the present EGNOS service meets APV I and even APV II requirements, as well as any maritime and hydrographic needs. However, introducing this service into practice demands better availability, because gaps in receiving the proper information from the system currently appear too often and last too long. Additionally, availability showed a very random character, with no correlation of this parameter between the different observation points. Despite correct EGNOS operation, the accuracy of the coordinates is not predictable under local conditions. In the authors' opinion, Local Airport Monitoring should therefore be deployed if EGNOS is to serve local airport operations.

  6. A Mobile Anchor Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

    Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints of cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors, equipped with GPS units, moving among unknown nodes and periodically broadcasting their current locations to help nearby unknown nodes with localization. This paper proposes a mobile anchor assisted localization algorithm based on regular hexagon (MAALRH) in two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by trilateration. We compare the MAALRH with the HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH can achieve high localization ratio and localization accuracy when the communication range is no smaller than the trajectory resolution. PMID:25133212
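    The trilateration step performed by the unknown nodes can be sketched as a linear least-squares solve (a generic textbook formulation with made-up anchor coordinates, not the MAALRH implementation itself):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares trilateration: subtracting the first range equation
    from the others linearizes |p - x_i|^2 = d_i^2 into A p = b."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Example: three anchor broadcast positions and ranges to a node at (2, 3).
anchors = [[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]]
true_pos = np.array([2.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
est = trilaterate(anchors, dists)
```

    With noisy ranges the same least-squares formulation still applies; the anchor trajectory design then determines how well-conditioned the matrix A is at each unknown node.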

  7. Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization.

    PubMed

    Du, Xinxin; Tan, Kok Kiong

    2016-05-01

    Vehicle lane-level localization is a fundamental technology in autonomous driving. To achieve accurate and consistent performance, a common approach is to use LIDAR technology. However, it is expensive and computationally demanding, and thus not a practical solution in many situations. This paper proposes a stereovision system, which is of low cost, yet also able to achieve high accuracy and consistency. It integrates a new lane line detection algorithm with other lane marking detectors to effectively identify the correct lane line markings. It also fits multiple road models to improve accuracy. An effective stereo 3D reconstruction method is proposed to estimate vehicle localization. The estimation consistency is further guaranteed by a new particle filter framework, which takes vehicle dynamics into account. Experimental results based on image sequences taken under different visual conditions showed that the proposed system can identify the lane line markings with 98.6% accuracy. The maximum estimation error of the vehicle distance to lane lines is 16 cm in daytime and 26 cm at night, and the maximum estimation error of its moving direction with respect to the road tangent is 0.06 rad in daytime and 0.12 rad at night. Due to its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles as a practical solution to vehicle lane-level localization.

  8. An in vitro verification of strength estimation for moving an 125I source during implantation in brachytherapy.

    PubMed

    Tanaka, Kenichi; Kajimoto, Tsuyoshi; Hayashi, Takahiro; Asanuma, Osamu; Hori, Masakazu; Kamo, Ken-Ichi; Sumida, Iori; Takahashi, Yutaka; Tateoka, Kunihiko; Bengua, Gerard; Sakata, Koh-Ichi; Endo, Satoru

    2018-04-11

    This study aims to demonstrate the feasibility of a method for estimating the strength of a moving brachytherapy source during implantation in a patient. Experiments were performed under the same conditions as in the actual treatment, except that the source was not implanted into a patient. The brachytherapy source selected for this study was 125I with an air kerma strength of 0.332 U (μGy·m²·h⁻¹), and the detector used was a plastic scintillator with dimensions of 10 cm × 5 cm × 5 cm. A calibration factor to convert the counting rate of the detector to the source strength was measured, and then the accuracy of the proposed method was investigated for a manually driven source. The accuracy was found to be within 10% when the shielding effect of additional needles for implantation at other positions was corrected, and about 30% when it was not. Even without shielding correction, the proposed method can detect a dead or dropped source, implantation of a source with the wrong strength, and a mistake in the number of sources implanted. Furthermore, when the correction was applied, the achieved accuracy approached the 7% level required to identify an Oncoseed 6711 125I seed with unintended strength among the commercially supplied values of 0.392, 0.462 and 0.533 U.
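    The core conversion, counting rate times a measured calibration factor with an optional needle-shielding correction, can be sketched as follows (the counting rate and transmission factor below are invented for illustration; only the 0.332 U source strength comes from the abstract):

```python
def estimate_strength(count_rate, calib_factor, shielding_transmission=1.0):
    """Convert a detector counting rate (counts/s) to air-kerma strength (U),
    optionally correcting for attenuation by additional implant needles.
    The numeric inputs used below are illustrative, not from the paper."""
    return count_rate * calib_factor / shielding_transmission

# Illustrative calibration: suppose the 0.332 U reference source yields
# 1500 counts/s, giving a factor of 0.332/1500 U per (count/s).
calib = 0.332 / 1500.0

# A source measured at 1400 counts/s behind needles transmitting 93%:
strength = estimate_strength(1400.0, calib, shielding_transmission=0.93)
```

    Without the transmission correction the same measurement would underestimate the strength, which mirrors the ~30% versus ~10% accuracy gap reported in the abstract.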

  9. A Multi-Scale Settlement Matching Algorithm Based on ARG

    NASA Astrophysics Data System (ADS)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then it ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. A demonstration is presented at the end of this article, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  10. Circular magnetoplasmonic modes in gold nanoparticles.

    PubMed

    Pineider, Francesco; Campo, Giulio; Bonanni, Valentina; Fernández, César de Julián; Mattei, Giovanni; Caneschi, Andrea; Gatteschi, Dante; Sangregorio, Claudio

    2013-10-09

    The quest for efficient ways of modulating localized surface plasmon resonance is one of the frontiers in current research in plasmonics; the use of a magnetic field as a source of modulation is among the most promising candidates for active plasmonics. Here we report the observation of magnetoplasmonic modes on colloidal gold nanoparticles detected by means of magnetic circular dichroism (MCD) spectroscopy and provide a model that is able to rationalize and reproduce the experiment with unprecedented qualitative and quantitative accuracy. We believe that the steep slope observed at the plasmon resonance in the MCD spectrum can be very efficient in detecting changes in the refractive index of the surrounding medium, and we give a simple proof of principle of its possible implementation for magnetoplasmonic refractometric sensing.

  11. Local Dependence in an Operational CAT: Diagnosis and Implications

    ERIC Educational Resources Information Center

    Pommerich, Mary; Segall, Daniel O.

    2008-01-01

    The accuracy of CAT scores can be negatively affected by local dependence if the CAT utilizes parameters that are misspecified due to the presence of local dependence and/or fails to control for local dependence in responses during the administration stage. This article evaluates the existence and effect of local dependence in a test of…

  12. openPSTD: The open source pseudospectral time-domain method for acoustic propagation

    NASA Astrophysics Data System (ADS)

    Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis

    2016-06-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, yet remains efficient in processing time and memory usage because it allows spatial sampling close to the Nyquist criterion, keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modelled as a composition of rectangular two-dimensional subdomains, initially restricting the implementation to orthogonal, two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and leaves room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has itself been published under an open source license. An option has been included to accelerate the calculations by a partial implementation of the code on the Graphics Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
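    The key ingredient that lets PSTD sample close to the Nyquist criterion is the Fourier spectral evaluation of spatial derivatives, which is exact for band-limited periodic data. A minimal sketch of that operation (illustrative only, not the openPSTD code):

```python
import numpy as np

def spectral_derivative(u, dx):
    """Spatial derivative via FFT, the core operation of Fourier
    pseudospectral schemes: differentiate by multiplying each Fourier
    mode by i*k, then transform back."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Check against the analytic derivative of sin(x) on a coarse grid.
n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
du = spectral_derivative(np.sin(x), x[1] - x[0])  # should match cos(x)
```

    A finite-difference scheme of comparable accuracy would need far more points per wavelength, which is the efficiency argument made in the abstract.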

  13. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  14. Age-Related Positivity Effects and Autobiographical Memory Detail: Evidence from a Past/Future Source Memory Task

    PubMed Central

    Gallo, David A.; Korthauer, Laura E.; McDonough, Ian M.; Teshale, Salom; Johnson, Elizabeth L.

    2013-01-01

    This study investigated whether the age-related positivity effect strengthens specific event details in autobiographical memory. Participants retrieved past events or imagined future events in response to neutral or emotional cue words. Older adults rated each kind of event more positively than younger adults, demonstrating an age-related positivity effect. We next administered a source memory test. Participants were given the same cue words and tried to retrieve the previously generated event and its source (past or future). Accuracy on this source test should depend on the recollection of specific details about the earlier generated events, providing a more objective measure of those details than subjective ratings. We found that source accuracy was greater for positive than negative future events in both age groups, suggesting that positive future events were more detailed. In contrast, valence did not affect source accuracy for past events in either age group, suggesting that positive and negative past events were equally detailed. Although aging can bias people to focus on positive aspects of experience, this bias does not appear to strengthen the availability of details for positive relative to negative past events. PMID:21919591

  15. A multi-agency nutrient dataset used to estimate loads, improve monitoring design, and calibrate regional nutrient SPARROW models

    USGS Publications Warehouse

    Saad, David A.; Schwarz, Gregory E.; Robertson, Dale M.; Booth, Nathaniel

    2011-01-01

    Stream-loading information was compiled from federal, state, and local agencies, and selected universities as part of an effort to develop regional SPAtially Referenced Regressions On Watershed attributes (SPARROW) models to help describe the distribution, sources, and transport of nutrients in streams throughout much of the United States. After screening, 2,739 sites, sampled by 73 agencies, were identified as having suitable data for calculating the long-term mean annual nutrient loads required for SPARROW model calibration. These sites had a wide range in nutrient concentrations, loads, and yields, and environmental characteristics in their basins. An analysis of the accuracy of load estimates relative to site attributes indicated that accuracy in loads improves with increases in the number of observations, the proportion of uncensored data, and the variability in flow on observation days, whereas accuracy declines with increases in the root mean square error of the water-quality model, the flow-bias ratio, the number of days between samples, and the variability in daily streamflow for the prediction period, and if the load estimate has been detrended. Based on the compiled data, all areas of the country have seen recent declines in the number of sites with sufficient water-quality data to compute accurate annual loads and support regional modeling analyses. These declines were caused by decreases in the number of sites being sampled and by data not being entered in readily accessible databases.

  16. Characterisation of residual ionospheric errors in bending angles using GNSS RO end-to-end simulations

    NASA Astrophysics Data System (ADS)

    Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.

    2013-09-01

    Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, as well as high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affects the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate the effect of this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can still be significant, especially when large ionospheric disturbances occur and prevail, such as during periods of active space weather. In this study, the RIEs were investigated under different local time, propagation direction and solar activity conditions, and their effects on RO bending angles are characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs by comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.

  17. SRTM 3" comparison with local information: Two examples at national level in Peru

    NASA Astrophysics Data System (ADS)

    Plasencia Sánchez, Edson; Fernandez de Villarán, Ruben

    2012-06-01

    Access to the high-resolution digital terrain models (DEMs) generated from data collected by the Shuttle Radar Topography Mission (SRTM) of NASA is freely available to the public. Consequently they have become a source of topographic information of great value to scientists involved in geophysical or geodetic analysis. Despite the efforts of the Consultative Group on International Agricultural Research (CGIAR) to validate and complement the information contained in these DEMs (currently offered as version 4.1), they still need to be checked for accuracy in certain regions of the planet. In this paper, the vertical accuracy of the SRTM 3" version 4.1 DEM was analyzed in several areas of Peru using two sets of control points: the heights of the district capitals (the smallest political units) and the heights of the weather and hydrological stations of the National Meteorology and Hydrology Service (SENAMHI) of Peru. The comparison shows that the height differences are independent of the altitude, latitude and longitude of the evaluated points; they are instead related to the aspect of the terrain and to the way the SRTM data were acquired. The root mean square of the height differences at the national level was ±20 m for district capitals and ±25 m for the SENAMHI stations, slightly larger than the overall SRTM accuracy of ±16 m.
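    The vertical-accuracy measure used in such comparisons, the root-mean-square height difference between DEM samples and ground-control points, can be computed as follows (the heights below are invented toy values; the Peruvian data are not reproduced here):

```python
import math

def vertical_rmse(dem_heights, control_heights):
    """Root-mean-square of height differences between DEM samples and
    ground-control points, the usual DEM vertical-accuracy measure."""
    diffs = [d - c for d, c in zip(dem_heights, control_heights)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

# Toy control set (illustrative values only):
dem = [512.0, 1480.0, 230.0, 3405.0]
gcp = [500.0, 1495.0, 241.0, 3390.0]
rmse = vertical_rmse(dem, gcp)
```

    Splitting the control points into subsets (e.g. by terrain aspect) and recomputing the RMSE per subset is how systematic dependencies like those reported in the abstract can be detected.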

  18. Neural chronometry and coherency across speed-accuracy demands reveal lack of homomorphism between computational and neural mechanisms of evidence accumulation.

    PubMed

    Heitz, Richard P; Schall, Jeffrey D

    2013-10-19

    The stochastic accumulation framework provides a mechanistic, quantitative account of perceptual decision-making and how task performance changes with experimental manipulations. Importantly, it provides an elegant account of the speed-accuracy trade-off (SAT), which has long been the litmus test for decision models, and also mimics the activity of single neurons in several key respects. Recently, we developed a paradigm whereby macaque monkeys trade speed for accuracy on cue during a visual search task. Single-unit activity in the frontal eye field (FEF) was not homomorphic with the architecture of the models, demonstrating that stochastic accumulators are an incomplete description of neural activity under SAT. This paper summarizes and extends this work, further demonstrating that the SAT leads to extensive, widespread changes in brain activity never before predicted. We will begin by reviewing our recently published work that establishes how spiking activity in FEF accomplishes SAT. Next, we provide two important extensions of this work. First, we report a new chronometric analysis suggesting that increases in perceptual gain with speed stress are evident in FEF synaptic input, implicating afferent sensory-processing sources. Second, we report a new analysis demonstrating a selective influence of SAT on frequency coupling between FEF neurons and local field potentials. None of these observations correspond to the mechanics of current accumulator models.

  19. De novo peptide sequencing by deep learning

    PubMed Central

    Tran, Ngoc Hieu; Zhang, Xianglilan; Xin, Lei; Shan, Baozhen; Li, Ming

    2017-01-01

    De novo peptide sequencing from tandem MS data is the key technology in proteomics for the characterization of proteins, especially for new sequences, such as mAbs. In this study, we propose a deep neural network model, DeepNovo, for de novo peptide sequencing. DeepNovo architecture combines recent advances in convolutional neural networks and recurrent neural networks to learn features of tandem mass spectra, fragment ions, and sequence patterns of peptides. The networks are further integrated with local dynamic programming to solve the complex optimization task of de novo sequencing. We evaluated the method on a wide variety of species and found that DeepNovo considerably outperformed state of the art methods, achieving 7.7–22.9% higher accuracy at the amino acid level and 38.1–64.0% higher accuracy at the peptide level. We further used DeepNovo to automatically reconstruct the complete sequences of antibody light and heavy chains of mouse, achieving 97.5–100% coverage and 97.2–99.5% accuracy, without assisting databases. Moreover, DeepNovo is retrainable to adapt to any sources of data and provides a complete end-to-end training and prediction solution to the de novo sequencing problem. Not only does our study extend the deep learning revolution to a new field, but it also shows an innovative approach in solving optimization problems by using deep learning and dynamic programming. PMID:28720701

  20. As above, so below? Towards understanding inverse models in BCI

    NASA Astrophysics Data System (ADS)

    Lindgren, Jussi T.

    2018-02-01

    Objective. In brain-computer interfaces (BCI), measurements of the user’s brain activity are classified into commands for the computer. With EEG-based BCIs, the origins of the classified phenomena are often considered to be spatially localized in the cortical volume and mixed in the EEG. We investigate if more accurate BCIs can be obtained by reconstructing the source activities in the volume. Approach. We contrast the physiology-driven source reconstruction with data-driven representations obtained by statistical machine learning. We explain these approaches in a common linear dictionary framework and review the different ways to obtain the dictionary parameters. We consider the effect of source reconstruction on some major difficulties in BCI classification, namely information loss, feature selection and nonstationarity of the EEG. Main results. Our analysis suggests that the approaches differ mainly in their parameter estimation. Physiological source reconstruction may thus be expected to improve BCI accuracy if machine learning is not used or where it produces less optimal parameters. We argue that the considered difficulties of surface EEG classification can remain in the reconstructed volume and that data-driven techniques are still necessary. Finally, we provide some suggestions for comparing approaches. Significance. The present work illustrates the relationships between source reconstruction and machine learning-based approaches for EEG data representation. The provided analysis and discussion should help in understanding, applying, comparing and improving such techniques in the future.
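    The common linear dictionary view of EEG source reconstruction can be illustrated with a Tikhonov-regularized minimum-norm inverse for a toy forward model (the matrix sizes, random leadfield, and regularization weight are arbitrary assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model y = L s: 8 sensors, 20 sources.
n_sensors, n_sources = 8, 20
L = rng.standard_normal((n_sensors, n_sources))

def minimum_norm_estimate(y, L, lam=1e-3):
    """Tikhonov-regularized minimum-norm source estimate:
    s_hat = L^T (L L^T + lam*I)^{-1} y."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, y)

s_true = np.zeros(n_sources)
s_true[3] = 1.0                      # a single active source
y = L @ s_true                       # noiseless sensor data
s_hat = minimum_norm_estimate(y, L)  # spread-out, minimum-norm solution
```

    Because the problem is underdetermined (more sources than sensors), the estimate reproduces the data but spreads energy across sources; this information loss is one reason the abstract argues that data-driven techniques remain necessary even after reconstruction.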

  1. Polarization leakage in epoch of reionization windows - II. Primary beam model and direction-dependent calibration

    NASA Astrophysics Data System (ADS)

    Asad, K. M. B.; Koopmans, L. V. E.; Jelić, V.; Ghosh, A.; Abdalla, F. B.; Brentjens, M. A.; de Bruyn, A. G.; Ciardi, B.; Gehlot, B. K.; Iliev, I. T.; Mevius, M.; Pandey, V. N.; Yatawatta, S.; Zaroubi, S.

    2016-11-01

    Leakage of diffuse polarized emission into Stokes I caused by the polarized primary beam of the instrument might mimic the spectral structure of the 21-cm signal coming from the epoch of reionization (EoR), making their separation difficult. Therefore, understanding the polarimetric performance of the antenna is crucial for a successful detection of the EoR signal. Here, we have calculated the accuracy of the nominal model beam of the LOw Frequency ARray (LOFAR) in predicting the leakage from Stokes I to Q, U by comparing them with the corresponding leakage of compact sources actually observed in the 3C 295 field. We have found that the model beam has errors of ≤10 per cent on the predicted levels of leakage of ~1 per cent within the field of view, i.e. if the leakage is taken out perfectly using this model, the leakage will reduce to 10^-3 of the Stokes I flux. If similar levels of accuracy can be obtained in removing leakage from Stokes Q, U to I, we can say, based on the results of our previous paper, that the removal of this leakage using this beam model would ensure that the leakage is well below the expected EoR signal in almost the whole instrumental k-space of the cylindrical power spectrum. We have also shown here that direction-dependent calibration can remove instrumentally polarized compact sources, given an unpolarized sky model, very close to the local noise level.

  2. Experimental characterization and numerical simulation of riveted lap-shear joints using Rivet Element

    NASA Astrophysics Data System (ADS)

    Vivio, Francesco; Fanelli, Pierluigi; Ferracci, Michele

    2018-03-01

    In the aeronautical and automotive industries, the use of rivets for applications requiring several joining points is now very common. In spite of its very simple shape, a riveted junction has many contact surfaces and stress concentrations that make the local stiffness very difficult to calculate. To overcome this difficulty, finite element models with very dense meshes are commonly used for single-joint analysis, because accuracy is crucial for a correct structural analysis. However, when several riveted joints are present, the simulation becomes computationally too heavy, and significant restrictions to joint modelling are usually introduced, sacrificing the accuracy of the local stiffness evaluation. In this paper, we tested the accuracy of a rivet finite element presented in previous works by the authors. The structural behaviour of a lap-joint specimen with a riveted joint is simulated numerically and compared to experimental measurements. The Rivet Element, based on a closed-form solution of a reference theoretical model of the rivet joint, simulates the local and overall stiffness of the junction, combining high accuracy with a low degrees-of-freedom contribution. In this paper, the performance of the Rivet Element is compared to that of a non-linear FE model of the rivet, built with solid elements and a dense mesh, and to experimental data. The promising results allow the Rivet Element to be considered capable of simulating, with great accuracy, actual structures with several rivet connections.

  3. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The DS test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured-grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the DS test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to schemes of any order.

  4. Simultaneous 3D localization of multiple MR-visible markers in fully reconstructed MR images: proof-of-concept for subsecond position tracking.

    PubMed

    Thörmer, Gregor; Garnov, Nikita; Moche, Michael; Haase, Jürgen; Kahn, Thomas; Busse, Harald

    2012-04-01

    To determine whether a greatly reduced spatial resolution of fully reconstructed projection MR images can be used for the simultaneous 3D localization of multiple MR-visible markers and to assess the feasibility of a subsecond position tracking for clinical purposes. Miniature, inductively coupled RF coils were imaged in three orthogonal planes with a balanced steady-state free precession (SSFP) sequence and automatically localized using a two-dimensional template fitting and a subsequent three-dimensional (3D) matching of the coordinates. Precision, accuracy, speed and robustness of 3D localization were assessed for decreasing in-plane resolutions (0.6-4.7 mm). The feasibility of marker tracking was evaluated at the lowest resolution by following a robotically driven needle on a complex 3D trajectory. Average 3D precision and accuracy, sensitivity and specificity of localization ranged between 0.1 and 0.4 mm, 0.5 and 1.0 mm, 100% and 95%, and 100% and 96%, respectively. At the lowest resolution, imaging and localization took ≈350 ms and provided an accuracy of ≈1.0 mm. In the tracking experiment, the needle was clearly depicted on the oblique scan planes defined by the markers. Image-based marker localization at a greatly reduced spatial resolution is considered a feasible approach to monitor reference points or rigid instruments at subsecond update rates. Copyright © 2012 Elsevier Inc. All rights reserved.
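    The 2D template-fitting step described above can be sketched as a brute-force normalized cross-correlation search. The function name, the synthetic image, and the marker template below are our own illustrative choices, not the authors' implementation (which also includes subpixel fitting and a subsequent 3D matching of coordinates across the three orthogonal planes):

    ```python
    import numpy as np

    def localize_marker(image, template):
        """Find the (row, col) of the best template match via normalized
        cross-correlation; a sketch of 2D template fitting only."""
        th, tw = template.shape
        t = template - template.mean()
        best, best_pos = -np.inf, (0, 0)
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                patch = image[r:r + th, c:c + tw]
                p = patch - patch.mean()
                denom = np.sqrt((p * p).sum() * (t * t).sum())
                score = (p * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos

    # Synthetic low-resolution slice with one bright marker
    img = np.zeros((16, 16))
    img[5:8, 8:11] = [[0, 1, 0], [1, 3, 1], [0, 1, 0]]
    tmpl = np.array([[0, 1, 0], [1, 3, 1], [0, 1, 0]], dtype=float)
    print(localize_marker(img, tmpl))  # top-left corner of best match -> (5, 8)
    ```

    Normalizing by both patch and template energy makes the score insensitive to overall intensity scaling, which matters when marker brightness varies between slices.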

  5. Distributed Power Allocation for Wireless Sensor Network Localization: A Potential Game Approach.

    PubMed

    Ke, Mingxing; Li, Ding; Tian, Shiwei; Zhang, Yuli; Tong, Kaixiang; Xu, Yuhua

    2018-05-08

    The problem of distributed power allocation in wireless sensor network (WSN) localization systems is investigated in this paper using a game-theoretic approach. Existing research focuses on minimizing the localization errors of individual agent nodes over all anchor nodes subject to power budgets. When the service area and the distribution of target nodes are considered, finding the optimal trade-off between localization accuracy and power consumption becomes a new and critical task. To cope with this issue, we propose a power allocation game in which each anchor node minimizes the squared position error bound (SPEB) of the service area penalized by its individual power. We prove that this power allocation game is an exact potential game with at least one pure Nash equilibrium (NE). In addition, we prove the existence of an ϵ-equilibrium point, a refinement of the NE that better-response dynamics can reach. Analytical and simulation results demonstrate that: (i) when prior distribution information is available, the proposed strategies achieve better localization accuracy than uniform strategies; (ii) when prior distribution information is unknown, the proposed strategies outperform power management strategies based on the second-order cone program (SOCP) for particular agent nodes after obtaining the estimated distribution of agent nodes. The proposed strategies also provide an instructive trade-off between power consumption and localization accuracy.
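    As a rough illustration of why better-response dynamics converge in an exact potential game, here is a toy power-allocation game. The shared cost term is a simple stand-in for the paper's SPEB objective; the power levels, constants, and cost form are invented for the sketch:

    ```python
    # Toy better-response dynamics for a power-allocation potential game.
    # Each anchor i chooses a power level from LEVELS to minimize a proxy cost:
    #   cost_i = A / (1 + total power)   (shared localization-error proxy)
    #          + LAMBDA * p_i            (individual power penalty)
    # The shared term depends only on the sum of powers, so this toy game is
    # an exact potential game and better-response dynamics must converge.

    LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0]
    A, LAMBDA, N = 10.0, 1.0, 4

    def cost(i, p):
        return A / (1.0 + sum(p)) + LAMBDA * p[i]

    def better_response(p):
        p = list(p)
        changed = True
        while changed:
            changed = False
            for i in range(N):
                best = min(LEVELS, key=lambda lv: cost(i, p[:i] + [lv] + p[i+1:]))
                if cost(i, p[:i] + [best] + p[i+1:]) < cost(i, p) - 1e-12:
                    p[i] = best
                    changed = True
        return p

    eq = better_response([0.0] * N)
    print(eq)  # a pure Nash equilibrium of the toy game
    ```

    Because every unilateral cost improvement strictly decreases the common potential function, the loop cannot cycle and stops at a pure NE of the toy game.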

  6. Strategy for the absolute neutron emission measurement on ITER.

    PubMed

    Sasao, M; Bertalot, L; Ishikawa, M; Popovichev, S

    2010-10-01

    An accuracy of 10% is required for the absolute neutron emission measurement on ITER. To achieve this accuracy, a functional combination of several types of neutron measurement subsystems, cross calibration among them, and in situ calibration are needed. Neutron transport calculations show that a suitable calibration source is a DT/DD neutron generator with a source strength higher than 10^10 n/s (neutrons/second) for DT and 10^8 n/s for DD. Calibrating the flux monitors, profile monitors, and the activation system with such a source will take at least eight weeks.

  7. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    PubMed

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is insufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is noticeably improved.

  8. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections

    PubMed Central

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is insufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is noticeably improved. PMID:27893761

  9. Spinal arteriovenous shunts: accuracy of shunt detection, localization, and subtype discrimination using spinal magnetic resonance angiography and manual contrast injection using a syringe.

    PubMed

    Unsrisong, Kittisak; Taphey, Siriporn; Oranratanachai, Kanokporn

    2016-04-01

    The objective of this study was to evaluate the accuracy of fast 3D contrast-enhanced spinal MR angiography (MRA) using a manual syringe contrast injection technique for detecting and evaluating spinal arteriovenous shunts (AVSs). This was a retrospective study of 15 patients and 20 spinal MRA and catheter angiography studies. The accuracy of using spinal MRA to detect spinal AVSs, localize shunts, and discriminate the subtype and dominant arterial feeder of the AVS was studied. There were 14 pretherapeutic and 6 posttherapeutic follow-up spinal MRA and catheter spinal angiography studies. A spinal AVS was demonstrated in 17 of 20 studies. Spinal MRA demonstrated 100% sensitivity for detecting spinal AVSs, with no false-negative results. A 97% accuracy rate for AVS subtype discrimination and shunt-level localization was achieved using this study's diagnostic criteria. Detection of the dominant arterial feeder was possible in only 9 of the 17 cases (53%). The fast 3D contrast-enhanced MRA technique performed with manual syringe contrast injection can detect the presence of a spinal AVS, locate the shunt level, and discriminate the AVS subtype in most cases, but is limited in detecting small arterial feeders.

  10. Usefulness of composite methionine-positron emission tomography/3.0-tesla magnetic resonance imaging to detect the localization and extent of early-stage Cushing adenoma.

    PubMed

    Ikeda, Hidetoshi; Abe, Takehiko; Watanabe, Kazuo

    2010-04-01

    Fifty to eighty percent of Cushing disease is diagnosed by typical endocrine responses. Recently, the number of diagnoses of Cushing disease without typical Cushing syndrome has been increasing; therefore, improving ways to determine the localization of the adenoma and making an early diagnosis are important. This study was undertaken to determine the present diagnostic accuracy for Cushing microadenoma and to compare the diagnostic accuracy of MR imaging and PET/MR imaging. During the past 3 years the authors analyzed the diagnostic accuracy in a series of 35 patients with Cushing adenoma that was verified by surgical pituitary exploration. All 35 cases of Cushing disease, including 20 cases of "overt" and 15 cases of "preclinical" Cushing disease, were studied. Superconductive MR images (1.5 or 3.0 T) and composite images from FDG-PET or methionine (MET)-PET and 3.0-T MR imaging were compared with the localization of adenomas verified by surgery. The diagnostic accuracy of superconductive MR imaging for detecting the localization of Cushing microadenoma was only 40%. The causes of the unsatisfactory results for superconductive MR imaging were false-negative results (10 cases), false-positive results (6 cases), and instances of double pituitary adenomas (3 cases). In contrast, the accuracy of microadenoma localization using MET-PET/3.0-T MR imaging was 100% and that of FDG-PET/3.0-T MR imaging was 73%. Moreover, the adenoma location was better delineated on MET-PET/MR images than on FDG-PET/MR images. There was no significant difference in the maximum standard uptake value of adenomas evaluated by MET-PET between preclinical Cushing disease and overt Cushing disease. Composite MET-PET/3.0-T MR imaging improves the delineation of Cushing microadenoma and offers high-quality detectability for early-stage Cushing adenoma.

  11. Evaluation of the accuracy of GPS as a method of locating traffic collisions.

    DOT National Transportation Integrated Search

    2004-06-01

    The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. : The analysis s...

  12. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (it requires collecting independent, well-defined test points), but quantitative analysis of relative positional error is feasible.

  13. Exploring What Determines the Use of Forecasts of Varying Time Periods in Guanacaste, Costa Rica

    NASA Astrophysics Data System (ADS)

    Babcock, M.; Wong-Parodi, G.; Grossmann, I.; Small, M. J.

    2016-12-01

    Weather and climate forecasts are promoted as ways to improve water management, especially in the face of changing environmental conditions. However, studies indicate many stakeholders who may benefit from such information do not use it. This study sought to better understand which personal factors (e.g., trust in forecast sources, perceptions of accuracy) were important determinants of the use of 4-day, 3-month, and 12-month rainfall forecasts by stakeholders in water management-related sectors in the seasonally dry province of Guanacaste, Costa Rica. From August to October 2015, we surveyed 87 stakeholders from a mix of government agencies, local water committees, large farms, tourist businesses, environmental NGOs, and the public. The results of an exploratory factor analysis suggest that trust in "informal" forecast sources (traditional methods, family advice) and in "formal" sources (government, university and private company science) are independent of each other. The results of logistic regression analyses suggest that 1) greater understanding of forecasts is associated with a greater probability of 4-day and 3-month forecast use, but not 12-month forecast use; 2) a greater probability of 3-month forecast use is associated with a lower level of trust in "informal" sources; and 3) feeling less secure about water resources and regularly using many sources of information (specifically formal meetings and reports) are each associated with a greater probability of using 12-month forecasts. While limited by the sample size, and affected by the factoring method and regression model assumptions, these results suggest that while forecasts of all time scales are used to some extent, local decision makers' use of 4-day and 3-month forecasts appears to be more intrinsically motivated (based on their level of understanding and trust), whereas the use of 12-month forecasts seems to be more motivated by a sense of requirement or mandate.

  14. Improving the accuracy of k-nearest neighbor using local mean based and distance weight

    NASA Astrophysics Data System (ADS)

    Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.

    2018-03-01

    In k-nearest neighbor (kNN) classification, the class of new data is normally determined by a simple majority vote, which may ignore the similarities among data and allow the occurrence of a double majority class, which can lead to misclassification. In this research, we propose an approach that resolves the majority-vote issues by calculating a distance weight using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The accuracy of the results is compared to that of the original kNN method using several datasets from the UCI Machine Learning Repository, Kaggle and Keel, such as ionosphere, iris, voice gender, lower back pain, and thyroid. In addition, the proposed method is also tested using real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN was able to increase the classification accuracy of kNN, with an average accuracy increase on test data of 2.45% and a highest increase of 3.71%, occurring on the lower back pain symptoms dataset. For the real data, an accuracy increase as high as 5.16% is obtained.
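    A minimal sketch of how LMKNN and DWKNN can be combined: for each class, the k nearest training samples of that class form a local mean, and the class is scored by an inverse-distance weight to that mean. The function names and the exact weighting form are our assumptions, not necessarily the paper's formulation:

    ```python
    import numpy as np

    def lm_dw_knn(X_train, y_train, x, k=3):
        """Classify x by the distance-weighted local class means."""
        scores = {}
        for cls in np.unique(y_train):
            Xc = X_train[y_train == cls]
            d = np.linalg.norm(Xc - x, axis=1)
            nearest = Xc[np.argsort(d)[:k]]     # k nearest samples of this class
            local_mean = nearest.mean(axis=0)   # LMKNN step
            dist = np.linalg.norm(local_mean - x)
            scores[cls] = 1.0 / (dist + 1e-12)  # DWKNN-style distance weight
        return max(scores, key=scores.get)

    # Tiny illustration: two well-separated clusters
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    print(lm_dw_knn(X, y, np.array([2.8, 3.1])))  # -> 1
    ```

    Using the local mean of each class rather than a raw majority vote removes the double-majority ambiguity, since the per-class scores are continuous.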

  15. Accuracy and speed feedback: Global and local effects on strategy use

    PubMed Central

    Touron, Dayna R.; Hertzog, Christopher

    2013-01-01

    Background Skill acquisition often involves a shift from an effortful algorithm-based strategy to more fluent memory-based performance. Older adults’ slower strategy transitions can be ascribed to both slowed learning and metacognitive factors. Experimenters often provide feedback on response accuracy; this emphasis may either inadvertently reinforce older adults’ conservatism or highlight that retrieval is generally quite accurate. RT feedback can lead to a more rapid shift to retrieval (Hertzog, Touron, & Hines, 2007). Methods This study parametrically varied trial-by-trial feedback to examine whether strategy shifts in the noun-pair task in younger (M = 19) and older adults (M = 67) were influenced by type of performance feedback: none, trial accuracy, trial RT, or both accuracy and RT. Results Older adults who received accuracy feedback retrieved more often, particularly on difficult rearranged trials, and participants who received speed feedback performed the scanning strategy more quickly. Age differences were also obtained in local (trial-level) reactivity to task performance, but these were not affected by feedback. Conclusions Accuracy and speed feedback had distinct global (general) influences on task strategies and performance. In particular, it appears that the standard practice of providing trial-by-trial accuracy feedback might facilitate older adults’ use of retrieval strategies in skill acquisition tasks. PMID:24785594

  16. SU-G-JeP3-01: A Method to Quantify Lung SBRT Target Localization Accuracy Based On Digitally Reconstructed Fluoroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lafata, K; Ren, L; Cai, J

    2016-06-15

    Purpose: To develop a methodology based on digitally reconstructed fluoroscopy (DRF) to quantitatively assess the target localization accuracy of lung SBRT, and to evaluate it using both a dynamic digital phantom and a patient dataset. Methods: For each treatment field, a 10-phase DRF is generated based on the planning 4DCT. Each frame is pre-processed with a morphological top-hat filter, and the corresponding beam apertures are projected to each detector plane. A template-matching algorithm based on cross-correlation is used to detect the tumor location in each frame. Tumor motion relative to the beam aperture is extracted in the superior-inferior direction based on each frame's impulse response to the template, and the mean tumor position (MTP) is calculated as the average tumor displacement. The DRF template coordinates are then transferred to the corresponding MV-cine dataset, which is retrospectively filtered as above. The treatment MTP is calculated within each field's projection space, relative to the DRF-defined template. The field's localization error is defined as the difference between the DRF-derived MTP (planning) and the MV-cine-derived MTP (delivery). A dynamic digital phantom was used to assess the algorithm's ability to detect intra-fractional changes in patient alignment, by simulating different spatial variations in the MV-cine and calculating the corresponding change in MTP. Inter- and intra-fractional variation, IGRT accuracy, and filtering effects were investigated on a patient dataset. Results: Phantom results demonstrated high accuracy in detecting both translational and rotational variation. The lowest localization error on the patient dataset was achieved at each fraction's first field (mean = 0.38 mm), with Fx3 demonstrating a particularly strong correlation between intra-fractional motion-caused localization error and treatment progress. Filtering significantly improved tracking visibility in both the DRF and MV-cine images. Conclusion: We have developed and evaluated a methodology to quantify lung SBRT target localization accuracy based on digitally reconstructed fluoroscopy. Our approach may be useful in reducing treatment margins to optimize lung SBRT outcomes. R01-184173.
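    The MTP-based error definition above reduces to a difference of two averages; a toy numeric sketch with invented superior-inferior displacement values:

    ```python
    import numpy as np

    # The mean tumor position (MTP) is the average superior-inferior
    # displacement over frames; the field's localization error is the
    # delivery (MV-cine) MTP minus the planning (DRF) MTP. All values
    # below are invented for illustration.

    def mean_tumor_position(displacements_mm):
        return float(np.mean(displacements_mm))

    drf_si = [1.2, 3.4, 5.1, 3.3, 1.0, -0.8, -2.1, -0.9, 0.4, 1.1]  # planning
    mv_si = [1.5, 3.9, 5.6, 3.8, 1.4, -0.4, -1.6, -0.5, 0.9, 1.6]   # delivery

    error = mean_tumor_position(mv_si) - mean_tumor_position(drf_si)
    print(round(error, 2))  # localization error in mm
    ```

    Averaging over the breathing cycle makes the metric robust to per-frame tracking noise, at the cost of hiding transient excursions.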

  17. Behavioral and modeling studies of sound localization in cats: effects of stimulus level and duration

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.

    2013-01-01

    Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278

  18. SAIP2014, the 59th Annual Conference of the South African Institute of Physics

    NASA Astrophysics Data System (ADS)

    Engelbrecht, Chris; Karataglidis, Steven

    2015-04-01

    The International Celestial Reference Frame (ICRF) was adopted by the International Astronomical Union (IAU) in 1997. The current standard, the ICRF-2, is based on Very Long Baseline Interferometric (VLBI) radio observations of the positions of 3414 extragalactic radio reference sources. The angular resolution achieved by the VLBI technique is on a scale of milliarcseconds to sub-milliarcseconds and defines the ICRF with the highest accuracy available at present. An ideal reference source used for celestial reference frame work should be unresolved or point-like on these scales. However, extragalactic radio sources, such as those that define and maintain the ICRF, can exhibit spatially extended structures on sub-milliarcsecond scales that may vary both in time and frequency. This variability can introduce a significant error in the VLBI measurements, thereby degrading the accuracy of the estimated source position. Reference source density in the Southern celestial hemisphere is also poor compared to the Northern hemisphere, mainly due to the limited number of radio telescopes in the south. In order to define the ICRF with the highest accuracy, observational efforts are required to find more compact sources and to monitor their structural evolution. In this paper we show that the astrometric VLBI sessions can be used to obtain source structure information, and we present preliminary imaging results for the source J1427-4206 at 2.3 and 8.4 GHz, which show that the source is compact and suitable as a reference source.

  19. Estimation of genomic prediction accuracy from reference populations with varying degrees of relationship.

    PubMed

    Lee, S Hong; Clark, Sam; van der Werf, Julius H J

    2017-01-01

    Genomic prediction is emerging in a wide range of fields, including animal and plant breeding, risk prediction in human precision medicine, and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be functions of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available from each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than less-related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
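    For intuition, a widely used approximation (in the spirit of Daetwyler and colleagues) expresses expected prediction accuracy through the reference size N, the heritability h², and the effective number of chromosome segments Me; closer relatives correspond to a smaller Me. The Me values below are purely illustrative, not from the paper:

    ```python
    import math

    def expected_accuracy(n, h2, me):
        """Classic approximation: r = sqrt(N*h2 / (N*h2 + Me))."""
        return math.sqrt(n * h2 / (n * h2 + me))

    # Close relatives behave like a population with a much smaller Me,
    # so the same amount of data buys more accuracy:
    for me in (500, 5000, 50000):
        print(me, round(expected_accuracy(n=10000, h2=0.5, me=me), 3))
    ```

    The same expression also shows the diminishing-returns effect reported above: when Me is small (close relatives), the accuracy is already near its plateau, so adding data or markers helps less.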

  20. Based on the CSI regional segmentation indoor localization algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Xi; Lin, Wei; Lan, Jingwei

    2017-08-01

    To address the problems of high cost and low accuracy in indoor positioning, a regional segmentation method based on Channel State Information (CSI) is proposed. Because CSI is stable and robust against multipath effects, we use it to segment the location area. The method acquires the CSI of different links to pinpoint the region in which the target is located. This improves positioning accuracy and reduces the cost of the fingerprint localization algorithm.
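    A minimal fingerprint-matching sketch of region-level localization, assuming each region is summarized by its mean offline CSI amplitude vector and matched online with a Euclidean nearest-fingerprint rule (all names and data are illustrative simplifications of the approach described):

    ```python
    import numpy as np

    def build_fingerprints(samples_by_region):
        """Offline phase: average the CSI amplitude samples per region."""
        return {r: np.mean(s, axis=0) for r, s in samples_by_region.items()}

    def locate(fingerprints, csi):
        """Online phase: assign a measurement to the closest fingerprint."""
        return min(fingerprints, key=lambda r: np.linalg.norm(fingerprints[r] - csi))

    # Toy offline survey: CSI amplitudes over 3 subcarriers in two regions
    offline = {
        "A": np.array([[1.0, 0.2, 0.1], [1.1, 0.3, 0.1]]),
        "B": np.array([[0.1, 1.0, 0.9], [0.2, 1.1, 1.0]]),
    }
    fp = build_fingerprints(offline)
    print(locate(fp, np.array([0.15, 1.05, 0.95])))  # -> B
    ```

    Matching at the region level keeps the fingerprint database small, which is the cost reduction the abstract refers to, at the price of coarser position estimates.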

  1. Comparative Accuracy Evaluation of Fine-Scale Global and Local Digital Surface Models: The Tshwane Case Study I

    NASA Astrophysics Data System (ADS)

    Breytenbach, A.

    2016-10-01

    Conducted in the City of Tshwane, South Africa, this study set about to test the accuracy of DSMs derived from different remotely sensed data locally. VHR digital mapping camera stereo-pairs, tri-stereo imagery collected by a Pléiades satellite and data detected from the Tandem-X InSAR satellite configuration were fundamental in the construction of seamless DSM products at different postings, namely 2 m, 4 m and 12 m. The three DSMs were sampled against independent control points originating from validated airborne LiDAR data. The reference surfaces were derived from the same dense point cloud at grid resolutions corresponding to those of the samples. The absolute and relative positional accuracies were computed using well-known DEM error metrics and accuracy statistics. Overall vertical accuracies were also assessed and compared across seven slope classes and nine primary land cover classes. Although all three DSMs displayed significantly more vertical errors where solid waterbodies, dense natural and/or alien woody vegetation and, in a lesser degree, urban residential areas with significant canopy cover were encountered, all three surpassed their expected positional accuracies overall.

  2. Dem Local Accuracy Patterns in Land-Use/Land-Cover Classification

    NASA Astrophysics Data System (ADS)

    Katerji, Wassim; Farjas Abadia, Mercedes; Morillo Balsera, Maria del Carmen

    2016-01-01

    Global and nationwide DEMs do not preserve the same height accuracy throughout the area of study. Instead of assuming a single RMSE value for the whole area, this study proposes a vario-model that divides the area into sub-regions depending on the land-use/land-cover (LULC) classification and assigns a local accuracy to each zone, as these areas share similar terrain formation and roughness and tend to have similar DEM accuracies. A pilot study over Lebanon using the SRTM and ASTER DEMs, combined with a set of 1,105 randomly distributed ground control points (GCPs), showed that even though the input DEMs have different spatial and temporal resolution and were collected using different techniques, their accuracy varied similarly across different LULC classes. Furthermore, validating the generated vario-models proved that they provide a closer representation of the accuracy to the validating GCPs than the conventional RMSE, by 94% and 86% for the SRTM and ASTER respectively. Geostatistical analysis of the input and output datasets showed that the results have a normal distribution, which supports the generalization of the proven hypothesis, making this finding applicable to other input datasets anywhere around the world.
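    The vario-model idea of a per-LULC-class local accuracy can be sketched as a grouped RMSE over GCP height residuals (the residuals and class labels below are invented for illustration):

    ```python
    import numpy as np

    def local_rmse(residuals, classes):
        """RMSE of DEM height residuals grouped by LULC class."""
        out = {}
        for c in set(classes):
            r = np.asarray([res for res, cl in zip(residuals, classes) if cl == c])
            out[c] = float(np.sqrt(np.mean(r ** 2)))
        return out

    # DEM-minus-GCP height residuals (m) and the LULC class of each GCP
    res = [0.5, -0.4, 0.6, 3.0, -2.5, 2.8]
    cls = ["bare", "bare", "bare", "forest", "forest", "forest"]
    rmse = local_rmse(res, cls)
    print(rmse)
    ```

    A single global RMSE over these six residuals would hide the fact that the forested zone is roughly five times less accurate than the bare zone, which is exactly what the vario-model exposes.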

  3. Sky and Elemental Planetary Mapping Via Gamma Ray Emissions

    NASA Technical Reports Server (NTRS)

    Roland, John M.

    2011-01-01

    Low-energy gamma ray emissions (≈30 keV to ≈30 MeV) are significant to astrophysics because many interesting objects emit their primary energy in this regime. As such, there has been increasing demand for a complete map of the gamma ray sky, but many experiments to do so have encountered obstacles. Using an innovative method of applying the Radon transform to data from BATSE (the Burst And Transient Source Experiment) on NASA's CGRO (Compton Gamma-Ray Observatory) mission, we have circumvented many of these issues and successfully localized many known sources to 0.5-1 deg accuracy. Our method, which is based on a simple 2-dimensional planar back-projection approximation of the inverse Radon transform (familiar from medical CAT-scan technology), can thus be used to image the entire sky and locate new gamma ray sources, specifically in energy bands between 200 keV and 2 MeV which have not been well surveyed to date. Samples of these results will be presented. This same technique can also be applied to elemental planetary surface mapping via gamma ray spectroscopy. Due to our method's simplicity and power, it could potentially improve a current map's resolution by a significant factor.
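    A minimal sketch of the planar back-projection idea on a synthetic grid, assuming ideal delta-function projections of a single point source: each measured profile is smeared back across the image plane along its projection direction, and the sum peaks at the source (geometry and data are invented, not BATSE's):

    ```python
    import numpy as np

    def back_project(profiles, angles, size):
        """Sum each 1D profile back over the pixels that project into it."""
        img = np.zeros((size, size))
        coords = np.arange(size) - size // 2
        X, Y = np.meshgrid(coords, coords)
        for prof, a in zip(profiles, angles):
            # projection coordinate of every pixel for this view angle
            t = X * np.cos(a) + Y * np.sin(a)
            idx = np.clip(np.round(t).astype(int) + size // 2, 0, size - 1)
            img += prof[idx]
        return img

    size = 33
    src = (5, -3)  # true source offset (x, y)
    angles = np.linspace(0, np.pi, 18, endpoint=False)
    profiles = []
    for a in angles:
        p = np.zeros(size)
        t = int(round(src[0] * np.cos(a) + src[1] * np.sin(a))) + size // 2
        p[t] = 1.0  # ideal delta-function projection of a point source
        profiles.append(p)

    img = back_project(profiles, angles, size)
    peak = np.unravel_index(img.argmax(), img.shape)
    print((peak[1] - size // 2, peak[0] - size // 2))  # recovered (x, y) -> (5, -3)
    ```

    Simple back-projection like this leaves a blurred halo around the source; CAT-scan practice sharpens it by filtering the profiles first, which is one reason the abstract calls the planar version an approximation.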

  4. First detection of precursory ground inflation of a small phreatic eruption by InSAR

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tomokazu; Morishita, Yu; Munekane, Hiroshi

    2018-06-01

    Phreatic eruptions are caused by pressurization of geothermal fluid sources at shallow levels. They are relatively small compared to typical magmatic eruptions but can be very hazardous. However, owing to their small magnitudes, their occurrences are difficult to predict. Here we show the detection of locally distributed ground inflation preceding a small phreatic eruption at the Hakone volcano, Japan, through the application of interferometric synthetic aperture radar analysis. The ground inflation preceded the eruption at a slow speed of ∼5 mm/month with a spatial extent of ∼200 m in the early stage, and then accelerated 2 months before the eruption, which occurred for the first time in 800-900 yr. The ground uplift reached ∼30 cm, and the eruption occurred near the most deformed part. The deformation speed correlated well with inflation of a spherical source located 4.8 km below sea level, suggesting that heat and/or volcanic fluid supply from the spherical source, possibly a magma reservoir, directly drove the subsurface hydrothermal activity. Our results demonstrate that high-spatial-resolution deformation data can be a good indicator of subsurface pressure conditions, with pinpoint spatial accuracy, during the preparatory process of phreatic eruptions.
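    The spherical pressure source invoked here is the classical Mogi point source, which has a closed-form surface displacement in an elastic half-space. A sketch of the vertical component under the standard expression (variable names are ours, and the constant depends on the assumed Poisson's ratio, so treat this as a generic illustration rather than the paper's inversion):

    ```python
    import math

    def mogi_uz(r, depth, dV, nu=0.25):
        """Vertical surface displacement of a Mogi (spherical point) source:
        uz = (1 - nu) * dV / pi * depth / (r^2 + depth^2)^(3/2),
        with r the horizontal distance from the source axis (m), depth the
        source depth (m), dV the source volume change (m^3), nu Poisson's ratio."""
        return (1.0 - nu) * dV / math.pi * depth / (r * r + depth * depth) ** 1.5
    ```

    Uplift peaks directly above the source and decays with horizontal distance, consistent with the eruption occurring near the most deformed part of the inflation pattern.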

  5. Audio reproduction for personal ambient home assistance: concepts and evaluations for normal-hearing and hearing-impaired persons.

    PubMed

    Huber, Rainer; Meis, Markus; Klink, Karin; Bartsch, Christian; Bitzer, Joerg

    2014-01-01

    Within the Lower Saxony Research Network Design of Environments for Ageing (GAL), a personal activity and household assistant (PAHA), an ambient reminder system, has been developed. One of its central output modalities for interacting with the user is sound. The study presented here evaluated three different system technologies for sound reproduction using up to five loudspeakers, including the "phantom source" concept. Moreover, a technology for hearing loss compensation for the mostly older users of the PAHA was implemented and evaluated. Evaluation experiments with 21 normal-hearing and hearing-impaired test subjects were carried out. The results show that in direct comparison of the sound presentation concepts, presentation by the single TV speaker was preferred most, whereas the phantom source concept received the highest acceptance ratings as far as the general concept is concerned. The localization accuracy of the phantom source concept was good as long as the exact listening position was known to the algorithm and speech stimuli were used. Most subjects preferred the original signals over the pre-processed, dynamically compressed signals, although processed speech was often described as being clearer.
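    A "phantom source" between two loudspeakers is conventionally produced by amplitude panning. The abstract does not describe the PAHA's actual rendering algorithm, so the following constant-power (sine/cosine law) panner is a generic illustration of the idea only:

    ```python
    import math

    def constant_power_pan(source_deg, speaker_deg=30.0):
        """Left/right gains that place a phantom source at source_deg between
        two loudspeakers at +/- speaker_deg, using the constant-power
        sine/cosine panning law, so gl^2 + gr^2 == 1 at every position."""
        p = max(-1.0, min(1.0, source_deg / speaker_deg))   # normalize to -1..1
        theta = (p + 1.0) * math.pi / 4.0                   # map to 0..pi/2
        return math.cos(theta), math.sin(theta)
    ```

    Because the law assumes a listener on the symmetry axis, gains computed for the wrong listening position shift the perceived source, which matches the finding that localization was good only when the exact listening position was known.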

  6. Locating and Modeling Regional Earthquakes with Broadband Waveform Data

    NASA Astrophysics Data System (ADS)

    Tan, Y.; Zhu, L.; Helmberger, D.

    2003-12-01

    Retrieving the source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location, and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory stage in places with a dense seismic network, such as TriNet in Southern California, revisiting historical events in these places, or effectively investigating small events in real time in the many other places where normally only a few local waveforms plus some short-period recordings are available, remains a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time, and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh, and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh, and Love waves at relatively long periods (4-100 sec for Pnl, 8-100 sec for surface waves), except for a few anomalous paths involving greater structural complexity. Meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh, and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for the available waveform data, which could come from as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. 
The method requires a Green's function library constructed from a regionalized 1-D model together with the necessary calibration information, and adopts a grid search strategy for both hypocenter and focal mechanism. We show that the whole process can be easily automated and can routinely provide reliable source parameter estimates with a couple of broadband stations. Two applications, in the Tibet Plateau and Southern California, will be presented along with comparisons of results against other methods.
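    The timing corrections described above can be measured, per phase, as the shift that best aligns observed and synthetic waveforms. A minimal sketch using normalized cross-correlation over a grid of candidate shifts (our own illustrative implementation, not the authors' code):

    ```python
    import math

    def best_shift_corr(obs, syn, max_shift):
        """Grid-search the time shift (in samples) that maximizes normalized
        cross-correlation between observed and synthetic waveforms; the best
        shift plays the role of a per-phase timing correction."""
        def ncc(a, b):
            n = min(len(a), len(b))
            a, b = a[:n], b[:n]
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            if na == 0.0 or nb == 0.0:
                return 0.0
            return sum(x * y for x, y in zip(a, b)) / (na * nb)

        best_c, best_s = -2.0, 0
        for s in range(-max_shift, max_shift + 1):
            c = ncc(obs[s:], syn) if s >= 0 else ncc(obs, syn[-s:])
            if c > best_c:
                best_c, best_s = c, s
        return best_c, best_s
    ```

    Applying this separately to the Pnl, Rayleigh, and Love windows yields the three independent corrections that calibrate each path.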

  7. A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.

    PubMed

    Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan

    2016-07-01

    Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case-control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. 
Further work will be needed to compare this method to more traditional single-source localization tests.
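    Scoring localization responses in such a scene requires comparing reported and true azimuths with wrap-around at 360°. A small helper of the kind such an analysis might use (illustrative, not from the study):

    ```python
    def angular_error(reported_deg, true_deg):
        """Smallest absolute angular difference between two azimuths, in
        degrees, accounting for wrap-around (e.g. 350 vs 10 -> 20)."""
        d = abs(reported_deg - true_deg) % 360.0
        return min(d, 360.0 - d)
    ```

    Averaging this error over all correctly counted talkers in a scene gives a per-scene localization accuracy that can be compared across set sizes and listener groups.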

  8. Energy-based dosimetry of low-energy, photon-emitting brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Malin, Martha J.

    Model-based dose calculation algorithms (MBDCAs) for low-energy, photon-emitting brachytherapy sources have advanced to the point where the algorithms may be used in clinical practice. Before these algorithms can be used, a methodology must be established to verify the accuracy of the source models used by the algorithms. Additionally, the source strength metric for these algorithms must be established. This work explored the feasibility of verifying the source models used by MBDCAs by measuring the differential photon fluence emitted from the encapsulation of the source. The measured fluence could be compared to that modeled by the algorithm to validate the source model. This work examined how the differential photon fluence varied with position and angle of emission from the source, and the resolution that these measurements would require for dose computations to be accurate to within 1.5%. Both the spatial and angular resolution requirements were determined. The techniques used to determine the resolution required for measurements of the differential photon fluence were applied to determine why dose-rate constants determined using a spectroscopic technique disagreed with those computed using Monte Carlo techniques. The discrepancy between the two techniques had been previously published, but the cause of the discrepancy was not known. This work determined the impact that some of the assumptions used by the spectroscopic technique had on the accuracy of the calculation. The assumption of isotropic emission was found to cause the largest discrepancy in the spectroscopic dose-rate constant. Finally, this work improved the instrumentation used to measure the rate at which energy leaves the encapsulation of a brachytherapy source. This quantity is called emitted power (EP), and is presented as a possible source strength metric for MBDCAs. A calorimeter that measured EP was designed and built. 
The theoretical framework that the calorimeter relied upon to measure EP was established. Four clinically relevant 125I brachytherapy sources were measured with the instrument. The accuracy of the measured EP was compared to an air-kerma strength-derived EP to test the accuracy of the instrument. The instrument was accurate to within 10%, with three out of the four source measurements accurate to within 4%.

  9. The Herschel-ATLAS Data Release 2. Paper II. Catalogs of Far-infrared and Submillimeter Sources in the Fields at the South and North Galactic Poles

    NASA Astrophysics Data System (ADS)

    Maddox, S. J.; Valiante, E.; Cigan, P.; Dunne, L.; Eales, S.; Smith, M. W. L.; Dye, S.; Furlanetto, C.; Ibar, E.; de Zotti, G.; Millard, J. S.; Bourne, N.; Gomez, H. L.; Ivison, R. J.; Scott, D.; Valtchanov, I.

    2018-06-01

    The Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS) is a survey of 660 deg2 with the PACS and SPIRE cameras in five photometric bands: 100, 160, 250, 350, and 500 μm. This is the second of three papers describing the data release for the large fields at the south and north Galactic poles (NGP and SGP). In this paper we describe the catalogs of far-infrared and submillimeter sources for the NGP and SGP, which cover 177.1 deg2 and 303.4 deg2, respectively. The catalogs contain 118,908 sources for the NGP field and 193,527 sources for the SGP field detected at more than 4σ significance in any of the 250, 350, or 500 μm bands. The source detection is based on the 250 μm map, and we present photometry in all five bands for each source, including aperture photometry for sources known to be extended. The rms positional accuracy for the faintest sources is about 2.4 arcsec in both R.A. and decl. We present a statistical analysis of the catalogs and discuss the practical issues—completeness, reliability, flux boosting, accuracy of positions, accuracy of flux measurements—necessary to use the catalogs for astronomical projects.

  10. Methane Leak Detection and Emissions Quantification with UAVs

    NASA Astrophysics Data System (ADS)

    Barchyn, T.; Fox, T. A.; Hugenholtz, C.

    2016-12-01

    Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, "drones") could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to the source (due to limited range) and often of lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position ("leak detection") and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and a field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from the source, and the non-Gaussian shape of close-range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close-range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
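    The forward Gaussian plume model used in the iterative localization fits has a standard closed form. A sketch with ground reflection included; the dispersion coefficients σy and σz, which in practice grow with downwind distance and atmospheric stability class, are passed in directly here for simplicity:

    ```python
    import math

    def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
        """Steady-state concentration from a continuous point source.
        q: emission rate (kg/s), u: mean wind speed (m/s), y: crosswind
        offset (m), z: receptor height (m), h: source height (m).
        Includes the image-source term for reflection at the ground."""
        lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
        vertical = (math.exp(-((z - h) ** 2) / (2.0 * sigma_z ** 2)) +
                    math.exp(-((z + h) ** 2) / (2.0 * sigma_z ** 2)))
        return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
    ```

    Because concentration falls off as a Gaussian in the crosswind offset y, an error in the assumed wind direction translates directly into an error in y, which is why the localization is so sensitive to wind direction data.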

  11. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two parts. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
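    The CSP coefficient is what is now widely known as GCC-PHAT: whiten the cross-power spectrum of two microphone signals to unit magnitude, transform back to the lag domain, and pick the lag with the largest peak as the time delay of arrival. A self-contained sketch using a naive O(n²) DFT (illustrative only; a real implementation would use an FFT and map the delay to an angle via the array geometry):

    ```python
    import cmath

    def csp_delay(x1, x2):
        """Estimate the delay (in samples) of x2 relative to x1 with the CSP
        (cross-power spectrum phase, a.k.a. GCC-PHAT) method."""
        n = len(x1)

        def dft(x):
            return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n)) for k in range(n)]

        X1, X2 = dft(x1), dft(x2)
        phat = []
        for a, b in zip(X1, X2):
            c = b * a.conjugate()                  # cross-power spectrum
            phat.append(c / abs(c) if abs(c) > 1e-12 else 0j)
        # inverse transform of the phase-only spectrum; the peak lag is the delay
        csp = [sum(phat[k] * cmath.exp(2j * cmath.pi * k * t / n)
                   for k in range(n)).real for t in range(n)]
        return max(range(n), key=lambda t: csp[t])
    ```

    The phase-only normalization is what makes the method robust in reverberant rooms: it sharpens the correlation peak by discarding magnitude information that reverberation smears.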

  12. Particle Streak Anemometry: A New Method for Proximal Flow Sensing from Aircraft

    NASA Astrophysics Data System (ADS)

    Nichols, T. W.

    Accurate sensing of relative air flow direction from fixed-wing small unmanned aircraft (sUAS) is challenging with existing multi-hole pitot-static and vane systems. Sub-degree direction accuracy is generally not available in such systems, and disturbances to the local flow field, induced by the airframe, introduce an additional error source. An optical imaging approach for making a relative air velocity measurement with high directional accuracy is presented. Optical methods offer the capability to make a proximal measurement in undisturbed air outside of the local flow field without the need to place sensors on vulnerable probes extended ahead of the aircraft. Current imaging flow analysis techniques for laboratory use rely on relatively thin imaged volumes, sophisticated hardware, and intensity thresholding in low-background conditions. A new method is derived and assessed using a particle streak imaging technique that can be implemented with low-cost commercial cameras and illumination systems and can function in imaged volumes of arbitrary depth with complex background signal. The new technique, referred to as particle streak anemometry (PSA) to differentiate it from particle streak velocimetry (which makes a field measurement rather than a single bulk flow measurement), utilizes a modified Canny edge detection algorithm with connected component analysis and principal component analysis to detect streak ends in complex imaging conditions. A linear solution for the air velocity direction is then implemented with a random sample consensus (RANSAC) approach. A single-DOF non-linear, non-convex optimization problem is then solved for the air speed through an iterative approach. The technique was tested through simulation and wind tunnel tests, yielding angular accuracies under 0.2 degrees, superior to the performance of existing commercial systems. 
Air speed error standard deviations varied from 1.6 to 2.2 m/s depending on the implementation. While air speed sensing is secondary to accurate flow direction measurement, the air speed results were in line with commercial pitot-static systems at low speeds.
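    The RANSAC step the method relies on can be illustrated in its simplest form, robustly fitting a line through streak-end points while ignoring outliers (a generic 2-D sketch, not the paper's full linear velocity solution):

    ```python
    import random

    def ransac_line(points, iters=200, tol=0.1, seed=0):
        """Robustly fit y = m*x + b by repeatedly sampling two points,
        hypothesizing a line, and keeping the model with the most inliers."""
        rng = random.Random(seed)
        best_model, best_inliers = None, []
        for _ in range(iters):
            (x1, y1), (x2, y2) = rng.sample(points, 2)
            if x1 == x2:
                continue                      # vertical sample pair; skip
            m = (y2 - y1) / (x2 - x1)
            b = y1 - m * x1
            inliers = [(x, y) for x, y in points
                       if abs(y - (m * x + b)) <= tol]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = (m, b), inliers
        return best_model, best_inliers
    ```

    The consensus criterion is what makes the direction solution tolerant of the spurious streak detections that complex backgrounds inevitably produce.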

  13. Prediction of Long Loops with Embedded Secondary Structure using the Protein Local Optimization Program

    PubMed Central

    Miller, Edward B.; Murrett, Colleen S.; Zhu, Kai; Zhao, Suwen; Goldfeld, Dahlia A.; Bylund, Joseph H.; Friesner, Richard A.

    2013-01-01

    Robust homology modeling to atomic-level accuracy generally requires successful prediction of protein loops containing small segments of secondary structure. Further, as loop prediction advances to success with larger loops, the exclusion of loops containing secondary structure becomes awkward. Here, we extend the applicability of the Protein Local Optimization Program (PLOP) to loops of up to 17 residues in length that contain either helical or hairpin segments. In general, PLOP hierarchically samples conformational space and ranks candidate loops with a high-quality molecular mechanics force field. For loops identified as possessing α-helical segments, we employ an alternative dihedral library composed of (ϕ,ψ) angles commonly found in helices. The alternative library is searched over a user-specified range of residues that define the helical bounds. The source of these helical bounds can be popular secondary structure prediction software or analysis of past loop predictions in which a propensity to form a helix is observed. Owing to the maturity of our energy model, the lowest-energy loop across all experiments can be selected with an accuracy of sub-Ångström RMSD in 80% of cases, 1.0-1.5 Å RMSD in 14% of cases, and poorer than 1.5 Å RMSD in 6% of cases. The effectiveness of our current methods in predicting hairpin-containing loops is explored with hairpins up to 13 residues in length, again reaching an accuracy of sub-Ångström RMSD in 83% of cases, 1.0-1.5 Å RMSD in 10% of cases, and poorer than 1.5 Å RMSD in 7% of cases. Finally, we explore the effect of an imprecise surrounding environment, in which side chains, but not the backbone, are initially in perturbed geometries. In these cases, loops perturbed to 3 Å RMSD from the native environment were restored to their native conformation with sub-Ångström RMSD. PMID:23814507
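    The accuracy figures above are stated as RMSD between predicted and native coordinates. For reference, a minimal RMSD computation over matched coordinate lists (an illustrative helper; structural superposition is assumed to have been done already):

    ```python
    import math

    def rmsd(coords_a, coords_b):
        """Root-mean-square deviation (same units as input, e.g. Angstroms)
        between two equal-length lists of matched (x, y, z) coordinates."""
        assert len(coords_a) == len(coords_b) and coords_a
        total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                    for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
        return math.sqrt(total / len(coords_a))
    ```

    A predicted loop scoring below 1.0 by this measure against the crystal structure falls in the paper's "sub-Ångström" success category.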

  14. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

    We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.

  15. The Effects of Age and Set Size on the Fast Extraction of Egocentric Distance

    PubMed Central

    Gajewski, Daniel A.; Wallin, Courtney P.; Philbeck, John W.

    2016-01-01

    Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers. PMID:27398065
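    The geometric relationship underlying the study, recovering egocentric distance from angular declination below eye level, is a one-liner: d = h / tan(θ). A sketch assuming a flat floor and known eye height:

    ```python
    import math

    def distance_from_declination(eye_height_m, declination_deg):
        """Distance along the floor to an object, from the angle by which it
        lies below eye level: d = h / tan(theta)."""
        return eye_height_m / math.tan(math.radians(declination_deg))
    ```

    Shallower declinations correspond to farther targets, so any misperception of the declination angle maps directly into a distance bias.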

  16. Development of a check sheet for collecting information necessary for occupational safety and health activities and building relevant systems in overseas business places.

    PubMed

    Kajiki, Shigeyuki; Kobayashi, Yuichi; Uehara, Masamichi; Nakanishi, Shigemoto; Mori, Koji

    2016-06-07

    This study aimed to develop an information gathering check sheet to efficiently collect information necessary for Japanese companies to build global occupational safety and health management systems in overseas business places. The study group consisted of 2 researchers with occupational physician careers in a foreign-affiliated company in Japan and 3 supervising occupational physicians who were engaged in occupational safety and health activities in overseas business places. After investigating information and sources of information necessary for implementing occupational safety and health activities and building relevant systems, we conducted information acquisition using an information gathering check sheet in the field, by visiting 10 regions in 5 countries (first phase). The accuracy of the information acquired and the appropriateness of the information sources were then verified in study group meetings to improve the information gathering check sheet. Next, the improved information gathering check sheet was used in another setting (3 regions in 1 country) to confirm its efficacy (second phase), and the information gathering check sheet was thereby completed. The information gathering check sheet was composed of 9 major items (basic information on the local business place, safety and health overview, safety and health systems, safety and health staff, planning/implementation/evaluation/improvement, safety and health activities, laws and administrative organs, local medical care systems and public health, and medical support for resident personnel) and 61 medium items. We relied on the following eight information sources: the internet, company (local business place and head office in Japan), embassy/consulate, ISO certification body, university or other educational institutions, and medical institutions (aimed at Japanese people or at local workers). 
Through multiple study group meetings and a two-phased field survey (13 regions in 6 countries), an information gathering check sheet was completed. We confirmed the possibility that this check sheet would enable the user to obtain necessary information when expanding safety and health activities in a country or region that is new to the user. It is necessary in the future to evaluate safety and health systems and activities using this information gathering check sheet in a local business place in any country in which a Japanese business will be established, and to verify the efficacy of the check sheet by conducting model programs to test specific approaches.

  17. A test of the reward-value hypothesis.

    PubMed

    Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D

    2017-03-01

    Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.

  18. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but is also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  19. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.

  20. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESat satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error, and range measurement error, on the geolocation accuracy of the laser spot is analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and, to satisfy the accuracy requirement for laser control points, a design index for each error source is put forward.
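    As a first-order illustration of such an error budget: at nadir, a pointing error σθ displaces the laser footprint by roughly H·σθ, and independent horizontal error sources add in quadrature. This toy model uses our own parameter names and ignores the full attitude/geometry treatment of the paper's rigorous model:

    ```python
    import math

    def geoloc_error_budget(altitude_m, pointing_err_arcsec,
                            platform_pos_err_m, range_err_m):
        """First-order geolocation errors for a nadir-pointing laser altimeter.
        Horizontal: sqrt((H * sigma_theta)^2 + sigma_pos^2);
        vertical: approximately the range measurement error at nadir."""
        sigma_theta = math.radians(pointing_err_arcsec / 3600.0)
        horizontal = math.sqrt((altitude_m * sigma_theta) ** 2 +
                               platform_pos_err_m ** 2)
        return horizontal, range_err_m

    # e.g. a 1.5 arcsec pointing error at 600 km altitude alone shifts the
    # footprint by roughly 4.4 m on the ground.
    ```

    The quadratic combination also shows why pointing error dominates the horizontal budget at orbital altitudes: it is amplified by H, while positioning and range errors are not.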
