Science.gov

Sample records for accurate event locations

  1. Bayesian Multiple-Event Location

    2010-03-30

    Bayesloc is a statistical model of the multiple-event seismic location process, including event hypocenters, corrections to model-based travel-time predictions, precision assessments for measured phase arrival times, and phase labels that indicate the ray path of each phase.

  2. Accurate tremor locations from coherent S and P waves

    NASA Astrophysics Data System (ADS)

    Armbruster, John G.; Kim, Won-Young; Rubin, Allan M.

    2014-06-01

    Nonvolcanic tremor is an important component of the slow slip processes which load faults from below, but accurately locating tremor has proven difficult because tremor rarely contains clear P or S wave arrivals. Here we report the observation of coherence in the shear and compressional waves of tremor at widely separated stations, which allows us to detect and accurately locate tremor events. An event detector using data from two stations sees the onset of tremor activity in the Cascadia tremor episodes of February 2003, July 2004, and September 2005 and confirms the previously reported south-to-north migration of the tremor. Event detectors using data from three and four stations give S and P arrival times of high accuracy. The hypocenters of the tremor events fall at depths of ~30 to ~40 km and define a narrow plane dipping at a shallow angle to the northeast, consistent with the subducting plate interface. The S wave polarizations and P wave first motions define a source mechanism in agreement with the northeast convergence seen in geodetic observations of slow slip. Tens of thousands of locations determined by constraining the events to the plate interface show tremor sources highly clustered in space, with a strongly similar pattern of sources in the three episodes examined. The deeper sources generate tremor in minor episodes as well. The extent to which the narrow bands of tremor sources overlap between the three major episodes suggests relative epicentral location errors as small as 1-2 km.

  3. Accurate source location from waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimate. For the shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust over wide ranges of random noise in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
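
    The grid-search-with-misfit idea in this abstract can be reduced to a toy sketch. The code below is an illustration only, assuming a homogeneous velocity model and a travel-time (not waveform) misfit, with the unknown origin time removed by demeaning; the paper's actual method uses 3-D strain Green's tensors and full P/P-coda waveforms.

```python
import numpy as np

def grid_search_locate(stations, t_obs, v=6.0, extent=10.0, n=41):
    """Brute-force epicenter search: for each candidate point, predict
    arrival times in a homogeneous medium (speed v, km/s) and keep the
    point with the smallest least-squares misfit. Demeaning observed and
    predicted times removes the unknown origin time, which shifts all
    arrivals equally."""
    xs = np.linspace(-extent, extent, n)
    best, best_cost = None, np.inf
    for x in xs:
        for y in xs:
            d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            t_pred = d / v
            r = (t_obs - t_obs.mean()) - (t_pred - t_pred.mean())
            cost = float(np.sum(r ** 2))
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best, best_cost
```

    In the paper, the misfit minimum also comes with a 95% confidence region; here only the best grid node is returned.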

  4. Automated microseismic event location using Master-Event Waveform Stacking.

    PubMed

    Grigoli, Francesco; Cesca, Simone; Krieger, Lars; Kriegerowski, Marius; Gammaldi, Sergio; Horalek, Josef; Priolo, Enrico; Dahm, Torsten

    2016-01-01

    Accurate and automated locations of microseismic events are desirable for many seismological and industrial applications. The analysis of microseismicity is particularly challenging because of weak seismic signals with low signal-to-noise ratio. Traditional location approaches rely on automated picking, based on individual seismograms, and make no use of the coherency information between signals at different stations. This strong limitation has been overcome by full-waveform location methods, which exploit the coherency of waveforms at different stations and improve the location robustness even in the presence of noise. However, the performance of these methods strongly depends on the accuracy of the adopted velocity model, which is often quite rough; inaccurate models result in large location errors. We present an improved waveform stacking location method based on source-specific station corrections. Our method inherits the advantages of full-waveform location methods while strongly mitigating the dependency on the accuracy of the velocity model. With this approach, the influence of an inaccurate velocity model on the results is restricted to the estimation of travel times solely within the seismogenic volume, rather than along the entire source-receiver path. Finally, we successfully applied our new method to a realistic synthetic dataset as well as real data. PMID:27185465
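
    The full-waveform stacking idea (migrate characteristic functions along candidate moveouts and pick the brightest stack) can be sketched minimally. This is not the authors' Master-Event implementation; the moveout table, sampling interval, and envelope inputs here are illustrative assumptions.

```python
import numpy as np

def stack_locate(envelopes, dt, moveouts):
    """Delay-and-stack location sketch: for each candidate source (one
    row of `moveouts`, a predicted delay in seconds per station), shift
    each station's envelope back by its predicted delay, stack, and
    return the candidate whose stack peaks highest."""
    best_src, best_peak = -1, -np.inf
    for i, row in enumerate(moveouts):
        stack = np.zeros_like(envelopes[0])
        for env, tau in zip(envelopes, row):
            stack += np.roll(env, -int(round(tau / dt)))
        peak = float(stack.max())
        if peak > best_peak:
            best_src, best_peak = i, peak
    return best_src, best_peak
```

    The master-event correction in the paper would, in effect, adjust each station's moveout by a source-specific residual measured on a well-located reference event.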

  5. Automated microseismic event location using Master-Event Waveform Stacking

    NASA Astrophysics Data System (ADS)

    Grigoli, Francesco; Cesca, Simone; Krieger, Lars; Kriegerowski, Marius; Gammaldi, Sergio; Horalek, Josef; Priolo, Enrico; Dahm, Torsten

    2016-05-01

    Accurate and automated locations of microseismic events are desirable for many seismological and industrial applications. The analysis of microseismicity is particularly challenging because of weak seismic signals with low signal-to-noise ratio. Traditional location approaches rely on automated picking, based on individual seismograms, and make no use of the coherency information between signals at different stations. This strong limitation has been overcome by full-waveform location methods, which exploit the coherency of waveforms at different stations and improve the location robustness even in the presence of noise. However, the performance of these methods strongly depends on the accuracy of the adopted velocity model, which is often quite rough; inaccurate models result in large location errors. We present an improved waveform stacking location method based on source-specific station corrections. Our method inherits the advantages of full-waveform location methods while strongly mitigating the dependency on the accuracy of the velocity model. With this approach, the influence of an inaccurate velocity model on the results is restricted to the estimation of travel times solely within the seismogenic volume, rather than along the entire source-receiver path. Finally, we successfully applied our new method to a realistic synthetic dataset as well as real data.

  6. Automated microseismic event location using Master-Event Waveform Stacking

    PubMed Central

    Grigoli, Francesco; Cesca, Simone; Krieger, Lars; Kriegerowski, Marius; Gammaldi, Sergio; Horalek, Josef; Priolo, Enrico; Dahm, Torsten

    2016-01-01

    Accurate and automated locations of microseismic events are desirable for many seismological and industrial applications. The analysis of microseismicity is particularly challenging because of weak seismic signals with low signal-to-noise ratio. Traditional location approaches rely on automated picking, based on individual seismograms, and make no use of the coherency information between signals at different stations. This strong limitation has been overcome by full-waveform location methods, which exploit the coherency of waveforms at different stations and improve the location robustness even in the presence of noise. However, the performance of these methods strongly depends on the accuracy of the adopted velocity model, which is often quite rough; inaccurate models result in large location errors. We present an improved waveform stacking location method based on source-specific station corrections. Our method inherits the advantages of full-waveform location methods while strongly mitigating the dependency on the accuracy of the velocity model. With this approach, the influence of an inaccurate velocity model on the results is restricted to the estimation of travel times solely within the seismogenic volume, rather than along the entire source-receiver path. Finally, we successfully applied our new method to a realistic synthetic dataset as well as real data. PMID:27185465

  7. Accurate source location from P waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.

    2015-12-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with a 3D strain Green's tensor database to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by a 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as an a posteriori error estimate. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust over the ranges of random noise in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.

  8. Nonexposure accurate location K-anonymity algorithm in LBS.

    PubMed

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existent cloaking algorithms, do not need all the users to report their locations all the time, and can generate smaller ASRs. PMID:24605060
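
    The grid-ID-based cloaking idea can be illustrated on a 1-D grid of cells. This is a simplified sketch, not the paper's algorithm: cell IDs are plain integers and the ASR grows symmetrically until it covers at least K reported users, so the anonymizer never handles exact coordinates.

```python
from collections import Counter

def cloak_1d(user_cells, query_cell, k):
    """Nonexposure cloaking sketch on a 1-D grid: the anonymizer sees
    only integer cell IDs, never coordinates. Grow an interval of cells
    around the querier's cell until it covers at least k reported users,
    then return that interval as the anonymous spatial region (ASR)."""
    counts = Counter(user_cells)
    lo = hi = query_cell
    covered = counts[query_cell]
    while covered < k:
        lo, hi = lo - 1, hi + 1  # expand the region one cell on each side
        covered += counts[lo] + counts[hi]
    return lo, hi
```

    A real 2-D scheme would grow the region over neighboring grid cells (or quadtree siblings), but the privacy property is the same: only cell IDs cross the trust boundary.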

  9. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existent cloaking algorithms, do not need all the users to report their locations all the time, and can generate smaller ASRs. PMID:24605060

  10. Accurate eye center location through invariant isocentric patterns.

    PubMed

    Valenti, Roberto; Gevers, Theo

    2012-09-01

    Locating the center of the eyes allows valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance have been proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale-space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery. PMID:22813958

  11. It's All about Location, Location, Location: Children's Memory for the "Where'' of Personally Experienced Events

    ERIC Educational Resources Information Center

    Bauer, Patricia J.; Doydum, Ayzit O.; Pathman, Thanujeni; Larkina, Marina; Guler, O. Evren; Burch, Melissa

    2012-01-01

    Episodic memory is defined as the ability to recall specific past events located in a particular time and place. Over the preschool and into the school years, there are clear developmental changes in memory for when events took place. In contrast, little is known about developmental changes in memory for where events were experienced. In the…

  12. Accurate Tremor Locations in Japan from Coherent S-Waves

    NASA Astrophysics Data System (ADS)

    Armbruster, J. G.

    2014-12-01

    The tremor detectors developed for accurately locating tectonic tremor in Cascadia [Armbruster et al., JGR 2014] have been applied to data from the HINET seismic network in Japan. The best results were obtained in the Tokai region with stations ASU, ASH, and TYE, which have relatively close spacing (11-18 km). From the daily epicentral distributions of tremor on the HINET web site, 330 days with active tremor near these stations were identified during 2004-2014. The detector sees numbers of detections per day comparable to minor tremor episodes in Cascadia. Major tremor episodes in Cascadia are associated with geodetic signals stronger than those seen in Japan. If the tremor is located by constraining it to the plate interface, a pattern of persistent sources is seen, with some intense sources. This is similar to what was seen in Cascadia. In southwest Shikoku, 139 days with tremor were identified. Stations UWA, OOZ, and IKT see tremor with persistent patterns and strong sources, but with approximately one fifth as many detections per day on active days compared to ASU-ASH-TYE. The web site tremor distributions show activity here as strong as in Tokai. We believe the smaller number of detections in Shikoku is primarily the result of wider station spacing, 19-30 km, than in Tokai, although there may be other factors. Yabe and Ide [EPS 2013] detect and locate tremor in Kyushu on July 17-18, 2005 and December 4-6, 2008. A detector with stations NRA, SUK, and KTM, with station spacing of 21-22 km, sees tremor which resembles minor episodes in Cascadia. The relative arrival times are consistent with their locations. We conclude that the methods developed in Cascadia will work in Japan, but the typical spacing of HINET stations, ~20 km, is greater than the optimum distance found in analysis of data from Cascadia, 8 to 15 km.

  13. A Bayesian Method to Apply the Results of Multiple-Event Seismic Location to a Subsequent Event

    NASA Astrophysics Data System (ADS)

    Johannesson, G.; Myers, S. C.

    2014-12-01

    BayesLoc is a Bayesian multiple-event seismic locator that uses a Markov chain Monte Carlo (MCMC) algorithm to sample possible seismic hypocenters, travel-time corrections, and the precision of observed arrival data (absolute picks and differential times based on cross-correlated waveforms). By simultaneously locating multiple seismic events, regional biases in the assumed travel-time model (e.g., ak135) can be estimated and corrected for, and data from different seismic stations and phases can be weighted to reflect their accuracy/precision for an event cluster. As such, multiple-event locators generally yield more accurate locations than single-event locators, which lack the data to resolve the underlying travel-time model and adaptively "weight" the arrival data differently for each station and phase. On the other hand, single-event locators are computationally more attractive, making them more suitable for rapid (real-time) location of seismic activity. We present a novel approach that approximates the location accuracy of the BayesLoc multiple-event analysis at a computational cost comparable to BayesLoc single-event analysis. The proposed approach consists of two steps: a precomputed multiple-event training analysis and subsequent real-time, single-event location for new events. The precomputed training analysis consists of carrying out a multiple-event BayesLoc run on a given target event cluster, yielding a posterior sample of travel-time corrections and weights. Given a new event in the vicinity of the training cluster, a BayesLoc single-event run is carried out which samples the travel-time corrections and weights from the multiple-event training run. Hence, it has all the benefits of the multiple-event run at the cost of a single-event run. We present the theoretical underpinnings of the new approach and compare event location results for the full multiple-event, single-event, and new approaches.
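
    A toy version of the Bayesian sampling step can make the idea concrete. The sketch below is a plain Metropolis sampler over epicenter only, with a homogeneous velocity model and Gaussian pick errors (all illustrative assumptions, far simpler than BayesLoc's joint model over corrections and weights):

```python
import numpy as np

def mcmc_locate(stations, t_obs, v=6.0, sigma=0.1, n_iter=20000, seed=0):
    """Toy Metropolis sampler for an epicenter posterior: Gaussian pick
    errors of width sigma, homogeneous velocity v, and the unknown
    origin time removed by demeaning. Returns post-burn-in samples from
    which a mean location and credible intervals can be computed."""
    rng = np.random.default_rng(seed)

    def log_like(x, y):
        d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        t_pred = d / v
        r = (t_obs - t_obs.mean()) - (t_pred - t_pred.mean())
        return -0.5 * float(np.sum((r / sigma) ** 2))

    cur = np.zeros(2)
    cur_ll = log_like(*cur)
    samples = []
    for _ in range(n_iter):
        prop = cur + rng.normal(scale=0.5, size=2)
        prop_ll = log_like(*prop)
        if np.log(rng.random()) < prop_ll - cur_ll:  # Metropolis accept
            cur, cur_ll = prop, prop_ll
        samples.append(cur.copy())
    return np.array(samples[n_iter // 2:])  # discard burn-in
```

    The two-step approach in the abstract would, in effect, replace the fixed v and sigma here with draws from the training run's posterior over corrections and weights.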

  14. Multiple event location analysis of aftershock sequences in the Pannonian basin

    NASA Astrophysics Data System (ADS)

    Bekesi, Eszter; Sule, Balint; Bondar, Istvan

    2016-04-01

    Accurate seismic event location is crucial to understanding tectonic processes such as crustal faulting, which is most commonly investigated by studying seismic activity. Location errors can be significantly reduced using multiple-event location methods. We applied the double-difference method to relocate the earthquake that occurred near Oroszlány and its 200 aftershocks in order to identify the geometry of the related fault. We used the extended ISC location algorithm, iLoc, to determine absolute single-event locations for the Oroszlány aftershock sequence and applied the double-difference algorithm to the new hypocenters. To improve location precision, we added differential times from waveform cross-correlation to the multiple-event location process to increase the accuracy of the arrival-time readings. We also tested the effect of various local 1-D velocity models on the results. We compared hypoDD results for bulletin and iLoc hypocenters to investigate the effect of initial hypocenter parameters on the relocation process. We show that hypoDD collapses the initial, rather diffuse locations into a smaller cluster, and the vertical cross-sections show sharp images of seismicity. Unsurprisingly, the combined use of catalog and cross-correlation data sets provides the most accurate locations. Some of the relocated events in the cluster are of ground-truth quality, with a location accuracy of 5 km or better. Having achieved accurate locations for the event cluster, we are able to resolve the fault plane ambiguity in the moment tensor solutions and determine the accurate strike of the fault.
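
    The double-difference principle can be written down compactly. The sketch below, under an assumed homogeneous velocity model and equal origin times (both simplifying assumptions), just evaluates the residuals that hypoDD-style relocation drives toward zero; it is not the hypoDD inversion itself.

```python
import numpy as np

def dd_residuals(stations, events, t_obs, v=6.0):
    """Double-difference residuals sketch: for each station and each
    event pair, subtract the predicted arrival-time difference from the
    observed one. Relative relocation perturbs event positions to shrink
    these residuals; model errors along the shared outer ray path
    largely cancel in the differences."""
    res = []
    n_ev = len(events)
    for s, sta in enumerate(stations):
        d = np.hypot(events[:, 0] - sta[0], events[:, 1] - sta[1])
        t_pred = d / v
        for j in range(n_ev):
            for k in range(j + 1, n_ev):
                res.append((t_obs[s, j] - t_obs[s, k]) - (t_pred[j] - t_pred[k]))
    return np.array(res)
```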

  15. Using epicenter location to differentiate events from natural background seismicity

    SciTech Connect

    Myers, S C; Walter, W R

    1999-07-26

    Efforts to more effectively monitor the Comprehensive Nuclear-Test-Ban Treaty (commonly referred to as the CTBT) include research into methods of seismic discrimination. The most common seismic discriminants exploit differences in seismic amplitude for differing source types. Amplitude discriminants are quite effective when wave-propagation (a.k.a. path) effects are properly accounted for. However, because path effects can be exceedingly complex, path calibration is often accomplished empirically by spatially interpolating amplitude characteristics for a set of calibration earthquakes with techniques like Bayesian kriging. As a result, amplitude discriminants can be highly effective when natural seismicity provides sufficient event coverage to characterize a region. However, amplitude discrimination can become less effective for events that are far from historical (path-calibration) events. It is intuitive that events occurring at a distance from historical seismicity patterns are inherently suspect. However, quantifying the degree to which a particular event is unexpected could be of great utility in CTBT monitoring. Epicenter location is commonly used as a qualitative discriminant. For instance, if a seismic event is located in the deep ocean, then the event is generally considered to be an earthquake. Such qualitative uses of seismic location have great utility; however, a quantitative method to differentiate events from the natural pattern of seismicity could significantly advance the applicability of location as a discriminant for source type. Clustering of earthquake epicenters is the underlying aspect of earthquake seismicity that allows for an epicenter-based discriminant, and we explore the use of fractal characterization of clustering to describe seismicity patterns. We then evaluate the likelihood that an event at any given location is drawn from the background population.
The use of this technique can help to identify events that are inconsistent
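
    A box-counting estimate of fractal dimension is one common way to quantify epicenter clustering of the kind discussed above; the sketch below is a generic illustration, not the authors' characterization.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Box-counting sketch for epicenter clustering: count occupied grid
    cells N(s) at each cell size s; the fractal dimension is the slope
    of log N(s) versus log(1/s). Tightly clustered epicenters yield a
    dimension well below that of space-filling background seismicity."""
    pts = np.asarray(points, dtype=float)
    counts = [len(set(map(tuple, np.floor(pts / s).astype(int)))) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)
```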

  16. Location of Tremor and Long Period Events Using Seismic Amplitudes

    NASA Astrophysics Data System (ADS)

    Battaglia, J.; Battaglia, J.; Ferrazzini, V.; Okubo, P. G.

    2001-12-01

    Tremor and Long Period (LP) events are of particular interest for understanding the behavior of volcanoes, as it is assumed that they directly involve fluids in their source mechanisms. However, these events are usually difficult or impossible to locate using traditional arrival-time methods, because of their emergent onsets or because they are stationary for long periods. While techniques have been proposed using seismic arrays, the task remains problematic using data from classical short-period volcano monitoring networks. A method based on seismic amplitudes was developed at Piton de la Fournaise (Réunion Island) for locating tremor, LP events, and rockfalls. For each event, seismic amplitudes are corrected for site effects at each station using coda amplification factors. The spatial amplitude distributions are usually smooth and coherent, and the decay of amplitude as a function of distance can be used to locate the source. On Réunion, this method was applied to locate the source(s) of eruption tremor. Those sources are usually found at shallow depth and close to the eruptive vents. One application of this characteristic is the possibility of using eruption tremor to locate eruptive fissures at the beginning of eruptions. We apply this technique in Hawaii to locate LP events at Kilauea volcano. We calculated coda amplification factors for all stations of the network, and coherent, smooth amplitude distributions are likewise obtained after correcting for site effects. We located about 150 events which occurred in January 1998 during an increased phase of LP activity. This seismicity, which peaked on January 15, was related to a surge of magma that reached the Pu`u`O`o vent on January 14, following a rapid inflation of Kilauea's summit. The amplitude method provides a new image of the LP activity: the events appear to cluster in a single group, whereas they are much more scattered when located using arrival times.
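
    The amplitude-decay location idea can be sketched with geometrical spreading alone (A = A0/r, site corrections applied, no anelastic attenuation — all simplifying assumptions relative to the coda-amplification method described above):

```python
import numpy as np

def amplitude_locate(stations, amps, site_corr, extent=10.0, n=81):
    """Amplitude-decay location sketch: divide each amplitude by its
    station's site amplification, then grid-search the source whose
    geometrical-spreading prediction A0/r best fits the corrected
    amplitudes. Working in log space and demeaning removes the unknown
    source amplitude A0."""
    a = np.log(np.asarray(amps) / np.asarray(site_corr))
    xs = np.linspace(-extent, extent, n)
    best, best_cost = None, np.inf
    for x in xs:
        for y in xs:
            r = np.hypot(stations[:, 0] - x, stations[:, 1] - y) + 1e-6
            pred = -np.log(r)
            res = (a - a.mean()) - (pred - pred.mean())
            cost = float(np.sum(res ** 2))
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best
```

    This works even for emergent, pick-less signals, which is exactly why amplitude methods suit tremor and LP events.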

  17. Hyperbola-generator for location of aperiodic events

    NASA Technical Reports Server (NTRS)

    Paucker, H. R.; Spitzer, C. R.; Vann, D. S.

    1970-01-01

    A plotting device, when used in conjunction with three or more detectors and a local receiver and recorder, can quickly pinpoint the location of any aperiodic event. Operation requires minimal training and is readily adapted to the field. Mechanical error in the device prototype is less than or equal to 3 percent.

  18. Development of an accurate transmission line fault locator using the global positioning system satellites

    NASA Technical Reports Server (NTRS)

    Lee, Harry

    1994-01-01

    A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. This fault locator system consists of traveling-wave detectors located at key substations which detect and time-tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element to the success of this system. This fault locator system derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
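
    The two-ended traveling-wave timing behind this locator reduces to one formula. The propagation speed below is an assumed value (roughly 0.97 times the speed of light, typical for overhead lines); the abstract does not give one.

```python
def fault_distance(t_a, t_b, line_len_km, v_km_per_s=290000.0):
    """Two-ended traveling-wave fault location: GPS-synchronized units at
    substations A and B time-tag the wavefront launched at the fault.
    With the fault d km from A on a line of length L, t_a = d/v and
    t_b = (L - d)/v, so d = (L + v * (t_a - t_b)) / 2. The speed v here
    is an assumed value, about 0.97c."""
    return (line_len_km + v_km_per_s * (t_a - t_b)) / 2.0
```

    At this speed the wave covers about 0.29 km per microsecond, so the quoted +/-300 m objective implies roughly microsecond-level time tagging, which is what GPS timing provides.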

  19. Event location in the Middle East and North Africa

    SciTech Connect

    Schultz, C.A.; Myers, S.C.; Ruppert, S.D.

    1997-07-01

    The Lawrence Livermore National Laboratory (LLNL) CTBT R&D program has made significant progress towards improving the ability of the IMS seismic network to locate small-magnitude events in the Middle East and North Africa (ME/NA). Given that high-grade ground truth (such as known explosions) has been difficult to obtain in these regions, we have placed a significant effort towards the development of a teleseismically constrained seismic database that provides event locations good to within 20 km. This data set is used to make an initial evaluation of the effectiveness of calibration on the proposed seismic IMS network in the ME/NA. Utilizing a surrogate IMS regional network in the Middle East, we find that when a seismic event lies within the footprint of the recording network, the uncalibrated event locations are good to within about 25 km of the teleseismically constrained (TC) location. Using region-specific static station corrections further reduces this difference to about 20 km. To obtain further improvement in location accuracy, we have used the modified kriging technique developed by SNL to interpolate new travel-time corrections. We compare this technique with other robust linear interpolation techniques with the goal of enhancing the estimation of travel-time corrections. This is important for TC events, which we find can have large uncorrelated uncertainties. Finally, we are making a large effort to incorporate LLNL analyst picks on primary and secondary phases and to develop azimuth and slowness estimates from current IMS arrays to improve/supplement the NEIC picks.

  20. Joint Determination of Event Location and Magnitude from Historical Seismic Damage Records

    NASA Astrophysics Data System (ADS)

    Park, S.; Hong, T. K.

    2014-12-01

    Large earthquakes have long recurrence intervals, so it is crucial to consider long-term seismicity for a proper assessment of potential seismic hazards, and historical earthquake records are required to complement the instrumental seismicity record. Historical earthquakes survive only as seismic damage descriptions, with limited accuracy in source parameters including event location and size, so it is important to determine the epicenters and magnitudes of historical earthquakes accurately. A novel method to determine the event location and magnitude from historical seismic damage records is introduced. Seismic damage is typically proportional to the event magnitude and inversely proportional to the distance. This feature allows us to deduce the event magnitude and location from the spatial distribution of seismic intensities. However, the magnitude and distance trade off against each other, inhibiting unique determination of event magnitude and location. The Gutenberg-Richter frequency-magnitude relationship is additionally considered to constrain the source parameters, and is assumed to be consistent between instrumental and historical seismicity. The event location and magnitude consistent with the rate of event occurrence given by the Gutenberg-Richter frequency-magnitude relationship are selected. The accuracy of the method is tested on synthetic data sets, and its validity is examined. The synthetic tests demonstrate high accuracy, and the method is then applied to historical seismic damage records, which allows us to calibrate the source parameters of historical earthquakes.
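
    The magnitude-distance trade-off described here can be made concrete with an illustrative intensity attenuation relation; the coefficients below are placeholders, not the paper's calibration.

```python
import numpy as np

def intensity_misfit(sites, intensities, x, y, mag, a=1.5, b=3.0, c=1.0):
    """Score a candidate (epicenter, magnitude) against felt intensities
    using an illustrative attenuation relation I = a*M - b*log10(r) - c.
    The coefficients a, b, c are placeholders. Because magnitude and
    distance trade off, many (x, y, mag) combinations fit comparably;
    the abstract breaks this tie with the Gutenberg-Richter
    frequency-magnitude relationship."""
    r = np.hypot(sites[:, 0] - x, sites[:, 1] - y) + 1e-3  # avoid log10(0)
    pred = a * mag - b * np.log10(r) - c
    return float(np.sum((np.asarray(intensities) - pred) ** 2))
```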

  1. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  2. Hydrogen atoms can be located accurately and precisely by x-ray crystallography

    PubMed Central

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M.; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-01-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A–H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A–H bond lengths with those from neutron measurements for A–H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  3. Accuracy of teleseismic event locations in the Middle East and North Africa

    SciTech Connect

    Sweeney, J.J.

    1996-12-04

    Seismic characterization at the regional level requires accurate determination of phases and travel times for many combinations of stations and events. An important consideration in the process is the accuracy of event locations. The LLNL Comprehensive Test Ban Treaty Research Program is currently working on data from the Middle East and North Africa, where seismic station coverage is relatively sparse and "ground truth" seismic source information is practically nonexistent. In this report the investigator uses aftershock studies as a source of local ground truth. He evaluates teleseismic location accuracy by comparing hypocenters determined by local networks with those determined teleseismically [e.g., by the International Seismological Centre (ISC) and the National Earthquake Information Center (NEIC)]. Epicentral locations, origin times, and depth determinations of events from three aftershock studies (Algeria, Armenia, and Iran) and one local network study (Iran) are compared with ISC and NEIC locations for the same events. The key parameter for the ISC locations is the number of observations used in the location determination. For fewer than 40-50 observations, the agreement rapidly diminishes and ISC locations can differ from local determinations by as much as 80 km or more. Events in Iran show a distinct bias of ISC location errors toward the northeast; events in Armenia and Algeria show no directional bias. This study shows that only events with ISC mb > 4.4-4.5 or NEIS mb > 4.7-4. should be used for compiling travel-time information from teleseismic bulletins in the Middle East/North Africa region when locations from the NEIC and ISC bulletins are used.
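
Comparing a teleseismic bulletin epicenter against a local-network epicenter reduces to a great-circle distance. A minimal haversine sketch, with purely illustrative coordinates (not taken from the study):

```python
from math import radians, sin, cos, asin, sqrt

def epicenter_offset_km(lat1, lon1, lat2, lon2):
    """Great-circle separation (haversine formula) of two epicenters, in km."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    h = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(h))

# Illustrative only: a 0.5 degree latitude shift is roughly 56 km, i.e. the
# same order as the 80 km mislocations reported for poorly constrained
# ISC solutions.
offset = epicenter_offset_km(36.0, 5.0, 36.5, 5.0)
```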

  4. Accurate location of nuclear explosions at Azgir, Kazakhstan, from satellite images and seismic data: Implications for monitoring decoupled explosions

    NASA Astrophysics Data System (ADS)

    Sykes, Lynn R.; Deng, Jishu; Lyubomirskiy, Paul

    1993-09-01

    The 10 largest tamped nuclear explosions detonated by the Former Soviet Union in and near two salt domes near Azgir were relocated using seismic data and the locations of shot points on a SPOT satellite image taken in 1988. Many of the shot points are clearly recognized on the satellite image and can be located with an accuracy of 60 m even though testing was carried out at those points many years earlier, i.e., between 1966 and 1979. On-site inspections and a local seismic monitoring network, combined with our accurate locations of previous explosions, would ensure that any cavities that remain standing from those events could not be used for undetected decoupled nuclear testing down to a very small yield. Since the Azgir area, like much of the Pre-Caspian depression, is arid, it would not be a suitable place for constructing large cavities in salt by solution mining and then using them for clandestine nuclear testing.

  5. Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T; Gibbons, S J; Ringdal, F; Harris, D B

    2007-02-09

    The principal objective of this two-year study is to develop and test a new advanced, automatic approach to seismic detection/location using array processing. We address a strategy to obtain significantly improved precision in the location of low-magnitude events compared with current fully-automatic approaches, combined with a low false alarm rate. We have developed and evaluated a prototype automatic system which uses as a basis regional array processing with fixed, carefully calibrated, site-specific parameters in conjunction with improved automatic phase onset time estimation. We have in parallel developed tools for Matched Field Processing for optimized detection and source-region identification of seismic signals. This narrow-band procedure aims to mitigate some of the causes of difficulty encountered using the standard array processing system, specifically complicated source-time histories of seismic events and shortcomings in the plane-wave approximation for seismic phase arrivals at regional arrays.

  6. Leisure and Pleasure: Science events in unusual locations

    NASA Astrophysics Data System (ADS)

    Bultitude, Karen; Margarida Sardo, Ana

    2012-12-01

    Building on concepts relating to informal science education, this work compares science-related activities which successfully engaged public audiences at three different 'generic' locations: a garden festival, a public park, and a music festival. The purpose was to identify what factors contribute to the perceived success of science communication activities occurring within leisure spaces. This article reports the results of 71 short (2-3 min) structured interviews with public participants at the events, and 18 structured observation sessions, demonstrating that the events were considered both novel and interesting by the participants. Audience members were found to perceive both educational and affective purposes from the events. Three key elements were identified as contributing to the success of the activities across the three 'generic venues': the informality of the surroundings, the involvement of 'real' scientists, and the opportunity to re-engage participants with scientific concepts outside formal education.

  7. Structural monitoring for rare events in remote locations

    NASA Astrophysics Data System (ADS)

    Hale, J. M.

    2005-01-01

    A structural monitoring system has been developed for use on high value engineering structures, which is particularly suitable for use in remote locations where rare events such as accidental impacts, seismic activity or terrorist attack might otherwise go undetected. The system comprises a low power intelligent on-site data logger and a remote analysis computer that communicate with one another using the internet and mobile telephone technology. The analysis computer also generates e-mail alarms and maintains a web page that displays detected events in near real-time to authorised users. The application of the prototype system to pipeline monitoring is described in which the analysis of detected events is used to differentiate between impacts and pressure surges. The system has been demonstrated successfully and is ready for deployment.

  8. Absolute GPS Time Event Generation and Capture for Remote Locations

    NASA Astrophysics Data System (ADS)

    HIRES Collaboration

    The HiRes experiment operates fixed-location and portable lasers at remote desert locations to generate calibration events. One physics goal of HiRes is to search for unusual showers. These may appear similar to upward or horizontally pointing laser tracks used for atmospheric calibration. It is therefore necessary to remove all of these calibration events from the HiRes detector data stream in a physics-blind manner. A robust and convenient "tagging" method is to generate the calibration events at precisely known times. To facilitate this tagging method we have developed the GPSY (Global Positioning System YAG) module. It uses a GPS receiver, an embedded processor, and additional timing logic to generate laser triggers at arbitrary programmed times and frequencies with better than 100 ns accuracy. The GPSY module has two trigger outputs (one microsecond resolution) to trigger the laser flash-lamp and Q-switch and one event capture input (25 ns resolution). The GPSY module can be programmed either by a front-panel menu-based interface or by a host computer via an RS232 serial interface. The latter also allows for computer logging of generated and captured event times. Details of the design and the implementation of these devices will be presented. 1. Motivation: Air showers represent a small fraction, much less than a percent, of the total High Resolution Fly's Eye data sample. The bulk of the sample is calibration data. Most of this calibration data is generated by two types of systems that use lasers. One type sends light directly to the detectors via optical fibers to monitor detector gains (Girard 2001). The other sends a beam of light into the sky, and the scattered light that reaches the detectors is used to monitor atmospheric effects (Wiencke 1998). It is important that these calibration events be cleanly separated from the rest of the sample both to provide a complete set of monitoring information, and more
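
The time-tagging idea above can be sketched in a few lines. This is a hypothetical illustration, not HiRes code: detector events whose GPS timestamps fall within a small window of a programmed laser shot are flagged as calibration and removed from the physics stream. The sub-microsecond window is motivated by the quoted 100 ns trigger accuracy.

```python
import bisect

def split_calibration(event_times, laser_times, window=5e-7):
    """Partition event times (s) into (physics, calibration) lists by
    proximity to the nearest programmed laser shot time."""
    laser_times = sorted(laser_times)
    physics, calibration = [], []
    for t in event_times:
        i = bisect.bisect_left(laser_times, t)
        # nearest shots are the neighbors of the insertion point
        near = [laser_times[j] for j in (i - 1, i) if 0 <= j < len(laser_times)]
        if near and min(abs(t - s) for s in near) <= window:
            calibration.append(t)
        else:
            physics.append(t)
    return physics, calibration

# two laser shots at 10 s and 11 s; two events land within 500 ns of them
physics, cal = split_calibration(
    event_times=[10.0000001, 10.5, 11.0000004, 12.3],
    laser_times=[10.0, 11.0],
)
```

Because only the shot schedule is consulted, the separation stays blind to the physics content of each event.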

  9. Automated seismic event location by arrival time stacking: Applications to local and micro-seismicity

    NASA Astrophysics Data System (ADS)

    Grigoli, F.; Cesca, S.; Braun, T.; Philipp, J.; Dahm, T.

    2012-04-01

    Locating seismic events is one of the oldest problems in seismology. In microseismicity applications, when the number of events is very large, it is not possible to locate earthquakes manually, and automated location procedures must be established. Automated seismic event location at different scales is important in several application areas, including mining monitoring, reservoir geophysics, and early warning systems, where locations are needed to start rescue operations rapidly. Locating and mapping microearthquakes or acoustic emission sources in mining environments is important for monitoring mine stability. Mapping fractures through the microseismicity distribution inside hydrocarbon reservoirs is needed to find areas with higher permeability and enhance oil production. In the last 20 years a large number of picking algorithms have been developed to locate seismic events automatically. While P onsets can now be accurately picked using automatic routines, the automatic picking of later seismic phases (including the S onset) is still problematic, limiting location performance. In this work we present a picking-free location method based on the use of Short-Term-Average/Long-Term-Average (STA/LTA) traces at different stations as observed data. For different locations and origin times, the observed STA/LTA traces are stacked along the travel-time surface corresponding to the selected hypocentre. Iterating this procedure on a three-dimensional grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatio-temporal coordinates of the seismic event. We tested our methodology on synthetic data, simulating different environments and network geometries. Finally, we apply our method to real datasets related to microseismic activity in mines and earthquake swarms in Italy. This work has been funded by the German BMBF "Geotechnologien" project MINE (BMBF03G0737A).
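
The stacking idea can be sketched with a toy migration-style search. This is a heavily simplified illustration, not the authors' code: a 2-D grid, a constant velocity, P energy only, synthetic characteristic functions with injected onsets, and the origin time assumed known (the real method scans origin time as a fourth grid dimension).

```python
import numpy as np

# Stack each station's STA/LTA-like characteristic function along the
# predicted travel-time moveout for every trial source; the grid point with
# the largest stack is taken as the event location. All values are assumed.

rng = np.random.default_rng(0)
v, dt = 3.0, 0.01                                  # km/s, sample interval (s)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_src, t0 = np.array([4.0, 6.0]), 2.0           # km, s (t0 assumed known)

n = 1000
cf = rng.random((len(stations), n)) * 0.1          # background "noise"
for k, s in enumerate(stations):                   # inject impulsive onsets
    t_arr = t0 + np.linalg.norm(true_src - s) / v
    cf[k, int(round(t_arr / dt))] = 1.0

xs = ys = np.arange(0.0, 10.5, 0.5)                # trial-source grid, km
best, best_stack = None, -1.0
i0 = int(round(t0 / dt))
for x in xs:
    for y in ys:
        tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
        shifts = np.round(tt / dt).astype(int)
        # sample each characteristic function at its predicted arrival
        stack = sum(cf[k, i0 + shifts[k]] for k in range(len(stations)))
        if stack > best_stack:
            best_stack, best = stack, (x, y)
```

At the true grid point every trace contributes its onset, so the stack peaks there; no phase picking is ever performed.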

  10. The DOE Model for Improving Seismic Event Locations Using Travel Time Corrections: Description and Demonstration

    SciTech Connect

    Hipp, J.R.; Moore, S.G.; Shepherd, E.; Young, C.J.

    1998-10-20

    The U.S. National Laboratories, under the auspices of the Department of Energy, have been tasked with improving the capability of the United States National Data Center (USNDC) to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the most important services which the USNDC must provide is to locate suspicious events, preferably as accurately as possible to help identify their origin and to ensure the success of on-site inspections if they are deemed necessary. The seismic location algorithm used by the USNDC has the capability to generate accurate locations by applying geographically dependent travel time corrections, but to date, none of the means proposed for generating and representing these corrections has proven to be entirely satisfactory. In this presentation, we detail the complete DOE model for how regional calibration travel time information gathered by the National Labs will be used to improve event locations and provide more realistic location error estimates. We begin with residual data and error estimates from ground truth events. Our model consists of three parts: data processing, data storage, and data retrieval. The former two are effectively one-time processes, executed in advance before the system is made operational. The last step is required every time an accurate event location is needed. Data processing involves applying non-stationary Bayesian kriging to the residual data to densify them, and iterating to find the optimal tessellation representation for the fast interpolation in the data retrieval task. Both the kriging and the iterative re-tessellation are slow, computationally expensive processes, but this is acceptable because they are performed off-line, before any events are to be located. In the data storage task, the densified data set is stored in a database and spatially indexed. Spatial indexing improves the access efficiency of the geographically-oriented data requests associated with event location

  11. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    SciTech Connect

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne; Hutchenson, Kevin; Oweisny, Linda; Kraft, Gordon; Anderson, Dale N.; Tinker, Mark

    2003-10-30

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high-confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master-event and multiple-event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple-event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple-event location program (based on PMEL; Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation, as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We relocate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
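
The weighting scheme can be illustrated numerically. This is a hypothetical sketch of inverse-variance weighting, not the modified PMEL code; the sigma values are invented.

```python
import numpy as np

# Each arrival's residual carries measurement error, model error, and
# (the term this work adds) station-correction uncertainty. Weighting by
# the inverse of the combined variance down-weights stations whose
# corrections are inconsistent from event to event.

def combined_weights(sigma_meas, sigma_model, sigma_corr):
    """Inverse-variance weights per station, normalized to sum to 1."""
    var = (np.asarray(sigma_meas) ** 2
           + np.asarray(sigma_model) ** 2
           + np.asarray(sigma_corr) ** 2)
    w = 1.0 / var
    return w / w.sum()

# Three stations with identical pick and model errors (s); station 2 has a
# noisy correction and should receive the smallest weight.
w = combined_weights(
    sigma_meas=[0.1, 0.1, 0.1],
    sigma_model=[0.2, 0.2, 0.2],
    sigma_corr=[0.05, 0.05, 0.5],
)
```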

  12. Investigating Plasmasphere Location during Relativistic Electron Precipitation Events

    NASA Astrophysics Data System (ADS)

    Woodger, L. A.; Millan, R. M.; Goldstein, J.; McCarthy, M. P.; Smith, D. M.; Sample, J. G.

    2006-12-01

    The plasmasphere plays a crucial role in the generation of different wave modes and their resonance conditions with radiation belt relativistic electrons. The statistical study by Meredith et al. (2003) of resonant conditions for >2 MeV electrons with EMIC waves found that the majority of these events occur in the vicinity of the plasmapause. The MAXIS and MINIS balloon observations found a distinct class of relativistic electron precipitation occurring at dusk, suggesting EMIC waves as a possible precipitation mechanism. We investigate the location of these relativistic electron precipitation events with respect to the plasmapause using data from IMAGE EUV, POLAR EFI, and a plasmapause test particle simulation driven by an electric field model with terms representing solar-wind-driven convection and ring-current-ionospheric coupling.

  13. Using Intermediate-Field Terms in Locating Microseismic Events

    NASA Astrophysics Data System (ADS)

    Lorenzo, J. M.; Dahi Taleghani, A.; LeCalvez, J.

    2014-12-01

    Microseismic mapping is a passive seismic technique used extensively for assessment of hydraulic fracturing treatments during the last two decades. Microseisms are microearthquakes induced by the changes in stress and pore-fluid pressure associated with the hydraulic fracturing treatment. Current practice in locating microseismic events associated with hydraulic fracture treatments and determining their source mechanisms includes only far-field terms in the moment tensor inversion. The intermediate-field and near-field terms are normally ignored, perhaps simply following the tradition in locating distant earthquakes. However, source-receiver distances in hydraulic fracturing are usually about 1000 ft (~300 m), much less than typical earthquake distances, so the effects of the near- and intermediate-field terms at these ranges are not yet well characterized. We include these terms to improve the precision of locating the events and, consequently, determining the source mechanism. We find that the intermediate-field term may contribute up to 1/3 of the signal amplitude when the source-receiver distance is about 300 m, and ~1/20 of the signal amplitude when the distance is ~2000 m. When the source-receiver distance exceeds ~2000 m, the intermediate-field terms can be ignored in our inversion. In our case, we also confirm that the near-field term can be ignored in microseismic analysis. Our results indicate that the intermediate-field terms can improve the moment tensor inversion by 2% to 40% at source-receiver ranges less than 300 m; at larger distances the improvement is limited to 1%. In the presence of strong environmental noise, the intermediate-field terms are especially effective: a 15% improvement with noise present versus 3% without noise.
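
The two quoted fractions are mutually consistent with a simple scaling argument: the intermediate-field term falls off one power of distance faster than the far field, so its fractional contribution goes roughly as K/r. This sketch (the constant K is fitted to the abstract's 1/3-at-300-m figure, not derived from the source parameters) reproduces the ~1/20-at-2000-m figure.

```python
# Approximate intermediate-field share of signal amplitude as a function of
# source-receiver range r (m), assuming a 1/r fractional falloff relative
# to the far field. K = 100 m is chosen so the share is 1/3 at 300 m.

def intermediate_fraction(r_m, K=100.0):
    """Rough intermediate-field fraction of amplitude at range r (meters)."""
    return K / r_m

f300 = intermediate_fraction(300.0)    # about 1/3, as quoted
f2000 = intermediate_fraction(2000.0)  # 1/20, as quoted
```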

  14. Pickless event detection and location: The waveform correlation event detection system (WCEDS) revisited

    DOE PAGESBeta

    Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; Slinkard, Megan Elizabeth

    2016-01-01

    The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.

  15. Pickless event detection and location: The waveform correlation event detection system (WCEDS) revisited

    SciTech Connect

    Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; Slinkard, Megan Elizabeth

    2016-01-01

    The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.

  16. Accurate Damage Location in Complex Composite Structures and Industrial Environments using Acoustic Emission

    NASA Astrophysics Data System (ADS)

    Eaton, M.; Pearson, M.; Lee, W.; Pullin, R.

    2015-07-01

    The ability to accurately locate damage in any given structure is a highly desirable attribute for an effective structural health monitoring system and could help to reduce operating costs and improve safety. This becomes a far greater challenge in complex geometries and materials, such as modern composite airframes. The poor translation of promising laboratory-based SHM demonstrators to industrial environments forms a barrier to commercial uptake of the technology. The acoustic emission (AE) technique is a passive NDT method that detects elastic stress waves released by the growth of damage. It offers very sensitive damage detection, using a sparse array of sensors to detect and globally locate damage within a structure. However, its application to complex structures commonly yields poor accuracy due to anisotropic wave propagation and the interruption of wave propagation by structural features such as holes and thickness changes. This work adopts an empirical mapping technique for AE location, known as Delta T Mapping, which uses experimental training data to account for such structural complexities. The technique is applied to a complex-geometry composite aerospace structure undergoing certification testing. The component consists of a carbon fibre composite tube with varying wall thickness and multiple holes, loaded under bending. The damage location was validated using X-ray CT scanning, and the Delta T Mapping technique was shown to improve location accuracy when compared with commercial algorithms. The onset and progression of damage were monitored throughout the test and used to inform future design iterations.
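
The core of a Delta-T-style map can be sketched in a few lines. This is a toy illustration with invented numbers, not the published or commercial implementation: artificial sources at known training points yield, for each sensor pair, a measured arrival-time difference; a new AE event is then assigned to the training point whose stored delta-T vector best matches the observed one, so anisotropy and geometric features are absorbed by the map itself rather than modeled.

```python
import numpy as np

# training map: grid point (x, y) -> delta-T vector over sensor pairs (ms);
# in practice these come from pencil-lead-break (Hsu-Nielsen) training data.
train = {
    (0.0, 0.0): np.array([0.00, 0.12, 0.31]),
    (0.0, 5.0): np.array([0.05, -0.08, 0.22]),
    (5.0, 0.0): np.array([-0.11, 0.20, 0.10]),
    (5.0, 5.0): np.array([-0.02, 0.03, -0.15]),
}

def locate(observed_dt):
    """Training point minimizing least-squares misfit to observed delta-Ts."""
    return min(train, key=lambda p: float(np.sum((train[p] - observed_dt) ** 2)))

# an event near (5, 5): its observed delta-Ts resemble that training vector
loc = locate(np.array([-0.01, 0.02, -0.14]))
```

Real implementations interpolate between training points rather than snapping to the nearest one; the snap version keeps the idea visible.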

  17. It’s all about location, location, location: Children’s memory for the “where” of personally experienced events

    PubMed Central

    Bauer, Patricia J.; Doydum, Ayzit O.; Pathman, Thanujeni; Larkina, Marina; Güler, O. Evren; Burch, Melissa

    2012-01-01

    Episodic memory is defined as the ability to recall specific past events located in a particular time and place. Over the preschool and into the school years, there are clear developmental changes in memory for when events took place. In contrast, little is known about developmental changes in memory for where events were experienced. In the present research we tested 4-, 6-, and 8-year-old children's memories for specific laboratory events, each of which was experienced in a unique location. We also tested the children's memories for the conjunction of the events and their locations. Age-related differences were observed in all three types of memory (event, location, conjunction of event and location), with the most pronounced differences in memory for conjunctions of events and their locations. The results have implications for our understanding of the development of episodic memory, including suggestions of protracted development of the ability to contextualize events in their spatial locations. PMID:23010356

  18. Distributed fiber sensing system with wide frequency response and accurate location

    NASA Astrophysics Data System (ADS)

    Shi, Yi; Feng, Hao; Zeng, Zhoumo

    2016-02-01

    A distributed fiber sensing system merging a Mach-Zehnder interferometer and a phase-sensitive optical time domain reflectometer (Φ-OTDR) is demonstrated for vibration measurement, which requires wide frequency response and accurate location. Two narrow-linewidth lasers with slightly different wavelengths are used to constitute the interferometer and the reflectometer, respectively. A narrow-band fiber Bragg grating is responsible for separating the two wavelengths. In addition, heterodyne detection is applied to maintain the signal-to-noise ratio of the locating signal. Experimental results show that the system has a wide frequency response from 1 Hz to 50 MHz, limited by the sample frequency of the data acquisition card, and a spatial resolution of 20 m, corresponding to a 200 ns pulse width, along a 2.5 km fiber link.
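
The quoted 20 m resolution for a 200 ns pulse follows from the standard OTDR relation dz = c·tau/(2n): the pulse occupies c·tau/n of fiber and the factor of 2 accounts for the round trip. A small sketch (the group index 1.468 is an assumed typical value for silica fiber, not given in the abstract):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def otdr_resolution_m(pulse_width_s, group_index=1.468):
    """Two-point spatial resolution of an OTDR for a given probe pulse width."""
    return C * pulse_width_s / (2.0 * group_index)

dz = otdr_resolution_m(200e-9)   # roughly 20 m, matching the abstract
```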

  19. An approach to automatic location of regional events

    NASA Astrophysics Data System (ADS)

    Pinsky, V. I.

    1999-06-01

    The conventional network location algorithms for automatic processing are based on P and S first arrivals. These procedures have the following shortcomings: (1) it is often difficult to identify the proper phase (Pg or Pn, Sg or Sn); and (2) first arrivals are often masked by noise. Both factors may cause significant location errors. An alternative is to look for maxima of the seismic energy time-distance distribution, which are less sensitive to these factors. We measure the time-of-maximum of the P- and S-wave envelopes versus distance for each available station of the Israel Seismic Network (ISN), thus providing a travel time curve (TTC). The record envelopes are obtained using a 1D χ2 optimal detector and 3-6 Hz 'short-time-average' time curves, having enhanced sensitivity to seismic signal arrivals. The corresponding P and S time-of-maximum versus distance functions are approximated linearly by a least-squares method for a set of local earthquakes and quarry blasts. Travel time inversion for location of small events comprises three main steps: (1) cleaning the records of noise bursts and computing the envelopes, (2) triggering and identifying the P and S phases and computing their energy maxima, and (3) maximization of a sum of station residuals as a function of epicenter coordinates. The maximum of the functional is sought on a 60×60 km grid, with a step of 2 km, covering different parts of Israel and Jordan. The algorithm is not sensitive to the source depth, thus providing epicenter determination only, but proves convenient and robust. As a result of this preliminary study, for a set of 74 relatively weak local earthquakes and 58 quarry blasts, ML ~ 1.5-2.5, we obtained an epicenter estimation accuracy of ±6 km for 80-90% of both types of events, which is satisfactory for automatic location. The accuracy is measured relative to the ISN bulletin locations for earthquakes and to ground truth information for quarries, respectively.
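
An epicenter-only grid search against a linear TTC can be sketched as follows. This is a simplified illustration with invented geometry and a noise-free synthetic, not the ISN implementation: envelope peak times are fit to t = t0 + d/v, and demeaning the residuals at each trial point eliminates the unknown origin time.

```python
import numpy as np

v_s = 3.5                                        # assumed apparent S velocity, km/s
stations = np.array([[0., 0.], [30., 0.], [0., 40.], [30., 40.]])
true_epi, t0 = np.array([12., 20.]), 5.0
d_true = np.linalg.norm(stations - true_epi, axis=1)
t_obs = t0 + d_true / v_s                        # synthetic envelope peak times

xs = np.arange(0., 31., 2.)                      # 2 km grid step, as in the study
ys = np.arange(0., 41., 2.)
best, best_mis = None, np.inf
for x in xs:
    for y in ys:
        d = np.linalg.norm(stations - np.array([x, y]), axis=1)
        res = t_obs - d / v_s
        # demeaning removes the common origin-time term from the misfit
        mis = float(np.sum((res - res.mean()) ** 2))
        if mis < best_mis:
            best_mis, best = mis, (x, y)
```

Because only envelope maxima are used, no phase picking or Pg/Pn discrimination is required, at the cost of resolving the epicenter but not the depth.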

  20. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    PubMed Central

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  1. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    PubMed

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  2. Incorporation of probabilistic seismic phase labels into a Bayesian multiple-event seismic locator

    SciTech Connect

    Myers, S; Johannesson, G; Hanley, W

    2008-01-17

    We add probabilistic phase labels to the multiple-event joint probability function of Myers et al. (2007), which formerly included event locations, travel-time corrections, and arrival-time measurement precision. Prior information on any of the multiple-event parameters may be used. The phase-label model includes a null label that captures phases not belonging to the collection of phases under consideration. Using the Markov-chain Monte Carlo method, samples are drawn from the multiple-event joint probability function to infer the posterior distribution that is consistent with the priors and the arrival-time data set. With this approach, phase-label error can be assessed and propagated to all other multiple-event parameters. We test the method using a ground-truth data set of nuclear explosions at the Nevada Test Site. We find that posterior phase labels agree with the meticulously analyzed data set in more than 97% of instances, and the results are robust even when the input phase-label information is discarded. Only when a large percentage of the arrival-time data are corrupted does prior phase-label information improve resolution of multiple-event parameters. Simultaneous modeling of the entire multiple-event system results in accurate posterior probability regions for each multiple-event parameter.
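
The MCMC sampling idea can be shown on a deliberately tiny problem. This is a toy Metropolis sketch, not the Bayesloc model: a single event in 2-D with a constant velocity, known origin time, Gaussian arrival-time residuals, and no phase labels or corrections; it only illustrates drawing posterior samples of a location from arrival times.

```python
import math
import random

random.seed(1)
v, sigma, t0 = 5.0, 0.05, 0.0                 # km/s, s, s (all assumed)
stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0)]
true = (20.0, 30.0)
t_obs = [t0 + math.dist(true, s) / v for s in stations]

def log_post(x, y):
    """Log posterior (flat prior) from Gaussian arrival-time residuals."""
    return -sum((t - (t0 + math.dist((x, y), s) / v)) ** 2
                for t, s in zip(t_obs, stations)) / (2 * sigma ** 2)

# Metropolis random walk over epicenter coordinates
x, y = 21.0, 29.0
lp = log_post(x, y)
samples = []
for _ in range(5000):
    xp, yp = x + random.gauss(0, 1.0), y + random.gauss(0, 1.0)
    lpp = log_post(xp, yp)
    if math.log(random.random()) < lpp - lp:   # accept/reject step
        x, y, lp = xp, yp, lpp
    samples.append((x, y))

xs = [s[0] for s in samples[1000:]]            # discard burn-in
ys = [s[1] for s in samples[1000:]]
mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
```

The spread of the retained samples, rather than a single best-fit point, is what yields the posterior probability regions the abstract refers to; the full method extends the state with corrections, measurement precisions, and discrete phase labels for many events at once.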

  3. Accurate prediction of V1 location from cortical folds in a surface coordinate system

    PubMed Central

    Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce

    2008-01-01

    Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better

  4. Ground truth seismic events and location capability at Degelen mountain, Kazakhstan

    USGS Publications Warehouse

    Trabant, C.; Thurber, C.; Leith, W.

    2002-01-01

    We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high-precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²), and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations. © 2002 Elsevier Science B.V. All rights reserved.

  5. Detection and location of multiple events by MARS. Final report. [Multiple Arrival Recognition System

    SciTech Connect

    Wang, J.; Masso, J.F.; Archambeau, C.B.; Savino, J.M.

    1980-09-01

    Seismic data from two explosions were processed using the Systems Science and Software MARS (Multiple Arrival Recognition System) seismic event detector in an effort to determine their relative spatial and temporal separation on the basis of seismic data alone. The explosions were less than 1.0 km apart and their origin times were separated by less than 0.5 s. The seismic data consisted of nine local accelerograms (r < 1.0 km) and four regional (240 to 400 km) seismograms. The MARS processing clearly indicates the presence of multiple explosions, but the restricted frequency range of the data inhibits accurate time picks and hence limits the precision of the event location.

  6. Bolus Location Associated with Videofluoroscopic and Respirodeglutometric Events

    ERIC Educational Resources Information Center

    Perlman, Adrienne L.; He, Xuming; Barkmeier, Joseph; Van Leer, Eva

    2005-01-01

    The purpose of the present investigation was to determine the relation between specific events observed with simultaneous videofluoroscopy and respirodeglutometry. The order of occurrence was determined for each of 31 events (18 videofluoroscopic, 13 respirodeglutometric). Using 1 video frame (33.3 ms) as the maximum distance allowed between the…

  7. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

    Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrivals of two phases, and differential time data can be used to constrain the relative locations of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include differential time measurements based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple-event location system. The Markov chain Monte Carlo method is used to sample from the joint probability distribution given the arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement errors of 0.25 s and 0.01 s, respectively, epicenter location accuracy improves from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (a 73% improvement). The improvement in absolute location accuracy results from conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events.
Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
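The differential-time measurement underlying this approach can be sketched in a few lines of NumPy. This is a hedged toy, not Bayesloc: the wavelet, sampling rate, and shift are invented, and production pipelines add sub-sample refinement (e.g. parabolic interpolation of the correlation peak) to reach the ~0.01 s precision quoted above.

```python
import numpy as np

def ricker(t, f0=5.0):
    """Ricker wavelet with peak frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def xcorr_delay(a, b, dt):
    """Delay (s) of trace a relative to trace b from the cross-correlation peak."""
    c = np.correlate(a, b, mode="full")
    lag = np.argmax(c) - (len(b) - 1)   # convert peak index to signed lag
    return lag * dt

dt = 0.01                              # 100 Hz sampling
t = np.arange(-2.0, 2.0, dt)
true_shift = 0.13                      # 13 samples
x = ricker(t)
y = ricker(t - true_shift)             # same wavelet, delayed copy
print(xcorr_delay(y, x, dt))           # ≈ 0.13 s
```

Because the two waveforms share the same source wavelet, the correlation peak pins their relative timing to within a sample, which is what makes differential times so much more precise than independently picked absolute arrivals.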

  8. Leisure and Pleasure: Science Events in Unusual Locations

    ERIC Educational Resources Information Center

    Bultitude, Karen; Sardo, Ana Margarida

    2012-01-01

    Building on concepts relating to informal science education, this work compares science-related activities which successfully engaged public audiences at three different "generic" locations: a garden festival, a public park, and a music festival. The purpose was to identify what factors contribute to the perceived success of science communication…

  9. Fast and accurate dating of nuclear events using La-140/Ba-140 isotopic activity ratio.

    PubMed

    Yamba, Kassoum; Sanogo, Oumar; Kalinowski, Martin B; Nikkinen, Mika; Koulidiati, Jean

    2016-06-01

    This study reports on a fast and accurate assessment of the zero time of certain nuclear events using the La-140/Ba-140 isotopic activity ratio. For a non-steady nuclear fission reaction, dating is not possible. For the hypothesis of a nuclear explosion and for a release from a steady-state nuclear fission reaction, the zero times will differ. The assessment is fast because we propose constants that can be used directly for the calculation of the zero time and its upper and lower age limits. The assessment is accurate because the zero time is calculated using a mathematical method, namely the weighted least-squares method, to evaluate an average value of the age of a nuclear event. This was done using two databases that exhibit differences between the values of some nuclear parameters. As an example, the calculation method is applied to the detection of the radionuclides La-140 and Ba-140 in May 2010 at the radionuclide station JPP37 (Okinawa Island, Japan). PMID:27058322
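The physics behind ratio dating can be sketched with the standard parent-daughter (Bateman) relation. This is a hedged illustration under the explosion hypothesis (pure Ba-140, no La-140 at zero time); the half-life values below are nominal textbook numbers, whereas the paper's point is precisely that such constants differ between nuclear databases.

```python
import math

T_BA, T_LA = 12.753, 1.678             # half-lives in days (Ba-140, La-140), nominal
LAM_BA = math.log(2.0) / T_BA          # decay constants (1/day)
LAM_LA = math.log(2.0) / T_LA

def ratio_at(t):
    """La-140/Ba-140 activity ratio t days after zero time (Bateman, no La at t=0)."""
    k = LAM_LA / (LAM_LA - LAM_BA)
    return k * (1.0 - math.exp(-(LAM_LA - LAM_BA) * t))

def zero_time(ratio, t_max=60.0, iters=80):
    """Invert a measured activity ratio for the elapsed time (days) by bisection."""
    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ratio_at(mid) < ratio:      # ratio grows monotonically toward equilibrium
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

age = zero_time(ratio_at(5.0))
print(age)   # ≈ 5.0 days
```

The ratio rises from 0 toward its transient-equilibrium value, so a single measured ratio maps to a unique age; propagating the measurement and nuclear-data uncertainties through this inversion gives the upper and lower age limits the abstract mentions.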

  10. Using XTE as Part of the IPN to Derive Accurate GRB Locations

    NASA Technical Reports Server (NTRS)

    Barthelmy, S.

    1998-01-01

    The objective of this final report was to integrate the Rossi X-Ray Timing Explorer PCA into the 3rd Interplanetary Network of gamma-ray burst detectors, to allow more bursts to be detected and accurately localized. Although the necessary software was implemented to do this at Goddard and at UC Berkeley, several factors made a full integration impossible or impractical.

  11. A single geophone to locate seismic events on Mars

    NASA Astrophysics Data System (ADS)

    Roques, Aurélien; Berenguer, Jean-Luc; Bozdag, Ebru

    2016-04-01

    Knowing the structure of Mars is a key point in understanding the formation of Earth-like planets, as plate tectonics and erosion have erased the original surface of the early Earth. Installing a seismometer on the Martian surface makes it possible to identify the planet's structure. An important step in the identification of the structure of a planet is locating the epicenter of a seismic source, typically a meteorite impact or an earthquake. On Earth, the classical way of locating epicenters is triangulation, which requires at least 3 stations. The Mars InSight project plans to set up a single station with 3 components. We propose software to locate seismic sources on Mars using the three-component simulated data of an earthquake provided by Geoazur (Nice Sophia-Antipolis University, CNRS) researchers. The instrumental response of a sensor is crucial for data interpretation. We study the oscillations of a geophone in several situations so as to introduce students to the meaning of damping in second-order modeling. In physics, car shock absorbers are often used to illustrate the principle of damping, but rarely in practical experiments. We propose the use of a simple seismometer (a spring with a mass and a damper) that allows changing several parameters (inductive damping, temperature, and pressure) so as to see the effects of these parameters on the impulse response and, in particular, on the damping coefficient. In a second step, we illustrate the effect of damping on a seismogram, with the difficulty of identifying and interpreting the different phase arrival times at low damping.
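Single-station location classically combines two measurements: epicentral distance from the S-P time and back-azimuth from the P-wave polarization on the horizontal components. A hedged sketch of both steps follows; the homogeneous-half-space velocities and the amplitudes are illustrative, not InSight values.

```python
import math

def sp_distance(ts_minus_tp, vp=6.0, vs=3.46):
    """Epicentral distance (km) from the S-P time (s) in a homogeneous half-space."""
    return ts_minus_tp / (1.0 / vs - 1.0 / vp)

def back_azimuth(amp_n, amp_e):
    """Back-azimuth (deg east of north) from first-motion horizontal P amplitudes."""
    return math.degrees(math.atan2(amp_e, amp_n)) % 360.0

d = sp_distance(8.0)             # 8 s S-P time -> distance
baz = back_azimuth(0.5, 0.5)     # equal N and E first motions -> 45 deg
print(round(d, 1), baz)
```

Distance plus back-azimuth fixes the epicenter from one station; in practice the 180° polarization ambiguity is resolved with the sign of the vertical-component P first motion, which is why all three components are needed.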

  12. Accurate identification of centromere locations in yeast genomes using Hi-C.

    PubMed

    Varoquaux, Nelle; Liachko, Ivan; Ay, Ferhat; Burton, Joshua N; Shendure, Jay; Dunham, Maitreya J; Vert, Jean-Philippe; Noble, William S

    2015-06-23

    Centromeres are essential for proper chromosome segregation. Despite extensive research, centromere locations in yeast genomes remain difficult to infer, and in most species they are still unknown. Recently, the chromatin conformation capture assay, Hi-C, has been re-purposed for diverse applications, including de novo genome assembly, deconvolution of metagenomic samples and inference of centromere locations. We describe a method, Centurion, that jointly infers the locations of all centromeres in a single genome from Hi-C data by exploiting the centromeres' tendency to cluster in three-dimensional space. We first demonstrate the accuracy of Centurion in identifying known centromere locations from high coverage Hi-C data of budding yeast and a human malaria parasite. We then use Centurion to infer centromere locations in 14 yeast species. Across all microbes that we consider, Centurion predicts 89% of centromeres within 5 kb of their known locations. We also demonstrate the robustness of the approach in datasets with low sequencing depth. Finally, we predict centromere coordinates for six yeast species that currently lack centromere annotations. These results show that Centurion can be used for centromere identification for diverse species of yeast and possibly other microorganisms. PMID:25940625

  13. Relocating Seismicity on the Arctic Plate Boundary Using Teleseismic and Regional Phases and a Bayesian Multiple Event Locator

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Dahl-Jensen, Trine; Kværna, Tormod; Larsen, Tine B.; Paulsen, Berit; Voss, Peter

    2016-04-01

    The tectonophysics of plate boundaries are illuminated by the pattern of seismicity - and the ability to locate seismic events accurately depends upon the number and quality of observations, the distribution of recording stations, and how well the traveltimes of seismic phases are modelled. The boundary between the Eurasian and North American plates between 70 and 84 degrees North hosts large seismic events which are well recorded teleseismically and many more events at far lower magnitudes that are well recorded only at regional distances. Existing seismic bulletins have considerable spread and bias resulting from limited station coverage and deficiencies in the velocity models applied; this is particularly acute for the lower magnitude events which may only be constrained by a small number of Pn and Sn arrivals. Over the past 15 years, there has been a significant improvement in the seismic network in the Arctic - a difficult region to instrument due to the harsh climate, a sparsity of quiet and accessible sites, and the expense and difficult logistics of deploying and maintaining stations. New deployments and upgrades to stations on Greenland, Svalbard, and the islands Jan Mayen, Hopen, and Bjørnøya have resulted in a sparse but stable regional seismic network which results in events down to magnitudes below 3 generating high quality Pn and Sn signals on multiple stations. A catalog of over 1000 events in the region since 1998 has been generated using many new phase readings on stations on both sides of the spreading ridge in addition to teleseismic P phases. The Bayesloc program, a Bayesian hierarchical multiple event location algorithm, has been used to relocate the full set of events iteratively and this has resulted in a significant reduction in the spread in hypocenter estimates for both large and small events. Whereas single event location algorithms minimize the vector of time residuals on an event-by-event basis, Bayesloc favours the hypocenters which

  14. Use of Loran-C navigation system to accurately determine sampling site location in an above ground cooling reservoir

    SciTech Connect

    Lockwood, R.E.; Blankinship, D.R.

    1994-12-31

    Environmental monitoring programs often require accurate determination of sampling site locations in aquatic environments. This is especially true when a "picture" of high resolution is needed for observing a changing variable in a given area and location is assumed to be important to the distribution of that variable. Sample site location can be difficult if few visible landmarks are available for reference on a large body of water. Navigational systems such as the Global Positioning System (GPS) and its predecessor, Loran-C, provide an excellent method for sample site location. McFarland (1992) discusses the practicality of GPS for location determination. This article discusses the use of Loran-C in a sampling scheme implemented at the South Texas Project Electrical Generating Station (STPEGS), Wadsworth, Texas.

  15. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    PubMed

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. PMID:27174312

  16. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  17. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
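The stepping/bracketing scheme described above reduces to: sample a continuous event function g(t) at fixed steps, detect sign changes, and refine each bracketed root. A hedged sketch follows; the toy "shadow" event function and times are invented, not the paper's eclipse model, and production root-finders would use a faster method than plain bisection.

```python
def locate_events(g, t0, t1, step=1.0, tol=1e-9):
    """Return roots of g in [t0, t1] found by sign-change bracketing + bisection."""
    roots = []
    t = t0
    while t < t1:
        a, b = t, min(t + step, t1)
        if g(a) == 0.0:                    # exact root at a step boundary
            roots.append(a)
        elif g(a) * g(b) < 0.0:            # sign change: a root is bracketed
            lo, hi = a, b
            while hi - lo > tol:           # bisect the bracket down to tol
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        t = b
    return roots

# Toy event function: "in shadow" (negative) for 10.3 < t < 25.7, so the
# sign flips at eclipse entry and exit.
g = lambda t: (t - 10.3) * (t - 25.7)
print(locate_events(g, 0.0, 40.0))         # ≈ [10.3, 25.7]
```

The choice of `step` matters the same way it does in the paper: a step longer than the shortest event interval can jump over a pair of sign changes and miss the event entirely.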

  18. Microseismic event location using the Double-difference technique for multiplet analysis

    NASA Astrophysics Data System (ADS)

    Castellanos Jurado, Fernando Rafael

    Microseismic event location provides a wealth of information about underground processes such as hydraulic fracturing, steam injection, mining, and volcanic activity. Nevertheless, accuracy is limited by acquisition geometry and by errors in the velocity model and time picks. Although microseismic events can happen anywhere, they tend to re-occur in the same zone. This thesis describes a post-processing technique, based on the double-difference method, to relocate events originating in the same source region. The technique includes a cross-correlation procedure to detect similar events and correct time-picking errors. The performance of the algorithm is tested on synthetic data and on a set of microseismic events recorded in a mine. The method significantly improves locations of similar events compared to a conventional grid-search algorithm, revealing seismicity patterns likely associated with routine mining operations. The method also produces plots used for quality control of time picking and event location, facilitating geological interpretation.
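The core double-difference idea can be shown with a minimal synthetic: differential P times between a well-located master event and a nearby slave event are inverted (here by brute-force grid search rather than the usual linearized least squares) for the slave's position. The 2-D geometry, homogeneous velocity, and noise-free times are illustrative only.

```python
import numpy as np

V = 5.0                                        # km/s, homogeneous medium
stations = np.array([[0.0, 30.0], [30.0, 0.0], [-25.0, -10.0], [15.0, -28.0]])

def tt(src):
    """P travel times (s) from a 2-D source to all stations."""
    return np.linalg.norm(stations - src, axis=1) / V

master = np.array([0.0, 0.0])                  # well-located reference event
slave_true = np.array([1.2, -0.8])             # nearby event to relocate
dt_obs = tt(slave_true) - tt(master)           # "observed" differential times

# Grid search around the master for the slave position minimizing the
# double-difference residuals.
best, best_mis = None, np.inf
for x in np.arange(-3.0, 3.0, 0.05):
    for y in np.arange(-3.0, 3.0, 0.05):
        dt_pred = tt(np.array([x, y])) - tt(master)
        mis = np.sum((dt_obs - dt_pred) ** 2)
        if mis < best_mis:
            best, best_mis = (x, y), mis
print(best)   # ≈ (1.2, -0.8)
```

Because both events share nearly the same ray paths, velocity-model errors largely cancel in the differences, which is why relative locations from this scheme are far more precise than the absolute ones.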

  19. An infrastructure for accurate characterization of single-event transients in digital circuits.

    PubMed

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-11-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694
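The "standard double-exponential current injection model" named in the abstract has a simple closed form. The sketch below evaluates it and checks the collected charge against the analytic integral; the amplitude and time constants are placeholders, not the paper's 3D-device-calibrated values.

```python
import math

# Double-exponential SET current model: I(t) = I0 * (exp(-t/tau_f) - exp(-t/tau_r)),
# with tau_r < tau_f so the pulse rises quickly and decays slowly.
I0 = 1.0e-3          # A, amplitude scaling factor (placeholder)
TAU_F = 200e-12      # s, fall time constant (placeholder)
TAU_R = 50e-12       # s, rise time constant (placeholder)

def set_current(t):
    """Injected SET current (A) at time t >= 0."""
    return I0 * (math.exp(-t / TAU_F) - math.exp(-t / TAU_R))

# Total collected charge: integral of I(t) over [0, inf) = I0 * (tau_f - tau_r).
q_analytic = I0 * (TAU_F - TAU_R)
dt = 1e-13
q_numeric = sum(set_current(i * dt) for i in range(100000)) * dt
print(q_analytic, q_numeric)
```

In fault-injection campaigns like the one described, the deposited charge (hence I0 and the time constants) is what gets aligned with device simulation and micro-beam data; sweeping it maps out which target circuits latch an erroneous value.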

  1. Locations and focal mechanisms of deep long period events beneath Aleutian Arc volcanoes using back projection methods

    NASA Astrophysics Data System (ADS)

    Lough, A. C.; Roman, D. C.; Haney, M. M.

    2015-12-01

    Deep long-period (DLP) earthquakes are commonly observed in volcanic settings such as the Aleutian Arc in Alaska. DLPs are poorly understood but are thought to be associated with movements of fluids, such as magma or hydrothermal fluids, deep in the volcanic plumbing system. These events have been recognized for several decades, but few studies have gone beyond their identification and location. All long-period events are more difficult to identify and locate than volcano-tectonic (VT) earthquakes because traditional detection schemes focus on high-frequency (short-period) energy. In addition, DLPs present analytical challenges because they tend to be emergent, so it is difficult to accurately pick the onset of arriving body waves. We now expect to find DLPs at most volcanic centers; the challenge lies in identification and location. We aim to reduce the element of human error in location by applying back projection to better constrain the depth and horizontal position of these events. Power et al. (2004) provided the first compilation of DLP activity in the Aleutian Arc. This study focuses on the reanalysis of 162 cataloged DLPs beneath 11 volcanoes in the Aleutian arc (we expect to ultimately identify and reanalyze more DLPs). We are currently adapting the approach of Haney (2014) for volcanic tremor to use back projection over a 4D grid to determine the position and origin time of DLPs. This method holds great potential in that it will allow automated, high-accuracy picking of arrival times and could reduce the number of arrival-time picks necessary for traditional location schemes to well constrain event origins. Back projection can also calculate a relative focal mechanism (difficult with traditional methods due to the emergent nature of DLPs), allowing the first in-depth analysis of source properties. Our event catalog, spanning over 25 years and 11 volcanoes, is one of the longest and largest and enables us to investigate spatial and temporal variation in DLPs.
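Back projection sidesteps onset picking by delay-and-stack: waveform envelopes are shifted by the travel time predicted for each candidate grid node and summed, and the node where they align gives the source position (and, via the stack time axis, the origin time). The sketch below is a hedged 2-D toy with Gaussian envelopes and a homogeneous velocity, not the 4D Aleutian implementation.

```python
import numpy as np

V, DT = 4.0, 0.05                              # km/s and sample interval (s)
stations = np.array([[0, 40], [35, -10], [-30, -20], [10, 25]], float)
src_true, t0_true = np.array([5.0, -5.0]), 3.0

t = np.arange(0.0, 40.0, DT)
def envelope(arrival):
    """Smooth Gaussian envelope standing in for an emergent DLP waveform."""
    return np.exp(-0.5 * ((t - arrival) / 0.5) ** 2)

traces = [envelope(t0_true + np.linalg.norm(s - src_true) / V) for s in stations]

# Back-project: for each grid node, delay each envelope by its predicted travel
# time and stack; the stack peak over time handles the unknown origin time.
best, best_val = None, -1.0
for x in np.arange(-10, 11, 1.0):
    for y in np.arange(-10, 11, 1.0):
        delays = [np.linalg.norm(s - [x, y]) / V for s in stations]
        stack = sum(np.interp(t + d, t, tr) for d, tr in zip(delays, traces))
        val = stack.max()
        if val > best_val:
            best, best_val = (x, y), val
print(best)   # ≈ (5.0, -5.0)
```

No onset is ever picked: the envelopes simply stack coherently only at the correct node, which is exactly why the method suits emergent signals like DLPs and tremor.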

  2. A new Bayesian approach of tomography and seismic event location dedicated to the estimation of the true uncertainties

    NASA Astrophysics Data System (ADS)

    Gesret, Alexandrine; Noble, Mark; Desassis, Nicolas; Romary, Thomas

    2013-04-01

    The monitoring of hydrocarbon reservoirs, geothermal reservoirs, and mines commonly relies on the analysis of the induced seismicity. Even though a large amount of microseismic data has been recorded, the relationship between exploitation and the induced seismicity still needs to be better understood. This microseismicity is also interpreted to derive the fracture network and several physical parameters. The first step is thus to locate the induced seismicity very precisely and to estimate its associated uncertainties. The microseismic location errors are mainly due to the lack of knowledge of the wave-propagation medium, so the velocity model must first be inverted. We here present a tomography algorithm that estimates the true uncertainties on the resulting velocity model. Building on these results, we develop an approach that yields accurate event locations together with the uncertainties induced by the velocity model uncertainties. We apply a Markov chain Monte Carlo (MCMC) algorithm to the tomography of calibration shots for a typical 3D-geometry hydraulic fracture context. Our formulation is especially useful for ill-posed inverse problems, as it results in a large number of samples of possible solutions from the posterior probability distribution. All these velocity models are consistent with both the data and the prior information. Our nonlinear approach leads to a very satisfying mean velocity model and to meaningful associated standard deviations. These uncertainty estimates are much more reliable and accurate than sensitivity tests for a single final solution obtained with a linearized inversion approach. The Bayesian approach is commonly used for the computation of the posterior probability density function (PDF) of the event location, as proposed by Tarantola and Valette in 1982 and Lomax in 2000.
We add here the propagation of the posterior distribution of the velocity model to the formulation of the posterior PDF of the event

  3. Accurate modeling and inversion of electrical resistivity data in the presence of metallic infrastructure with known location and dimension

    SciTech Connect

    Johnson, Timothy C.; Wellman, Dawn M.

    2015-06-26

    Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.

  4. One dimensional P wave velocity structure of the crust beneath west Java and accurate hypocentre locations from local earthquake inversion

    SciTech Connect

    Supardiyono; Santosa, Bagus Jaya

    2012-06-20

    A one-dimensional (1-D) velocity model and station corrections for the West Java zone were computed by inverting P-wave arrival times recorded on a local seismic network of 14 stations. A total of 61 local events with a minimum of 6 P-phases, an rms of 0.56 s, and a maximum azimuthal gap of 299° were selected. Comparison with previous earthquake locations shows an improvement for the relocated earthquakes. Tests were carried out to verify the robustness of the inversion results in order to corroborate the conclusions drawn from our research. The obtained minimum 1-D velocity model can be used to improve routine earthquake locations and represents a further step toward more detailed seismotectonic studies in this area of West Java.

  5. The use of propagation path corrections to improve seismic event location in western China

    SciTech Connect

    Cogbill, A.H.; Steck, L.K.

    1998-03-01

    In an effort to improve the ability to locate events in western China using only regional data, the authors have developed propagation path corrections to seismic travel times and applied such corrections using both traditional location routines and a nonlinear grid-search method. Thus far, they have concentrated on corrections to observed P arrival times. They have constructed such corrections using travel-time observations available from the USGS Earthquake Data Reports, as well as data reported by the ISC. They have also constructed corrections for six stations that are part of the International Monitoring System. For each station having sufficient data, they produce a map of the travel-time residuals from all located events. Large-amplitude residuals are removed by median filtering, and the resulting data are gridded. For a given source location, the correction at a particular station is then interpolated from the correction grid associated with that station. They have constrained the magnitude of the corrections to be ≤ 3 s. They have evaluated the utility of the calculated corrections by applying them to the regional relocation of 10 well-located Chinese nuclear tests, as well as a single well-located aftershock in nearby Kyrgyzstan. The use of corrections larger than 2 s is troubling when using traditional location codes, as the corrections amount to a nonlinear perturbation and, when large, may destabilize the location algorithm. Partly for this reason, the authors have begun using grid-search methods to relocate regional events. Such methods are easy to implement and fully nonlinear. Moreover, the misfit function used to locate the event can easily be changed; they have used L1- and L2-norm misfit functions, for example. Instances in which multiple local minima occur in a location problem are easily recognized by simply contouring or otherwise displaying the misfit function.
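A minimal version of the grid-search relocation described above, with interchangeable L1/L2 misfit, can be sketched as follows; the station coordinates, uniform 6 km/s velocity, and grid spacing are invented, and path corrections are omitted:

```python
import numpy as np

# Assumed network geometry (km) and a known test epicentre
stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [-20.0, 25.0]])
v = 6.0                                                   # km/s, uniform
src_true = np.array([8.0, 12.0])
t_obs = np.linalg.norm(stations - src_true, axis=1) / v   # origin time = 0

def misfit(xy, norm=2):
    """L1- or L2-norm travel-time misfit with the origin time absorbed."""
    t_pred = np.linalg.norm(stations - xy, axis=1) / v
    r = t_obs - t_pred
    r = r - (np.median(r) if norm == 1 else r.mean())
    return np.sum(np.abs(r)) if norm == 1 else np.sum(r ** 2)

# Exhaustive search: fully nonlinear, and swapping the norm is a one-liner.
xs = np.arange(-30.0, 50.0, 0.5)
ys = np.arange(-30.0, 50.0, 0.5)
best_l2 = min((misfit(np.array([x, y]), 2), x, y) for x in xs for y in ys)
best_l1 = min((misfit(np.array([x, y]), 1), x, y) for x in xs for y in ys)
```

Contouring `misfit` over the grid makes multiple local minima visible directly, which is the diagnostic the authors mention; a traditional iterative locator would only report the minimum it happened to fall into.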

  6. Optimizing the real-time automatic location of the events produced in Romania using an advanced processing system

    NASA Astrophysics Data System (ADS)

    Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu

    2016-04-01

    The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, which is dominated by intermediate-depth earthquakes (60-200 km) from the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the existence of a large number of high-quality observational data. The development of the network in recent years and advanced seismic acquisition are crucial to achieving this objective. The software package used to perform the automatic real-time locations is SeisComP3. An accurate choice of the SeisComP3 settings is necessary to ensure the best performance of the real-time system, i.e., the most accurate locations for the earthquakes while avoiding false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate the earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA ratios, filters applied to the waveforms) on a data set of earthquakes representative of the local seismicity. The results are compared with the locations from the Romanian catalogue ROMPLUS.
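The STA/LTA trigger being tuned here can be illustrated with a generic energy-ratio detector (a toy version, not SeisComP3's implementation; the window lengths, threshold, and synthetic event are placeholder values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6000
trace = rng.normal(0.0, 1.0, n)                 # background noise
onset = 3000
trace[onset:onset + 400] += 8.0 * rng.normal(0.0, 1.0, 400)  # emergent event

def sta_lta(x, nsta, nlta):
    """Ratio of short-term to long-term average signal energy, via cumsums."""
    csum = np.concatenate(([0.0], np.cumsum(x ** 2)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # windows aligned to their end
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    m = min(sta.size, lta.size)                 # keep ends covered by both
    return sta[-m:] / lta[-m:], x.size - m      # ratio, sample-index offset

ratio, offset = sta_lta(trace, nsta=50, nlta=1000)
trigger = int(np.argmax(ratio > 4.0)) + offset  # first sample above threshold
```

Lowering the threshold or shortening the STA window catches weaker events at the cost of false triggers, which is exactly the trade-off the study tunes against the ROMPLUS reference locations.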

  7. The use of propagation path corrections to improve regional seismic event location in western China

    SciTech Connect

    Steck, L.K.; Cogbill, A.H.; Velasco, A.A.

    1999-03-01

    In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines and a nonlinear grid-search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events, using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) from a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants but requires that the authors have adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise ratios for the travel-time picks from the waveform data set.
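The interpolation step, building a PPC surface from scattered residuals, can be sketched with a simple inverse-distance-weighted interpolator standing in for kriging or surface fitting under tension; the residual field, event geometry, and weighting power below are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic smooth residual field sampled at 80 scattered event locations (deg)
events = rng.uniform(0.0, 10.0, (80, 2))
resid = (1.5 * np.sin(events[:, 0] / 3.0)
         + 0.5 * np.cos(events[:, 1] / 2.0)
         + rng.normal(0.0, 0.1, 80))           # plus measurement error

def idw(query, pts, vals, power=2.0, eps=1e-6):
    """Inverse-distance-weighted interpolation: a crude stand-in for kriging."""
    d = np.linalg.norm(pts - query, axis=1)
    w = 1.0 / (d + eps) ** power
    return float(np.sum(w * vals) / np.sum(w))

# The PPC for a trial source location is read off the interpolated surface
ppc = idw(np.array([5.0, 5.0]), events, resid)
```

Unlike this stand-in, Bayesian kriging also returns an interpolation variance, which is why the authors need the measurement-versus-modeling error split described above.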

  8. Improving Seismic Event Location: An Alternative to Three-dimensional Structural Models

    NASA Astrophysics Data System (ADS)

    Piromallo, C.; Morelli, A.

    We devise and apply a method to account for the effect of the aspherical structure of the Earth in locating earthquakes. This technique relies upon the ability to detect the average structural signal present in the residuals between source and receiver and to correct for this signal during location, using a phenomenological description that we call Empirical Heterogeneity Corrections (EHC). EHC are employed in the relocation of a large set of well-constrained teleseismic earthquakes selected among the events reported by the Bulletins of the International Seismological Centre for 1964-1995. The rms length of the EHC relocation vectors for these events is about 10 km. The method is also tested against a selected set of ground-truth events, both earthquakes and explosions, whose locations are independently known by nonseismic means. The rms length of the mislocation vectors for the test events, compared to their original mislocation in the reference 1-D model SP6, is reduced by the EHC relocation by 17% for explosions and 12% for earthquakes. Our technique provides a successful alternative to the use of 3-D structural models, achieving approximately the same improvement in event location.

  9. A new method for producing automated seismic bulletins: Probabilistic event detection, association, and location

    SciTech Connect

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.; Brogan, Ronald

    2015-10-01

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.
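The node-by-node fitness scan described above can be sketched in one dimension; the Gaussian "conditional fitness", station geometry, and grid are invented, and the candidate origin time is handled with a simple mean rather than the full probabilistic treatment:

```python
import numpy as np

# Toy 1-D geometry: stations on a line, a constant 6 km/s P velocity
stations = np.array([0.0, 40.0, 90.0, 130.0])   # km
v, sigma = 6.0, 0.5                             # km/s, arrival-time std (s)
nodes = np.arange(0.0, 150.0, 1.0)              # candidate source grid (km)

src, t0 = 72.0, 10.0                            # hidden event
arrivals = t0 + np.abs(stations - src) / v      # observed picks

def fitness(node, picks):
    """Sum of station-specific Gaussian fitness values at one grid node."""
    t_pred = np.abs(stations - node) / v
    t0_hat = np.mean(picks - t_pred)            # best origin time for the node
    r = picks - t_pred - t0_hat
    return np.sum(np.exp(-0.5 * (r / sigma) ** 2))

scores = np.array([fitness(n, arrivals) for n in nodes])
best_node = nodes[np.argmax(scores)]            # hypothetical event location
```

In the full algorithm the winning node must also exceed a minimal fitness, its consistent arrivals are associated and removed, and the scan repeats on the leftover picks until no further event emerges.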

  10. A new method for producing automated seismic bulletins: Probabilistic event detection, association, and location

    DOE PAGESBeta

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.; Brogan, Ronald

    2015-10-01

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.

  11. Accurate Analysis of the Change in Volume, Location, and Shape of Metastatic Cervical Lymph Nodes During Radiotherapy

    SciTech Connect

    Takao, Seishin; Tadano, Shigeru; Taguchi, Hiroshi; Yasuda, Koichi; Onimaru, Rikiya; Ishikawa, Masayori; Bengua, Gerard; Suzuki, Ryusuke; Shirato, Hiroki

    2011-11-01

    Purpose: To establish a method for the accurate acquisition and analysis of the variations in tumor volume, location, and three-dimensional (3D) shape of tumors during radiotherapy in the era of image-guided radiotherapy. Methods and Materials: Finite element models of lymph nodes were developed based on computed tomography (CT) images taken before the start of treatment and every week during the treatment period. A surface geometry map with a volumetric scale was adopted and used for the analysis. Six metastatic cervical lymph nodes, 3.5 to 55.1 cm³ before treatment, in 6 patients with head and neck carcinomas were analyzed in this study. Three fiducial markers implanted in mouthpieces were used for the fusion of CT images. Changes in the location of the lymph nodes were measured on the basis of these fiducial markers. Results: The surface geometry maps showed convex regions in red and concave regions in blue to ensure that the characteristics of the 3D tumor geometries are simply understood visually. After the irradiation of 66 to 70 Gy in 2 Gy daily doses, the patterns of the colors had not changed significantly, and the maps before and during treatment were strongly correlated (average correlation coefficient was 0.808), suggesting that the tumors shrank uniformly, maintaining the original characteristics of the shapes in all 6 patients. The movement of the gravitational center of the lymph nodes during the treatment period was everywhere less than ±5 mm except in 1 patient, in whom the change reached nearly 10 mm. Conclusions: The surface geometry map was useful for an accurate evaluation of the changes in volume and 3D shapes of metastatic lymph nodes. The fusion of the initial and follow-up CT images based on fiducial markers enabled an analysis of changes in the location of the targets. Metastatic cervical lymph nodes in patients were suggested to decrease in size without significant changes in the 3D shape during radiotherapy.
The movements of the

  12. Optimal filter parameters for low SNR seismograms as a function of station and event location

    NASA Astrophysics Data System (ADS)

    Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.

    1999-06-01

    Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few usable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower-SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low-SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated over the pre-event noise window and the signal window. The band-pass signals with high SNR are used to set the cutoff limits of the optimized filter. Results indicate a significant improvement in SNR, particularly for low-SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low-SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
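The band-wise SNR test can be sketched with an FFT-based filter bank over octave (constant-Q) bands; the sampling rate, band edges, windows, and threshold are invented, and a real implementation would use proper band-pass filters:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 40.0                                        # Hz (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)
x = rng.normal(0.0, 1.0, t.size)                 # broadband noise
x += 5.0 * np.sin(2 * np.pi * 3.0 * t) * (t > 30.0)   # 3 Hz arrival at 30 s

def band_power(seg, lo, hi):
    """Average power of a segment inside [lo, hi) Hz."""
    spec = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(seg.size, 1.0 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum() / seg.size

# Octave bands; SNR compares the signal window against pre-event noise
bands = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0), (8.0, 16.0)]
noise_win, signal_win = x[t < 25.0], x[t > 30.0]
snr = {b: np.sqrt(band_power(signal_win, *b) / band_power(noise_win, *b))
       for b in bands}

# High-SNR bands set the corner frequencies of the optimized filter
good = sorted(b for b, s in snr.items() if s > 3.0)
f_lo, f_hi = good[0][0], good[-1][1]
```

The resulting corners (`f_lo`, `f_hi`) would then parameterize the band-pass filter applied before travel-time picking or azimuth estimation.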

  13. Finding faces among faces: human faces are located more quickly and accurately than other primate and mammal faces.

    PubMed

    Simpson, Elizabeth A; Buchin, Zachary; Werner, Katie; Worrell, Rey; Jakobsen, Krisztina V

    2014-11-01

    We tested the specificity of human face search efficiency by examining whether there is a broad window of detection for various face-like stimuli (human and animal faces) or whether own-species faces receive greater attentional allocation. We assessed the strength of the own-species face detection bias by testing whether human faces are located more efficiently than other animal faces when presented among various other species' faces, in heterogeneous 16-, 36-, and 64-item arrays. Across all array sizes, we found that, controlling for distractor type, human faces were located faster and more accurately than primate and mammal faces, and that, controlling for target type, searches were faster when distractors were human faces compared to animal faces, revealing more efficient processing of human faces regardless of their role as targets or distractors (Experiment 1). Critically, these effects remained when searches were for specific species' faces (human, chimpanzee, otter), ruling out a category-level explanation (Experiment 2). Together, these results suggest that human faces may be processed more efficiently than animal faces, both when task-relevant (targets) and task-irrelevant (distractors), even in direct competition with other faces. These results suggest that there is not a broad window of detection for all face-like patterns but that human adults process own-species' faces more efficiently than other species' faces. Such own-species search efficiencies may arise through experience with own-species faces throughout development or may be privileged early in development, due to the evolutionary importance of conspecifics' faces. PMID:25113852

  14. The LLNL-G3D global P-wave velocity model and the significance of the BayesLoc multiple-event location procedure

    NASA Astrophysics Data System (ADS)

    Simmons, N. A.; Myers, S. C.; Johannesson, G.; Matzel, E.

    2011-12-01

    LLNL-G3D is a global-scale model of P-wave velocity designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The underlying goal of the model is to provide enhanced seismic event location capabilities. Previous versions of LLNL-G3D (versions 1 and 2) provide substantial improvements in event location accuracy via 3-D ray tracing. The latest models are based on ~2.7 million P and Pn arrivals that are re-processed using our global multi-event locator known as BayesLoc. BayesLoc is a formulation of the joint probability distribution across multiple-event location parameters, including hypocenters, travel-time corrections, pick precision, and phase labels. Modeling the whole multiple-event system results in accurate locations and an internally consistent data set that is ideal for tomography. Our recently developed inversion approach (called Progressive Multi-level Tessellation Inversion, or PMTI) captures regional trends and fine details where data warrant. Using PMTI, we model multiple heterogeneity scale lengths without defining parameter grids with variable densities based on some ad hoc criteria. LLNL-G3Dv3 (version 3) is produced with data generated with the BayesLoc procedure, recently modified to account for localized travel-time trends via a multiple-event clustering technique. We demonstrate the significance of BayesLoc processing, the impact on the resulting tomographic images, and the application of LLNL-G3D to seismic event location. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-491805.

  15. A method for detecting and locating geophysical events using groups of arrays

    NASA Astrophysics Data System (ADS)

    de Groot-Hedlin, Catherine D.; Hedlin, Michael A. H.

    2015-11-01

    We have developed a novel method to detect and locate geophysical events that makes use of any sufficiently dense sensor network. This method is demonstrated using acoustic sensor data collected in 2013 at the USArray Transportable Array (TA). The algorithm applies Delaunay triangulation to divide the sensor network into a mesh of three-element arrays, called triads. Because infrasound waveforms are incoherent between the sensors within each triad, the data are transformed into envelopes, which are cross-correlated to find signals that satisfy a consistency criterion. The propagation azimuth, phase velocity and signal arrival time are computed for each signal. Triads with signals that are consistent with a single source are bundled as an event group. The ensemble of arrival times and azimuths of detected signals within each group are used to locate a common source in space and time. A total of 513 infrasonic stations that were active for part or all of 2013 were divided into over 2000 triads. Low (0.5-2 Hz) and high (2-8 Hz) catalogues of infrasonic events were created for the eastern USA. The low-frequency catalogue includes over 900 events and reveals several highly active source areas on land that correspond with coal mining regions. The high-frequency catalogue includes over 2000 events, with most occurring offshore. Although their cause is not certain, most events are clearly anthropogenic as almost all occur during regular working hours each week. The regions to which the TA is most sensitive vary seasonally, with the direction of reception dependent on the direction of zonal winds. The catalogue has also revealed large acoustic events that may provide useful insight into the nature of long-range infrasound propagation in the atmosphere.
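The meshing step can be sketched directly with `scipy.spatial.Delaunay`: each simplex of the triangulation is one three-element triad, and three arrival times per triad determine the horizontal slowness vector exactly (the station layout and wave parameters below are synthetic):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 500.0, size=(30, 2))     # station coordinates, km
triads = Delaunay(pts).simplices                # (n_triads, 3) station indices

def triad_beam(xy, t):
    """Solve the two baseline equations (dx, dy) . s = dt for the slowness s."""
    s = np.linalg.solve(xy[1:] - xy[0], t[1:] - t[0])
    az = np.degrees(np.arctan2(s[0], s[1])) % 360.0   # propagation azimuth
    return az, 1.0 / np.linalg.norm(s)                # azimuth, phase velocity

# Plane wave from azimuth 60 deg at 0.34 km/s crossing the whole network
az_true, v_true = 60.0, 0.34
s_vec = np.array([np.sin(np.radians(az_true)),
                  np.cos(np.radians(az_true))]) / v_true
times = pts @ s_vec                              # arrival time at each station

az, v = triad_beam(pts[triads[0]], times[triads[0]])
```

In the real algorithm the waveforms are incoherent, so envelopes are cross-correlated to pick the relative delays; triads whose azimuth, velocity, and timing are consistent with a single source are then bundled into one event group.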

  16. A study of various methods for calculating locations of lightning events

    NASA Technical Reports Server (NTRS)

    Cannon, John R.

    1995-01-01

    This article reports the results of numerical experiments on finding the locations of lightning events using different numerical methods. The methods include linear least squares, nonlinear least squares, statistical estimation, cluster analysis, and angular filters, as well as combinations of these techniques. The experiments involved investigations of methods for excluding fake solutions: solutions that appear to be reasonable but are in fact several kilometers distant from the actual location. Some of the conclusions derived from the study are that bad data produce fakes, that no fool-proof method of excluding fakes was found, and that a short-baseline interferometer under development at Kennedy Space Center to measure the direction cosines of an event shows promise as a filter for excluding fakes. The experiments generated a number of open questions, some of which are discussed at the end of the report.
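A standard nonlinear-least-squares locator of the kind compared in the study can be sketched as a damped Gauss-Newton iteration on arrival-time residuals; restarting it from several initial guesses and keeping the best fit is one simple guard against the "fake" solutions discussed above (sensor layout, propagation speed, and starting points are invented):

```python
import numpy as np

# Sensor positions (km), propagation speed, and a hidden event
sensors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0],
                    [25.0, 25.0], [-10.0, 15.0]])
c = 3.0e5                                        # km/s (speed of light)
src = np.array([7.0, 11.0])
t_arr = np.linalg.norm(sensors - src, axis=1) / c

def residuals(x):
    """Arrival-time residuals with the unknown origin time averaged out."""
    r = t_arr - np.linalg.norm(sensors - x, axis=1) / c
    return r - r.mean()

def gauss_newton(x0, n_iter=50):
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - x, axis=1)
        J = (sensors - x) / (c * d[:, None])     # Jacobian of the residuals
        J = J - J.mean(axis=0)                   # mean-removed, like r
        dx, *_ = np.linalg.lstsq(J, -residuals(x), rcond=None)
        x += np.clip(dx, -10.0, 10.0)            # damp steps far from solution
    return x

# Multiple starts expose fake (locally optimal) solutions; keep the best fit
starts = [(1.0, 1.0), (15.0, 5.0), (5.0, 18.0), (12.0, 12.0)]
x_hat = min((gauss_newton(s) for s in starts),
            key=lambda x: np.sum(residuals(x) ** 2))
```

A fake solution would show up here as a restart that converges to a different point with a visibly worse residual norm; the angular-filter idea in the report adds direction-cosine measurements as extra residual rows.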

  17. Improvement of IDC/CTBTO Event Locations in Latin America and the Caribbean Using a Regional Seismic Travel Time Model

    NASA Astrophysics Data System (ADS)

    Given, J. W.; Guendel, F.

    2013-05-01

    The International Data Centre is a vital element of the Comprehensive Test Ban Treaty (CTBT) verification mechanism. The fundamental mission of the International Data Centre (IDC) is to collect, process, and analyze monitoring data and to present results as event bulletins to Member States. For the IDC, and in particular for the waveform technologies, a key measure of the quality of its products is the accuracy with which every detected event is located. Accurate event location is crucial for purposes of an On Site Inspection (OSI), which would confirm the conduct of a nuclear test. Thus it is important for IDC monitoring and data analysis to adopt new processing algorithms that improve the accuracy of event location. Among these, new algorithms that compute regional seismic travel times through 3-dimensional models have greatly increased the IDC's location precision and reduced computational time, allowing forward and inverse modeling of large data sets. One such algorithm is the Regional Seismic Travel Time (RSTT) model of Myers et al. (2011). The RSTT model is nominally a global model; however, it currently covers only North America and Eurasia in sufficient detail. It is the intention of CTBTO's Provisional Technical Secretariat and the IDC to extend the RSTT model to other regions of the earth, e.g. Latin America and the Caribbean, Africa and Asia. This is particularly important for the IDC location procedure, as there are regions of the earth for which crustal models are not well constrained. For this purpose the IDC has launched an RSTT initiative. In May 2012, a technical meeting was held in Vienna under the auspices of the CTBTO. The purpose of this meeting was to invite National Data Centre experts as well as network operators from Africa, Europe, the Middle East, Asia, Australia, Latin and North America to discuss the context under which a project to extend the RSTT model would be implemented. A total of 41 participants from 32 Member States

  18. Using ancillary information to improve hypocenter estimation: Bayesian single event location (BSEL)

    SciTech Connect

    Anderson, Dale N

    2008-01-01

    We have developed and tested an algorithm, Bayesian Single Event Location (BSEL), for estimating the location of a seismic event. The main driver for our research is the inadequate representation of ancillary information in the hypocenter estimation procedure. The added benefit is that we have also addressed instability issues often encountered with historical NLR solvers (e.g., non-convergence or seismically infeasible results). BSEL differs from established nonlinear regression techniques by using a Bayesian prior probability density function (prior PDF) to incorporate ancillary physical basis constraints about event location. P-wave arrival times from seismic events are used in the development. Depth, a focus of this paper, may be modeled with a prior PDF (potentially skewed) that captures physical basis bounds from surface wave observations. This PDF is constructed from a Rayleigh wave depth excitation eigenfunction that is based on the observed minimum period from a spectrogram analysis and estimated near-source elastic parameters. For example, if the surface wave is an Rg phase, it potentially provides a strong constraint for depth, which has important implications for remote monitoring of nuclear explosions. The proposed Bayesian algorithm is illustrated with events that demonstrate its congruity with established hypocenter estimation methods and its application potential. The BSEL method is applied to three events: (1) A shallow Mw 4 earthquake that occurred near Bardwell, KY on June 6, 2003, (2) the Mw 5.6 earthquake of July 26, 2005 that occurred near Dillon, MT, and (3) a deep Mw 5.7 earthquake that occurred off the coast of Japan on April 22, 1980. A strong Rg was observed from the Bardwell, KY earthquake that places very strong constraints on depth and origin time. No Rg was observed for the Dillon, MT earthquake, but we used the minimum observed period of a Rayleigh wave (7 seconds) to reduce the depth and origin time uncertainty. Because the Japan
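The heart of BSEL, multiplying a travel-time likelihood by a physically motivated prior PDF, can be sketched in one dimension for depth. Everything here is a toy: a homogeneous half-space, Gaussian picks, and an exponential prior standing in for the Rg depth-excitation constraint:

```python
import numpy as np

rng = np.random.default_rng(4)
v, sigma = 6.0, 0.05                         # km/s, pick noise (s), assumed
offsets = np.array([10.0, 25.0, 40.0, 60.0]) # epicentral distances, km
z_true = 4.0
t_obs = np.hypot(offsets, z_true) / v + rng.normal(0.0, sigma, offsets.size)

depths = np.linspace(0.5, 30.0, 600)         # candidate depths, km
dz = depths[1] - depths[0]

def log_like(z):
    r = t_obs - np.hypot(offsets, z) / v
    return -0.5 * np.sum((r / sigma) ** 2)

# Prior: shallow depths strongly favoured, a crude stand-in for the
# Rg-eigenfunction bound (an observed Rg phase implies a shallow source)
log_prior = -depths / 3.0

log_post = np.array([log_like(z) for z in depths]) + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum() * dz                      # normalised posterior PDF
z_map = depths[np.argmax(post)]              # posterior mode for depth
```

With no surface-wave observation the prior would be flat and the posterior reduces to the ordinary travel-time solution; the skewed prior is what pulls poorly resolved depths toward physically admissible values.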

  19. A New Characteristic Function for Fast Time-Reverse Seismic Event Location

    NASA Astrophysics Data System (ADS)

    Hendriyana, Andri; Bauer, Klaus; Weber, Michael; Jaya, Makky; Muksin, Muksin

    2015-04-01

    Microseismicity produced by natural activity is usually characterized by low signal-to-noise ratios and large data volumes, as recording is conducted over long periods of time. Locating microseismic events is preferably carried out using migration-based methods such as time-reverse modeling (TRM). The original TRM is based on back-propagating the wavefield from the receivers down to the source location. Alternatively, we use a characteristic function (CF) derived from the measured wavefield as input for the TRM. The motivation for such a strategy is to avoid undesired contributions from secondary arrivals, which may generate artifacts in the final images. In this presentation, we introduce a new CF as input for the TRM method. To obtain this CF, we first apply kurtosis-based automatic onset detection and then convolve the result with a given wavelet. The convolution with low-frequency wavelets allows us to conduct time-reverse modeling using coarser sampling, which reduces computing time. We apply the method to locate seismic events measured along an active part of the Sumatra Fault around the Tarutung pull-apart basin (North Sumatra, Indonesia). The results show that the seismic events are well constrained, as they are concentrated along the Sumatran fault. Internal details of the Tarutung basin structure could be derived. Our results are consistent with those obtained from inversion of manually picked travel-time data.
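The CF construction, a kurtosis-based onset detector followed by convolution with a wavelet, can be sketched as below; the window length, wavelet, and synthetic onset are invented, and the real CF feeds a 3-D time-reverse propagation rather than a 1-D pick:

```python
import numpy as np

rng = np.random.default_rng(5)
n, onset = 4000, 2500
x = rng.normal(0.0, 1.0, n)
x[onset:] += 6.0 * rng.normal(0.0, 1.0, n - onset)   # energy jump at onset

def sliding_kurtosis(x, w):
    """Excess kurtosis over a trailing window of w samples."""
    out = np.zeros(x.size)
    for i in range(w, x.size):
        seg = x[i - w:i]
        m, s2 = seg.mean(), seg.var()
        out[i] = ((seg - m) ** 4).mean() / s2 ** 2 - 3.0
    return out

k = sliding_kurtosis(x, w=200)
# CF: positive jumps of the kurtosis, smoothed by convolution with a
# low-frequency wavelet so that coarser time sampling still resolves it
cf = np.maximum(np.diff(k, prepend=k[0]), 0.0)
wavelet = np.hanning(50)                             # assumed smoothing kernel
cf_smooth = np.convolve(cf, wavelet, mode="same")
pick = int(np.argmax(cf_smooth))                     # onset estimate (samples)
```

Because the smoothed CF is impulsive and band-limited, it back-propagates cleanly in TRM without the secondary-arrival artifacts the raw wavefield would produce.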

  20. Application of an Artificial Intelligence Method for Velocity Calibration and Events Location in Microseismic Monitoring

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Chen, X.

    2013-12-01

    Good-quality hydraulic fracture maps depend heavily on the best possible velocity structure. A Particle Swarm Optimization (PSO) inversion scheme, an artificial-intelligence technique for velocity calibration and event location, could serve as a viable option able to produce high-quality results. Using perforation data to recalibrate the 1-D isotropic velocity model derived from dipole sonic logs (or even without them), we obtain the initial velocity model used for subsequent event location. Velocity parameters, as well as layer thicknesses, can be inverted through an iterative procedure. Performing inversion without integrating all available data is unlikely to produce reliable results, especially if there is only one perforation shot and a single array with poor layer coverage recording low signal-to-noise data. The inversion method was validated via simulations and compared to the Fast Simulated Annealing approach and the Conjugate Gradient method. Further velocity-model refinement can be accomplished by recalculating event locations during the iterative procedure, minimizing the residuals from both sides. This artificial-intelligence technique also shows promise for the joint inversion of large-scale seismicity data.
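A global-best PSO for calibrating a two-layer model from one perforation shot can be sketched as follows; the geometry, vertical-ray forward model, swarm parameters, and bounds are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

receivers_z = np.array([1.0, 1.4, 1.8, 2.2])   # receiver depths, km
shot_z = 2.5                                   # perforation depth, km
true_model = np.array([3.0, 4.5, 2.0])         # v1, v2, interface depth

def travel_time(model, rz):
    """Vertical ray from the shot up through at most two layers."""
    v1, v2, zi = model
    d2 = np.clip(shot_z - np.maximum(rz, zi), 0.0, None)  # path in layer 2
    d1 = (shot_z - rz) - d2                               # path in layer 1
    return d1 / v1 + d2 / v2

t_obs = travel_time(true_model, receivers_z)   # "perforation shot" picks

def misfit(model):
    return np.sum((travel_time(model, receivers_z) - t_obs) ** 2)

# Plain global-best particle swarm within box bounds
lo = np.array([2.0, 3.5, 1.5])
hi = np.array([4.0, 5.5, 2.4])
n_p = 40
pos = rng.uniform(lo, hi, (n_p, 3))
vel = np.zeros((n_p, 3))
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(300):
    r1, r2 = rng.uniform(size=(2, n_p, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
```

Each particle is a candidate (v1, v2, interface-depth) triple, so the layer thickness enters the search vector exactly as the abstract describes; real implementations replace the vertical-ray toy with bending or eikonal ray tracing.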

  1. Quantifying uncertainties in location and source mechanism for Long-Period events at Mt Etna, Italy.

    NASA Astrophysics Data System (ADS)

    Cauchie, Léna; Saccorotti, Gilberto; Bean, Christopher

    2014-05-01

    The manifestation of Long-Period (LP) events is documented at many volcanoes worldwide. However, the mechanism at their origin is still a subject of discussion. Models proposed so far involve (i) the resonance of fluid-filled cracks or conduits triggered by fluid instabilities or the brittle failure of highly viscous magmas, and (ii) slow-rupture earthquakes in the shallow portion of volcanic edifices. Since LP activity usually precedes and accompanies volcanic eruptions, understanding these sources is important for hazard assessment and eruption early warning. This work is thus primarily aimed at assessing the uncertainties in the determination of LP source properties that arise from poor knowledge of the velocity structure and from location errors. We used data from temporary networks deployed on Mt Etna in 2005. During August 2005, about 13,000 LP events were detected through an STA/LTA approach and classified into two families on the basis of waveform similarity. For each family of events, we located the source using three different approaches: (1) a single-station location method based on the back-propagation of the polarization vector estimated from covariance analysis of three-component signals; (2) multichannel analysis of data recorded by two seismic arrays; and (3) relative locations based on inversion of differential times obtained through cross-correlation of similar waveforms. For all three methods, the solutions are very sensitive to the chosen velocity model. We thus iterated the location procedure for different medium properties; the preferred velocity is that for which the results obtained with the three methods are consistent with each other. For each family, we then defined a volume of possible source locations and performed a full-waveform moment tensor (MT) inversion for the entire catalog of events. In this manner, we obtained a MT solution for each grid node of the investigated volume. The MT
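Approach (3) above, differential times from cross-correlation of similar waveforms, can be sketched as below; the synthetic "LP" wavelet, delay, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(8)
fs, n = 100.0, 1000
# One repeating LP waveform recorded twice with a 0.07 s relative delay
w = np.convolve(rng.normal(0.0, 1.0, 300), np.hanning(40), mode="full")
shift = 7                                       # samples = 0.07 s
tr1 = np.zeros(n)
tr1[200:200 + w.size] = w
tr2 = np.zeros(n)
tr2[200 + shift:200 + shift + w.size] = w
tr1 += rng.normal(0.0, 0.05, n)                 # independent recording noise
tr2 += rng.normal(0.0, 0.05, n)

# Differential time = lag of the cross-correlation maximum
cc = np.correlate(tr2, tr1, mode="full")
lag = int(np.argmax(cc)) - (n - 1)
dt = lag / fs                                   # seconds
```

Sub-sample precision (via interpolation of the correlation peak) and many station pairs turn these `dt` values into the differential-time system that is inverted for relative hypocenters.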

  2. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T.; Gibbons, S.J.; Ringdal, F.; Harris, D.B.

    2007-01-30

    In the field of nuclear explosion monitoring, it has become a priority to detect, locate, and identify seismic events down to increasingly small magnitudes. The consideration of smaller seismic events has implications for a reliable monitoring regime. Firstly, the number of events to be considered increases greatly; an exponential increase in naturally occurring seismicity is compounded by large numbers of seismic signals generated by human activity. Secondly, the signals from smaller events become more difficult to detect above the background noise and estimates of parameters required for locating the events may be subject to greater errors. Thirdly, events are likely to be observed by a far smaller number of seismic stations, and the reliability of event detection and location using a very limited set of observations needs to be quantified. For many key seismic stations, detection lists may be dominated by signals from routine industrial explosions which should be ascribed, automatically and with a high level of confidence, to known sources. This means that expensive analyst time is not spent locating routine events from repeating seismic sources and that events from unknown sources, which could be of concern in an explosion monitoring context, are more easily identified and can be examined with due care. We have obtained extensive lists of confirmed seismic events from mining and other artificial sources which have provided an excellent opportunity to assess the quality of existing fully-automatic event bulletins and to guide the development of new techniques for online seismic processing. Comparing the times and locations of confirmed events from sources in Fennoscandia and NW Russia with the corresponding time and location estimates reported in existing automatic bulletins has revealed substantial mislocation errors which preclude a confident association of detected signals with known industrial sources. The causes of the errors are well understood and are

  3. Coronal Waves and Solar Energetic Particle Events Observed at Widely Separate Locations

    NASA Astrophysics Data System (ADS)

    Nitta, N.; Jian, L.; Gomez-Herrero, R.

    2015-12-01

    During solar cycle 24, thanks largely to the Solar Terrestrial Relations Observatory (STEREO), many solar energetic particle (SEP) events have been observed at widely separate locations in the heliosphere, even including impulsive events that are usually assumed to reflect localized acceleration and injection. It is found that many of these wide SEP events accompany coronal waves that typically appear in extreme-ultraviolet (EUV) images. The EUV wave phenomenon has been observed much more closely than before by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory that continuously produces full-disk EUV images with unprecedentedly fast cadence and high sensitivity in multiple wavelength bands covering a broad temperature range. This is complemented by the EUV Imager on STEREO that traces the wave front into regions inaccessible from Earth. Several authors have attempted to explain wide SEP events in terms of EUV waves, especially comparing the SEP release times with how and when the EUV wave fronts traverse the magnetic footprints of the locations of SEPs. They have come to mixed results. The primary reason for the mixed results may be that they tend to overlook or underestimate the uncertainties inherent in the works. For example, how well do we model magnetic field connection in the corona and heliosphere? Do we adequately take into account the evolving solar wind conditions? Here we study a number of SEP events with various angular spreads in comparison with newly analyzed EUV waves. We discuss the importance of including the above-mentioned uncertainties as well as understanding EUV waves as part of the 3d propagation of CME-driven shock waves into the coronagraph fields of view. Without these approaches, it may remain ambiguous how much of the angular spread of SEP events is attributable to coronal shock waves.

  4. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows to take decisions before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications, or the time response of medical services. This paper presents a study about the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade off between conservative but robust predictive models, with respect to less accurate predictions with higher horizons. The obtained results show a prediction horizon close to 40min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  5. Children who experienced a repeated event only appear less accurate in a second interview than those who experienced a unique event.

    PubMed

    Price, Heather L; Connolly, Deborah A; Gordon, Heidi M

    2016-08-01

    When children have experienced a repeated event, reports of experienced details may be inconsistently reported across multiple interviews. In 3 experiments, we explored consistency of children's reports of an instance of a repeated event after a long delay (Exp. 1, N = 53, Mage = 7.95 years; Exp. 2, N = 70, Mage = 5.77 years, Exp. 3, N = 59, Mage = 4.88 years). In all experiments, children either experienced 1 or 4 activity sessions, followed at a relatively short delay (days or weeks) by an initial memory test. Then, following a longer delay (4 months or 1 year), children were reinterviewed with the same memory questions. We analyzed the consistency of children's memory reports across the 2 interviews, as well as forgetting, reminiscence, and accuracy, defined with both narrow and broad criteria. A highly consistent pattern was observed across the 3 experiments with children who experienced a single event appearing more consistent than children who experienced a repeated event. We conclude that inconsistencies across multiple interviews can be expected from children who have experienced repeated events and these inconsistencies are often reflective of accurate, but different, recall. (PsycINFO Database Record PMID:27149287

  6. Event Detection and Location of Earthquakes Using the Cascadia Initiative Dataset

    NASA Astrophysics Data System (ADS)

    Morton, E.; Bilek, S. L.; Rowe, C. A.

    2015-12-01

    The Cascadia subduction zone (CSZ) produces a range of slip behavior along the plate boundary megathrust, from great earthquakes to episodic slow slip and tremor (ETS). Unlike other subduction zones that produce great earthquakes and ETS, the CSZ is notable for the lack of small and moderate magnitude earthquakes recorded. The seismogenic zone extent is currently estimated to be primarily offshore, thus the lack of observed small, interplate earthquakes may be partially due to the use of only land seismometers. The Cascadia Initiative (CI) community seismic experiment seeks to address this issue by including ocean bottom seismometers (OBS) deployed directly over the locked seismogenic zone, in addition to land seismometers. We use these seismic data to explore whether small magnitude earthquakes are occurring on the plate interface, but have gone undetected by the land-based seismic networks. We select a subset of small magnitude (M0.1-3.7) earthquakes from existing earthquake catalogs, based on land seismic data, whose preliminary hypocentral locations suggest they may have occurred on the plate interface. We window the waveforms on CI OBS and land seismometers around the phase arrival times for these earthquakes to generate templates for subspace detection, which allows for additional flexibility over traditional matched filter detection methods. Here we present event detections from the first year of CI deployment and preliminary locations for the detected events. Initial results of scanning the first year of the CI deployment using one cluster of template events, located near a previously identified subducted seamount, include 473 detections on OBS station M08A (~61.6 km offshore) and 710 detections on OBS station J25A (~44.8 km northeast of M08A). Ongoing efforts include detection using additional OBS stations along the margin, as well as determining locations of clusters detected in the first year of deployment.

  7. a Topic Modeling Based Representation to Detect Tweet Locations. Example of the Event "je Suis Charlie"

    NASA Astrophysics Data System (ADS)

    Morchid, M.; Josselin, D.; Portilla, Y.; Dufour, R.; Altman, E.; Linarès, G.

    2015-09-01

    Social Networks became a major actor in information propagation. Using the Twitter popular platform, mobile users post or relay messages from different locations. The tweet content, meaning and location, show how an event-such as the bursty one "JeSuisCharlie", happened in France in January 2015, is comprehended in different countries. This research aims at clustering the tweets according to the co-occurrence of their terms, including the country, and forecasting the probable country of a non-located tweet, knowing its content. First, we present the process of collecting a large quantity of data from the Twitter website. We finally have a set of 2,189 located tweets about "Charlie", from the 7th to the 14th of January. We describe an original method adapted from the Author-Topic (AT) model based on the Latent Dirichlet Allocation (LDA) method. We define an homogeneous space containing both lexical content (words) and spatial information (country). During a training process on a part of the sample, we provide a set of clusters (topics) based on statistical relations between lexical and spatial terms. During a clustering task, we evaluate the method effectiveness on the rest of the sample that reaches up to 95% of good assignment. It shows that our model is pertinent to foresee tweet location after a learning process.

  8. Tectonic tremor locations along the western Mexico subduction zone using stacked waveforms of similar events

    NASA Astrophysics Data System (ADS)

    Schlanser, K. M.; Brudzinski, M. R.; Holtkamp, S. G.; Shelly, D. R.

    2011-12-01

    Tectonic (non-volcanic) tremor is difficult to locate due to its emergent nature, but critical to assess what impact it has on the plate interface slip budget. Tectonic tremor has been observed in Jalisco, Colima, and Michoacán regions of southern Mexico using the MARS seismic network. A semi-automated approach in which analyst-refined relative arrival times are inverted for source locations using a 1-D velocity model has previously produced hundreds of source locations. The results found tectonic tremor shift from near the 50 km contour to the 20 km contour going from east to west, with the latter epicenters hugging the coastline. There is little room between the tectonic tremor and the seismogenic zone for a wide intervening slow slip region like what is seen in other region of the Mexican subduction zone, suggesting a potentially different source process than tremor in other regions. This study seeks to refine the tremor source locations by stacking families of similar events to enhance the signal to noise ratio and bring out clear P- and S-wave arrivals even for low amplitude sources at noisier stations. Well-defined tremor bursts within the Jalisco, Colima, and Michoacán region from previous results are being used to define 6 s template waveforms that are matched to similar waveforms through cross-correlation over the entire duration of recording. After stacking the similar events, the clarified arrival times will be used to refine the source locations. Particular attention will be paid to whether the tremor families form a dipping linear feature consistent with the plate interface and if tremor associated with the Rivera plate is as shallow (~20km) as it appears from previous results.

  9. Locating narrow bipolar events with single-station measurement of low-frequency magnetic fields

    NASA Astrophysics Data System (ADS)

    Zhang, Hongbo; Lu, Gaopeng; Qie, Xiushu; Jiang, Rubin; Fan, Yanfeng; Tian, Ye; Sun, Zhuling; Liu, Mingyuan; Wang, Zhichao; Liu, Dongxia; Feng, Guili

    2016-06-01

    We developed a method to locate the narrow bipolar events (NBEs) based on the single-station measurement of low-frequency (LF, 40-500 kHz) magnetic fields. The direction finding of a two-axis magnetic sensor provides the azimuth of NBEs relative to the measurement site; the ionospheric reflection pairs in the lightning sferics are used to determine the range and height. We applied this method to determine the three-dimensional (3D) locations of 1475 NBEs with magnetic signals recorded during the SHandong Artificially Triggered Lightning Experiment (SHATLE) in summer of 2013. The NBE detections are evaluated on a storm basis by comparing with radar observations of reflectivity and lightning data from the World Wide Lightning Location Network (WWLLN) for two mesoscale convective systems (MCSs) of different sizes. As revealed by previous studies, NBEs are predominately produced in the convective regions with relatively strong radar echo (with composite reflectivity ≥30 dBZ), although not all the convections with high reflectivity and active lightning production are in favor of NBE production. The NBEs located by the single-station magnetic method also exhibit the distinct segregation in altitude for positive and negative NBEs, namely positive NBEs are mainly produced between 7 km and 15 km, while negative NBEs are predominantly produced above 14 km. In summary, the results of comparison generally show that the single-station magnetic method can locate NBEs with good reliability, although the accuracy of 3D location remains to be evaluated with the traditional multi-station method based on the time-of-arrival technique. This method can be applied to track the motion of storm convection within 800 km, especially when they move out to ocean beyond the detection range (typically <400 km) of meteorological radars, making it possible to study NBEs in oceanic thunderstorms for which the location with multiple ground-based stations is usually not feasible.

  10. A data-based model to locate mass movements triggered by seismic events in Sichuan, China.

    PubMed

    de Souza, Fabio Teodoro

    2014-01-01

    Earthquakes affect the entire world and have catastrophic consequences. On May 12, 2008, an earthquake of magnitude 7.9 on the Richter scale occurred in the Wenchuan area of Sichuan province in China. This event, together with subsequent aftershocks, caused many avalanches, landslides, debris flows, collapses, and quake lakes and induced numerous unstable slopes. This work proposes a methodology that uses a data mining approach and geographic information systems to predict these mass movements based on their association with the main and aftershock epicenters, geologic faults, riverbeds, and topography. A dataset comprising 3,883 mass movements is analyzed, and some models to predict the location of these mass movements are developed. These predictive models could be used by the Chinese authorities as an important tool for identifying risk areas and rescuing survivors during similar events in the future. PMID:24085622

  11. On the violation of causal, emotional, and locative inferences: An event-related potentials study.

    PubMed

    Rodríguez-Gómez, Pablo; Sánchez-Carmona, Alberto; Smith, Cybelle; Pozo, Miguel A; Hinojosa, José A; Moreno, Eva M

    2016-07-01

    Previous event-related potential studies have demonstrated the online generation of inferences during reading for comprehension tasks. The present study contrasted the brainwave patterns of activity to the fulfilment or violation of various types of inferences (causal, emotional, locative). Relative to inference congruent sentence endings, a typical centro-parietal N400 was elicited for the violation of causal and locative inferences. This N400 effect was initially absent for emotional inferences, most likely due to their lower cloze probability. Between 500 and 750ms, a larger frontal positivity (pN400FP) was elicited by inference incongruent sentence endings in the causal condition. In emotional sentences, both inference congruent and incongruent endings exerted this frontally distributed late positivity. For the violation of locative inferences, the larger positivity was only marginally significant over left posterior scalp locations. Thus, not all inference eliciting sentences evoked a similar pattern of ERP responses. We interpret and discuss our results in line with recent views on what the N400, the P600 and the pN400FP brainwave potentials index. PMID:27150706

  12. Nuclear event time histories and computed site transfer functions for locations in the Los Angeles region

    USGS Publications Warehouse

    Rogers, A.M.; Covington, P.A.; Park, R.B.; Borcherdt, R.D.; Perkins, D.M.

    1980-01-01

    This report presents a collection of Nevada Test Site (NTS) nuclear explosion recordings obtained at sites in the greater Los Angeles, Calif., region. The report includes ground velocity time histories, as well as, derived site transfer functions. These data have been collected as part of a study to evaluate the validity of using low-level ground motions to predict the frequency-dependent response of a site during an earthquake. For this study 19 nuclear events were recorded at 98 separate locations. Some of these sites have recorded more than one of the nuclear explosions, and, consequently, there are a total of 159, three-component station records. The location of all the recording sites are shown in figures 1–5, the station coordinates and abbreviations are given in table 1. The station addresses are listed in table 2, and the nuclear explosions that were recorded are listed in table 3. The recording sites were chosen on the basis of three criteria: (1) that the underlying geological conditions were representative of conditions over significant areas of the region, (2) that the site was the location of a strong-motion recording of the 1971 San Fernando earthquake, or (3) that more complete geographical coverage was required in that location.

  13. Estimating observing locations for advancing beyond the winter predictability barrier of Indian Ocean dipole event predictions

    NASA Astrophysics Data System (ADS)

    Feng, Rong; Duan, Wansuo; Mu, Mu

    2016-04-01

    In this paper, we explored potential observing locations (i.e., the sensitive areas) of positive Indian Ocean dipole (IOD) events to advance beyond the winter predictability barrier (WPB) using the geophysical fluid dynamics laboratory climate model version 2p1 (GFDL CM2p1). The sensitivity analysis is conducted through perfect model predictability experiments, in which the model is assumed to be perfect and so any prediction errors are caused by initial errors. The results show that the initial errors with an east-west dipole pattern are more likely to result in a significant WPB than spatially correlated noises; the areas where the large values of the dipole pattern initial errors are located have great effects on prediction uncertainties in winter and provide useful information regarding the sensitive areas. Further, the prediction uncertainties in winter are more sensitive to the initial errors in the subsurface large value areas than to those in the surface large value areas. The results indicate that the subsurface large value areas are sensitive areas for advancing beyond the WPB of IOD predictions and if we carry out intensive observations across these areas, the prediction errors in winter may be largely reduced. This will lead to large improvements in the skill of wintertime IOD event forecasts.

  14. Simultaneous Determination of Structure and Event Location Using Body and Surface Wave Measurements at a Single Station: Preparation for Mars Data from the InSight Mission

    NASA Astrophysics Data System (ADS)

    Panning, M. P.; Banerdt, W. B.; Beucler, E.; Blanchette-Guertin, J. F.; Boese, M.; Clinton, J. F.; Drilleau, M.; James, S. R.; Kawamura, T.; Khan, A.; Lognonne, P. H.; Mocquet, A.; van Driel, M.

    2015-12-01

    An important challenge for the upcoming InSight mission to Mars, which will deliver a broadband seismic station to Mars along with other geophysical instruments in 2016, is to accurately determine event locations with the use of a single station. Locations are critical for the primary objective of the mission, determining the internal structure of Mars, as well as a secondary objective of measuring the activity of distribution of seismic events. As part of the mission planning process, a variety of techniques have been explored for location of marsquakes and inversion of structure, and preliminary procedures and software are already under development as part of the InSight Mars Quake and Mars Structure Services. One proposed method, involving the use of recordings of multiple-orbit surface waves, has already been tested with synthetic data and Earth recordings. This method has the strength of not requiring an a priori velocity model of Mars for quake location, but will only be practical for larger events. For smaller events where only first orbit surface waves and body waves are observable, other methods are required. In this study, we implement a transdimensional Bayesian inversion approach to simultaneously invert for basic velocity structure and location parameters (epicentral distance and origin time) using only measurements of body wave arrival times and dispersion of first orbit surface waves. The method is tested with synthetic data with expected Mars noise and Earth data for single events and groups of events and evaluated for errors in both location and structural determination, as well as tradeoffs between resolvable parameters and the effect of 3D crustal variations.

  15. An Event-related Potential Study on the Interaction between Lighting Level and Stimulus Spatial Location

    PubMed Central

    Carretié, Luis; Ruiz-Padial, Elisabeth; Mendoza, María T.

    2015-01-01

    Due to heterogeneous photoreceptor distribution, spatial location of stimulation is crucial to study visual brain activity in different light environments. This unexplored issue was studied through occipital event-related potentials (ERPs) recorded from 40 participants in response to discrete visual stimuli presented at different locations and in two environmental light conditions, low mesopic (L, 0.03 lux) and high mesopic (H, 6.5 lux), characterized by a differential photoreceptor activity balance: rod > cone and rod < cone, respectively. Stimuli, which were exactly the same in L and H, consisted of squares presented at fixation, at the vertical periphery (above or below fixation) or at the horizontal periphery (left or right). Analyses showed that occipital ERPs presented important L vs. H differences in the 100 to 450 ms window, which were significantly modulated by spatial location of stimulation: differences were greater in response to peripheral stimuli than to stimuli presented at fixation. Moreover, in the former case, significance of L vs. H differences was even stronger in response to stimuli presented at the horizontal than at the vertical periphery. These low vs. high mesopic differences may be explained by photoreceptor activation and their retinal distribution, and confirm that ERPs discriminate between rod– and cone-originated visual processing. PMID:26635588

  16. Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain

    NASA Astrophysics Data System (ADS)

    Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.

    Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them with a mean time of 60.0 seconds per image, mean angular variation within a same image (precision) of 1.26o and mean angular difference from specialist delineations (accuracy) of 1.64o.

  17. Location of seismic events and eruptive fissures on the Piton de la Fournaise volcano using seismic amplitudes

    USGS Publications Warehouse

    Battaglia, J.; Aki, K.

    2003-01-01

    We present a method for locating the source of seismic events on Piton de la Fournaise. The method is based on seismic amplitudes corrected for station site effects using coda site amplification factors. Once corrected, the spatial distribution of amplitudes shows smooth and simple contours for many types of events, including rockfalls, long-period events and eruption tremor. On the basis of the simplicity of these distributions we develop inversion methods for locating their origins. To achieve this, the decrease of the amplitude as a function of the distance to the source is approximated by the decay either of surface or body waves in a homogeneous medium. The method is effective for locating rockfalls, long-period events, and eruption tremor sources. The sources of eruption tremor are usually found to be located at shallow depth and close to the eruptive fissures. Because of this, our method is a useful tool for locating fissures at the beginning of eruptions.

  18. Location of multi-phase volcanic events from a temporary dense seismic array at White Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Jolly, Arthur; Lokmer, Ivan; Thun, Johannes; Salichon, Jerome; Fournier, Nico; Fry, Bill

    2016-04-01

    The August 2012 to October 2013 White Island eruption sequence included an increase in gas flux and RSAM seismic tremor beginning in late 2011. Prior to this unrest, a small swarm of 25 events was observed on 19-21 August 2011. The events were captured on a temporary dense seismic array including 12 broadband sensors that were deployed between June and November 2011. Each event comprised coupled earthquakes having distinct high frequency (HF = >1 s), long-period (LP = 2-4 s) and very long period (VLP = 10-30 s) pulses. For each coupled HF, LP and VLP event, we compute the source locations, origin times and related uncertainties by application of standard arrival time locations for the HF events and waveform back-projection for the LP and VLP events. Preliminary results suggest that the events are centred beneath active vent at depths generally less than 2 km. The HF earthquakes have diffuse locations (<2 km), while LP events are constrained to generally shallower source depths (< 1km) and VLP events have slightly deeper source locations (1 to 2 km). The arrival-time locations have been constrained using a realistic shallow velocity model while the waveform back-projection locations have been constrained by thorough synthetic testing. Emergent onsets for LP and VLP sources make an analysis of the absolute origin times problematic but waveform matching of VLP to LP components suggests relative time variations of less than a second or two. We will discuss the location and relative timing for the three event types in context with possible hydrothermal and magmatic processes at White Island volcano.

  19. Alignment of leading-edge and peak-picking time of arrival methods to obtain accurate source locations

    SciTech Connect

    Roussel-Dupre, R.; Symbalisty, E.; Fox, C.; and Vanderlinde, O.

    2009-08-01

    The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge, time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but the final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).

  20. Applications of Location Similarity Measures and Conceptual Spaces to Event Coreference and Classification

    ERIC Educational Resources Information Center

    McConky, Katie Theresa

    2013-01-01

    This work covers topics in event coreference and event classification from spoken conversation. Event coreference is the process of identifying descriptions of the same event across sentences, documents, or structured databases. Existing event coreference work focuses on sentence similarity models or feature based similarity models requiring slot…

  1. Acoustic monitoring of laboratory faults: locating the origin of unstable slip events

    NASA Astrophysics Data System (ADS)

    Korkolis, Evangelos; Niemeijer, André; Spiers, Christopher

    2015-04-01

    Over the past several decades, much work has been done on studying the frictional properties of fault gouges at earthquake nucleation velocities. In addition, post-experiment microstructural analyses have been performed in an attempt to link microphysical mechanisms to the observed mechanical data. However, all observations are necessarily post-mortem and it is thus difficult to directly link transients to microstructural characteristics. We are developing an acoustic monitoring system to be used in sliding experiments using a ring shear apparatus. The goal is to locate acoustic emission sources in sheared granular assemblages and link them to processes that act on microstructures responsible for the frictional stability of the simulated fault gouge. The results will be used to develop and constrain microphysical models that explain the relation of these processes to empirical friction laws, such as rate- and state-dependent friction. The acoustic monitoring setup is comprised of an array of 16 piezo-electric sensors installed on the top and bottom sides of an annular sample, at 45 degree intervals. Acoustic emissions associated with slip events can be recorded at sampling rates of up to 50 MHz, in triggered mode. Initial experiments on 0.1 to 0.2 mm and 0.4 to 0.5 mm diameter glass beads, at 1 to 5 MPa normal stress and 1 to 30 um/s load point velocity, have been conducted to estimate the sensitivity of the sensor array. Preliminary results reveal that the intensity of the audible signal is not necessarily proportional to the magnitude of the associated stress drop for constant loading conditions, and that acoustic emissions precede slip events by a small amount of time, in the order of a few milliseconds. Currently, our efforts are focused on developing a suitable source location algorithm with the aim to identify differences in the mode of (unstable) sliding for different types of materials. This will help to identify the micromechanical mechanisms operating

  2. Swift Gamma-Ray Burst Explorer: Mission Design for Rapid, Accurate Location of Gamma-ray Bursts

    NASA Technical Reports Server (NTRS)

    Bundas, David J.

    2004-01-01

    The Swift Gamma-ray Burst Explorer is a NASA Mid-sized Explorer (MIDEX) with the primary mission of determining the origins of Gamma-Ray Bursts (GRBs). It will be the first mission to autonomously respond to newly-discovered GRBs and provide immediate follow-up narrow field instruments capable of multi-wavelength (UV, Optical, X-ray) observations. The characteristics of GRBs that are the key mission design drivers, are their non-repeating and brief duration bursts of multi-wavelength photons. In addition, rapid notification of the location and characteristics of the GRBs to ground-and-space- based observatories drive the end-to-end data analysis and distribution requirements. The Swift mission is managed by the GSFC, and includes an international team of contributors that each bring their unique perspective that have proven invaluable to the mission. The spacecraft bus, provided by Spectrum Astro, Inc. was procured through a Rapid Spacecraft Development Office (RSDO) contract by the GSFC. There are three instruments: the Burst Alert Telescope (BAT) provided by the GSFC; the X-Ray Telescope (XRT) provided by a team led by the Pennsylvania State University (PSU); and the Ultra-Violet Optical Telescope (UVOT), again managed by PSU. The Mission Operations Center (MOC) was developed by and is located at PSU. Science archiving and data analysis centers are located at the GSFC, in the UK and in Italy.

  3. Location of EMIC Wave Events Relative to the Plasmapause: Van Allen Probes Observations

    NASA Astrophysics Data System (ADS)

    Tetrick, S.; Engebretson, M. J.; Posch, J. L.; Kletzing, C.; Smith, C. W.; Wygant, J. R.; Gkioulidou, M.; Reeves, G. D.; Fennell, J. F.

    2015-12-01

    Many early theoretical studies of electromagnetic ion cyclotron (EMIC) waves generated in Earth's magnetosphere predicted that the equatorial plasmapause (PP) would be a preferred location for their generation. However, several large statistical studies in the past two decades, most notably Fraser and Nguyen [2001], have provided little support for this location. In this study we present a survey of the most intense EMIC waves observed by the EMFISIS fluxgate magnetometer on the Van Allen Probes-A spacecraft (with apogee at 5.9 RE) from its launch through the end of 2014, and have compared their location with simultaneous electron density data obtained by the EFW electric field instrument and ring current ion flux data obtained by the HOPE and RBSPICE instruments. We show distributions of these waves as a function of distance inside or outside the PP as a function of local time sector, frequency band (H+, He+, or both), and timing relative to magnetic storms and substorms. Most EMIC waves in this data set occurred within 1 RE of the PP in all local time sectors, but very few were limited to ± 0.1 RE, and most of these occurred in the 06-12 MLT sector during non-storm conditions. The majority of storm main phase waves in the dusk sector occurred inside the PP. He+ band waves dominated at most local times inside the PP, and H+ band waves were never observed there. Although the presence of elevated fluxes of ring current protons was common to all events, the configuration of lower energy ion populations varied as a function of geomagnetic activity and storm phase.

  4. Helicopter Based Magnetic Detection Of Wells At The Teapot Dome (Naval Petroleum Reserve No. 3) Oilfield: Rapid And Accurate Geophysical Algorithms For Locating Wells

    NASA Astrophysics Data System (ADS)

    Harbert, W.; Hammack, R.; Veloski, G.; Hodge, G.

    2011-12-01

    In this study, airborne magnetic data were collected by Fugro Airborne Surveys from a helicopter platform (Figure 1) using the Midas II system over the 39 km2 NPR3 (Naval Petroleum Reserve No. 3) oilfield in east-central Wyoming. The Midas II system employs two Scintrex CS-2 cesium vapor magnetometers on opposite ends of a transversely mounted, 13.4-m long horizontal boom located amidships (Fig. 1). Each magnetic sensor had an in-flight sensitivity of 0.01 nT. Real time compensation of the magnetic data for magnetic noise induced by maneuvering of the aircraft was accomplished using two fluxgate magnetometers mounted just inboard of the cesium sensors. The total area surveyed was 40.5 km2 (NPR3) near Casper, Wyoming. The purpose of the survey was to accurately locate wells that had been drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood to enhance oil recovery, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells that are missing from the well database and to provide accurate locations for all wells. The well-location method combined an input dataset (for example, the leveled total magnetic field reduced to the pole) with first and second horizontal spatial derivatives of this input dataset, which were then analyzed using focal statistics and finally combined using a fuzzy combination operation. The analytic signal and the Shi and Butt (2004) ZS attribute were also analyzed using this algorithm. A parameter could be adjusted to determine sensitivity. Depending on the input dataset, 88% to 100% of the wells were located, with typical values being 95% to 99% for the NPR3 field site.
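
    The layered workflow in the abstract (input grid, horizontal derivatives, fuzzy combination with an adjustable sensitivity) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the min-max membership scaling, the fuzzy-gamma operator and the `gamma` value are assumptions, and the focal-statistics step is omitted for brevity.

```python
import numpy as np

def well_score(tmi, cell=1.0, gamma=0.9):
    """Score grid cells for likely well casings in a total-magnetic-
    intensity (TMI) grid, loosely following the abstract's workflow."""
    # First horizontal derivative: magnitude of the horizontal gradient.
    gy, gx = np.gradient(tmi, cell)
    d1 = np.hypot(gx, gy)
    # Second horizontal derivative: gradient magnitude of d1.
    gyy, gxx = np.gradient(d1, cell)
    d2 = np.hypot(gxx, gyy)

    def member(a):
        # Min-max rescaling to a [0, 1] fuzzy membership.
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)

    m1, m2 = member(d1), member(d2)
    # Fuzzy-gamma combination: a compromise between fuzzy sum and product.
    fuzzy_sum = 1.0 - (1.0 - m1) * (1.0 - m2)
    fuzzy_prod = m1 * m2
    return fuzzy_sum**gamma * fuzzy_prod**(1.0 - gamma)
```

    Thresholding the score, for example at a high percentile, yields candidate well locations; the threshold plays the role of the adjustable sensitivity parameter mentioned in the abstract.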

  5. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events.

    PubMed

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805

  6. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events

    PubMed Central

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B.; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805

  7. Places, Spaces and Memory Traces: Showing Students with Learning Disabilities Ways to Remember Locations and Events on Maps.

    ERIC Educational Resources Information Center

    Brigham, Frederick J.

    This study examined the memory-enhancing effects of elaborative and mnemonic encoding of information presented with maps, compared to more traditional, non-mnemonic maps, on recall of locations of events and information associated with those events by 72 middle school students with learning disabilities. Subjects were presented with map-like…

  8. Swift Gamma-ray Burst Explorer: Mission Design for Rapid, Accurate Location of Gamma-ray Bursts

    NASA Technical Reports Server (NTRS)

    Bundas, David J.

    2005-01-01

    The Swift Gamma-ray Burst Explorer is a NASA Mid-sized Explorer (MIDEX) with the primary mission of determining the origins of Gamma-Ray Bursts (GRBs). It will be the first mission to autonomously respond to newly-discovered GRBs and provide immediate follow-up with narrow field instruments capable of multi-wavelength (UV, Optical, X-ray) observations. The characteristics of GRBs that are the key mission design drivers, are their non-repeating and brief duration bursts of multi-wavelength photons. In addition, rapid notification of the location and characteristics of the GRBs to ground-and-space-based observatories drive the end-to-end data analysis and distribution requirements.

  9. Swift Gamma-Ray Burst Explorer: Mission Design for Rapid, Accurate Location of Gamma-ray Bursts

    NASA Technical Reports Server (NTRS)

    Bundas, David J.

    2004-01-01

    The Swift Gamma-ray Burst Explorer is a NASA Mid-sized Explorer (MIDEX) with the primary mission of determining the origins of Gamma-Ray Bursts (GRBs). It will be the first mission to autonomously respond to newly-discovered GRBs and provide immediate follow-up with narrow field instruments capable of multi-wavelength (UV, Optical, X-ray) observations. The characteristics of GRBs that are the key mission design drivers, are their non-repeating and brief duration bursts of multi-wavelength photons. In addition, rapid notification of the location and characteristics of the GRBs to ground-and-space-based observatories drive the end-to-end data analysis and distribution requirements.

  10. Evolution and location of lightning events that cause terrestrial gamma ray flashes (TGFs)

    NASA Astrophysics Data System (ADS)

    Marshall, T.; Stolzenburg, M.; Karunarathne, S.; Lu, G.; Cummer, S. A.

    2012-12-01

    TGFs often occur during the initial breakdown (IB) stage of intracloud lightning flashes; in particular, they seem to be related to the bipolar IB pulses seen by E-change sensors. Each IB pulse usually has 1-3 fast (2-5 μs) unipolar pulses "superimposed on the initial half cycle" of a slower (~40 μs) bipolar pulse [Weidman and Krider, JGR 1979]. During the summer of 2011 we collected lightning E-change data at 10 sites covering an area of about 70 km × 100 km at the NASA/Kennedy Space Center (KSC). We also use ELF and LF data recorded at Duke University to identify lightning flashes in the KSC area that exhibit TGF-like characteristics. Using a time-of-arrival technique with our E-change data, we find preliminary indications that each successive unipolar pulse in an IB bipolar pulse is located a few hundred meters higher in altitude than the previous one. The E-change data during the initial half cycle of the bipolar pulse are consistent with upward propagation of a negative streamer. We will present these data and discuss ways in which the events might produce gamma rays.
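
    A time-of-arrival technique like the one named above can be reduced to a grid search over candidate source positions: at the correct location, every sensor implies the same origin time. The sketch below is generic and hypothetical; the sensor layout, candidate grid and free-space propagation speed are assumptions, not the study's parameters.

```python
import numpy as np

C = 3.0e8  # propagation speed of the field change (m/s), assumed free-space

def toa_locate(sites, t_obs, grid):
    """Grid-search time-of-arrival location.

    sites : (n, 3) sensor coordinates in metres
    t_obs : (n,) observed arrival times in seconds
    grid  : (m, 3) candidate source positions
    Returns the candidate minimizing the variance of implied origin times.
    """
    # Travel time from every candidate to every site.
    d = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)
    t0 = t_obs[None, :] - d / C      # implied origin time at each site
    misfit = t0.var(axis=1)          # a consistent source gives equal t0
    return grid[np.argmin(misfit)]
```

    With fast (microsecond-scale) pulses and a multi-site network, this kind of search can resolve the few-hundred-metre altitude steps described in the abstract.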

  11. Seismic monitoring of EGS tests at the Coso Geothermal area, California, using accurate MEQ locations and full moment tensors

    SciTech Connect

    Foulger, G.R.; Julian, B.R.; Monastero, F.

    2008-04-01

    We studied high-resolution relative locations and full moment tensors of microearthquakes (MEQs) occurring before, during and following Enhanced Geothermal Systems (EGS) experiments in two wells at the Coso geothermal area, California. The objective was to map new fractures, determine the mode and sense of failure, and characterize the stress cycle associated with injection. New software developed for this work combines waveform cross-correlation measurement of arrival times with relative relocation methods, and assesses confidence regions for moment tensors derived using linear-programming methods. For moment tensor determination we also developed a convenient Graphical User Interface (GUI) to streamline the work. We used data from the U.S. Navy’s permanent network of three-component digital borehole seismometers and from 14 portable three-component digital instruments. The latter supplemented the permanent network during injection experiments in well 34A-9 in 2004 and well 34-9RD2 in 2005. In the experiment in well 34A-9, the co-injection earthquakes were more numerous, smaller, more explosive and had more horizontal motion, compared with the pre-injection earthquakes. In the experiment in well 34-9RD2 the relocated hypocenters reveal a well-defined planar structure, 700 m long and 600 m high in the depth range 0.8 to 1.4 km below sea level, striking N 20° E and dipping at 75° to the WNW. The moment tensors show that it corresponds to a mode I (opening) crack. For both wells, the perturbed stress state near the bottom of the well persisted for at least two months following the injection.

  12. A New Regional 3-D Velocity Model of the India-Pakistan Region for Improved Event Location Accuracy

    NASA Astrophysics Data System (ADS)

    Reiter, D.; Vincent, C.; Johnson, M.

    2001-05-01

    A 3-D velocity model for the crust and upper mantle (WINPAK3D) has been developed to improve regional event location in the India-Pakistan region. Results of extensive testing demonstrate that the model improves location accuracy for this region, specifically for the case of small regionally recorded events, for which teleseismic data may not be available. The model was developed by integrating the results of more than sixty previous studies related to crustal velocity structure in the region. We evaluated the validity of the 3-D model using the following methods: (1) cross-validation analysis for a variety of events, (2) comparison of model-determined hypocenters with known event locations, and (3) comparison of model-derived and empirically derived source-specific station corrections (SSSC) generated for the International Monitoring System (IMS) auxiliary seismic station located at Nilore. The 3-D model provides significant improvement in regional location compared to both global and regional 1-D models in this area of complex structural variability. For example, the epicenter mislocation for an event with a well known location was only 6.4 km using the 3-D model, compared with a mislocation of 13.0 km using an average regional 1-D model and 15.1 km for the IASPEI91 model. We will present these and other results to demonstrate that 3-D velocity models are essential to improving event location accuracy in regions with complicated crustal geology and structures. Such 3-D models will be a prerequisite for achieving improved location accuracies for regions of high monitoring interest.

  13. Pn and Sn tomography across Eurasia to improve regional seismic event locations

    NASA Astrophysics Data System (ADS)

    Ritzwoller, Michael H.; Barmin, Mikhail P.; Villaseñor, Antonio; Levshin, Anatoli L.; Engdahl, E. Robert

    2002-11-01

    This paper has three motivations: first, to map Pn and Sn velocities beneath most of Eurasia to reveal information on a length scale relevant to regional tectonics; second, to test recently constructed 3-D mantle models; and, third, to develop and test a method to produce Pn and Sn travel time correction surfaces, which are the 3-D analogue of travel time curves for a 1-D model. Our third motive is inspired by the need to improve regional location capabilities in monitoring nuclear treaties such as the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To a groomed version of the ISC/NEIC data, we apply the tomographic method of Barmin et al. [Pure Appl. Geophys. (2001)], augmented to include station and event corrections and an epicentral distance correction. The Pn and Sn maps are estimated on a 1°×1° grid throughout Eurasia. We define the phases Pn and Sn as arriving between epicentral distances of 3° and 15°. After selection, the resulting data set consists of about 1,250,000 Pn and 420,000 Sn travel times distributed inhomogeneously across Eurasia. The rms misfit to the entire Eurasian data set from the Pn and Sn model increases nearly linearly with distance and averages about 1.6 s for Pn and 3.2 s for Sn, but is better for events that occurred on several nuclear test sites and for selected high-quality data subsets. The Pn and Sn maps compare favorably with recent 3-D models of P and S in the uppermost mantle and with recently compiled teleseismic station corrections across the region. The most intriguing features on the maps are the low-velocity anomalies that characterize most tectonically deformed regions, such as the anomaly across central and southern Asia and the Middle East that extends along a tortuous path from Turkey in the west to Lake Baikal in the east. These anomalies are related to the closing of the Neo-Tethys Ocean and the collision of India with Asia. The uppermost mantle beneath the Pacific Rim back-arc is also very slow, presumably due to

  14. A new method to estimate location and slip of simulated rock failure events

    NASA Astrophysics Data System (ADS)

    Heinze, Thomas; Galvan, Boris; Miller, Stephen Andrew

    2015-05-01

    At the laboratory scale, identifying and locating acoustic emissions (AEs) is a common method for short term prediction of failure in geomaterials. Above average AE typically precedes the failure process and is easily measured. At larger scales, increase in micro-seismic activity sometimes precedes large earthquakes (e.g. Tohoku, L'Aquila, oceanic transforms), and can be used to assess seismic risk. The goal of this work is to develop a methodology and numerical algorithms for extracting a measurable quantity analogous to AE arising from the solution of equations governing rock deformation. Since there is no physical property to quantify AE derivable from the governing equations, an appropriate rock-mechanical analog needs to be found. In this work, we identify a general behavior of the AE generation process preceding rock failure. This behavior includes arbitrary localization of low magnitude events during the pre-failure stage, followed by an increase in number and amplitude, and finally localization around the incipient failure plane during macroscopic failure. We propose deviatoric strain rate as the numerical analog that mimics this behavior, and develop two different algorithms designed to detect rapid increases in deviatoric strain using moving averages. The numerical model solves a fully poro-elasto-plastic continuum model and is coupled to a two-phase flow model. We test our model by comparing simulation results with experimental data of drained compression and of fluid injection experiments. We find for both cases that occurrence and amplitude of our AE analog mimic the observed general behavior of the AE generation process. Our technique can be extended to modeling at the field scale, possibly providing a mechanistic basis for seismic hazard assessment from seismicity that occasionally precedes large earthquakes.
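
    The abstract does not specify the moving-average trigger, so the sketch below uses a generic STA/LTA-style detector (ratio of a short to a long trailing mean) applied to a deviatoric strain-rate series; the window lengths and trigger ratio are illustrative values, not the authors'.

```python
import numpy as np

def detect_ae_analog(rate, short=5, long=50, ratio=3.0):
    """Flag samples where the short-window mean of the deviatoric strain
    rate exceeds `ratio` times the long-window mean, i.e. a rapid
    increase relative to the recent background."""
    def moving_mean(x, w):
        # Trailing moving average; the start is padded with the first value.
        kernel = np.ones(w) / w
        padded = np.pad(x, (w - 1, 0), mode="edge")
        return np.convolve(padded, kernel, mode="valid")

    sta = moving_mean(rate, short)
    lta = moving_mean(rate, long)
    return sta > ratio * np.maximum(lta, 1e-30)
```

    Applied to a simulation's strain-rate history, the boolean flags play the role of AE detections: isolated triggers during the pre-failure stage, then clustering in time (and, with spatial fields, around the incipient failure plane) as macroscopic failure approaches.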

  15. Locating Microseismic Events Using Fat-Ray Double-Difference Tomography for Monitoring CO2 Injection at the Aneth EOR Field

    NASA Astrophysics Data System (ADS)

    Chen, T.; Huang, L.; Rutledge, J. T.

    2014-12-01

    During CO2 injection, the increase in pore pressure and volume may change the stress distribution in the field, and induce microseismic events as brittle failure on small faults or fractures. Accurate locations of these induced microseismic events can help understand the migration of CO2 and stress evolution in the reservoir. A geophone string spanning 800-1700 m in depth was cemented into a monitoring well at the Aneth oil field in Utah in 2007 for monitoring CO2 injection for enhanced oil recovery (EOR). The monitoring continued until 2010. A total of 24 geophone levels recorded induced microseismic events, including 18 three-component levels and six vertical-component levels, spaced 106.7 m (350 ft) apart to take full advantage of the entire array aperture. We apply a fat-ray double-difference tomography method to microseismic data acquired at the Aneth EOR field. We obtain high-precision locations of microseismic events and improve the velocity structure simultaneously. We demonstrate the improvements by comparing our results with those obtained using the conventional double-difference tomography.
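
    At the core of any double-difference method, fat-ray or conventional, is the differential residual between event pairs observed at a common station; absolute path errors shared by nearby events cancel, which is what sharpens relative locations. A minimal sketch of that quantity, with synthetic times, is:

```python
import numpy as np

def dd_residuals(t_obs, t_pred):
    """Double-difference residuals for a set of events at shared stations.

    t_obs, t_pred : (n_events, n_stations) observed and predicted times.
    Returns dr[i, j, k] = (t_obs[i,k] - t_obs[j,k]) - (t_pred[i,k] - t_pred[j,k])
    for every event pair (i, j) at each station k; the inversion adjusts
    relative locations (and, in tomography, velocities) to shrink these.
    """
    diff = t_obs - t_pred                       # per-event residuals
    return diff[:, None, :] - diff[None, :, :]  # all pairwise differences
```

    The "fat-ray" variant differs in how `t_pred` and its sensitivity kernels are computed (finite-frequency volumes rather than infinitesimal rays), not in the double-difference measure itself.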

  16. GLAST Burst Monitor On-Board Triggering, Locations and Event Classification

    SciTech Connect

    Briggs, M. S.; Connaughton, V.; Paciesas, W.; Preece, R.; Meegan, C. A.; Fishman, G.; Kouveliotou, C.; Wilson-Hodge, C.; Diehl, R.; Greiner, J.; Kienlin, A. von; Lichti, G.; Steinle, H.; Kippen, R. M.

    2007-07-12

    We report on how the GLAST Burst Monitor (GBM) Flight Software will detect gamma-ray rate increases, a process known as 'triggering', and on how the Flight Software will locate and classify the causes of the triggers.

  17. The magnetic network location of explosive events observed in the solar transition region

    NASA Technical Reports Server (NTRS)

    Porter, J. G.; Dere, K. P.

    1991-01-01

    Compact short-lived explosive events have been observed in solar transition region lines with the High-Resolution Telescope and Spectrograph (HRTS) flown by the Naval Research Laboratory on a series of rockets and on Spacelab 2. Data from Spacelab 2 are coaligned with a simultaneous magnetogram and near-simultaneous He I 10830 Å spectroheliogram obtained at the National Solar Observatory at Kitt Peak. The comparison shows that the explosive events occur in the solar magnetic network lanes at the boundaries of supergranular convective cells. However, the events occur away from the larger concentrations of magnetic flux in the network, in contradiction to the observed tendency of the more energetic solar phenomena to be associated with the stronger magnetic fields.

  18. Automatic picker of P & S first arrivals and robust event locator

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Polozov, A.; Hofstetter, A.

    2003-12-01

    We report on the further development of an automatic all-distances location procedure designed for a regional network. The procedure generalizes the previous "local" (R < 500 km) and "regional" (500 < R < 2000 km) routines and comprises: a) preliminary data processing (filtering and de-spiking), b) phase identification, c) P and S first-arrival picking, d) preliminary location and e) a robust grid-search optimization procedure. The innovations concern phase identification, automatic picking and teleseismic location. A platform-free, flexible Java interface was recently created, allowing easy parameter tuning and on/off switching to the full-scale manual picking mode. Identification of the regional P and S phases is provided by choosing between the two largest peaks in the envelope curve. For automatic on-time estimation we now utilize the ratio of two STAs, calculated in two consecutive and equal time windows (instead of the previously used Akaike Information Criterion). "Teleseismic" location is split into two stages: a preliminary and a final one. The preliminary stage estimates azimuth and apparent velocity by fitting a plane wave to the automatic P pickings. The apparent-velocity criterion is used to decide on the strategy for the following computations: teleseismic or regional. The preliminary estimates of azimuth and apparent velocity provide starting values for the final teleseismic and regional location. Apparent velocity is used to get a first-approximation distance to the source on the basis of the P, Pn, Pg travel-time tables. The distance estimate together with the preliminary azimuth estimate provides first approximations of the source latitude and longitude via the sine and cosine theorems formulated for the spherical triangle. The final location is based on a robust grid-search optimization procedure, weighting the number of pickings that simultaneously fit the model travel times. The grid covers the initial location and becomes finer while approaching the true hypocenter.
The target function is a sum
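
    The robust grid-search step described above can be sketched as follows, assuming a uniform velocity model for brevity (the actual procedure uses P, Pn, Pg travel-time tables); counting the picks that fit within a tolerance, rather than summing squared residuals, is what makes the score insensitive to outlier picks. All names and values here are illustrative.

```python
import numpy as np

V = 6.0  # assumed uniform P velocity, km/s (illustrative, not the real model)

def robust_locate(stations, picks, grid, tol=0.5):
    """Score each trial hypocenter by how many picks fit the model travel
    times within `tol` seconds, and return the best-scoring grid node.

    stations : (n, 3) station coordinates in km
    picks    : (n,) automatic arrival-time picks in s
    grid     : (m, 3) trial hypocenters in km
    """
    best, best_score = None, -1
    for xyz in grid:
        d = np.linalg.norm(stations - xyz, axis=1)
        t0 = np.median(picks - d / V)      # robust origin-time estimate
        score = np.sum(np.abs(picks - (t0 + d / V)) <= tol)
        if score > best_score:
            best, best_score = xyz, score
    return best
```

    In the full procedure the grid is then refined around the current best node, repeating the search with a finer spacing as it approaches the hypocenter.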

  19. Characterization of Source and Wave Propagation Effects of Volcano-seismic Events and Tremor Using the Amplitude Source Location Method

    NASA Astrophysics Data System (ADS)

    Kumagai, H.; Londono, J. M.; López, C. M.; Ruiz, M. C.; Mothes, P. A.; Maeda, Y.

    2015-12-01

    We propose application of the amplitude source location (ASL) method to characterize source and wave propagation effects of volcano-seismic events and tremor observed at different volcanoes. We used this method to estimate the source location and source amplitude from high-frequency (5-10 Hz) seismic amplitudes under the assumption of isotropic S-wave radiation. We estimated the cumulative source amplitude (Is) as the offset value of the time-integrated envelope of the vertical seismogram corrected for geometrical spreading and medium attenuation in the 5-10 Hz band. We studied these parameters of tremor signals associated with eruptions and explosion events at Tungurahua volcano, Ecuador; long-period (LP) events at Cotopaxi volcano, Ecuador; and LP events at Nevado del Ruiz volcano, Colombia. We identified two types of eruption tremor at Tungurahua: noise-like inharmonic waveforms and harmonic oscillatory signals. We found that Is increased linearly with increasing source amplitude for explosion events and LP events, and that Is increased exponentially with increasing source amplitude for inharmonic eruption tremor signals. The source characteristics of harmonic eruption tremor signals differed from those of inharmonic tremor signals. The Is values we estimated for inharmonic eruption tremor were consistent with previous estimates of volumes of tephra fallout. The linear relationship between the source amplitude and Is for LP events can be explained by the wave propagation effects in the diffusion model for multiple scattering assuming a diffusion coefficient of 10^5 m^2/s and an intrinsic Q factor of around 50. The resultant mean free path is approximately 100 m. Our results suggest that Cotopaxi and Nevado del Ruiz volcanoes have similar highly scattering and attenuating structures. Our approach provides a systematic way to compare the size of volcano-seismic signals observed at different volcanoes. 
The scaling relations among source parameters that we identified
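
    A minimal sketch of the ASL idea, under the abstract's assumptions of isotropic S radiation and an intrinsic Q of about 50: each station's amplitude is corrected back to a source amplitude using body-wave geometrical spreading (1/r) and exponential attenuation, and the preferred source location is where the station estimates agree best. The S-wave speed, band-centre frequency and station geometry below are illustrative assumptions.

```python
import numpy as np

F = 7.5        # centre of the 5-10 Hz band (Hz)
Q = 50.0       # intrinsic quality factor (value from the abstract)
BETA = 2000.0  # assumed S-wave speed (m/s)

def asl_locate(stations, amps, grid):
    """Amplitude source location: with A_obs = A_src * exp(-b*r) / r and
    b = pi*F/(Q*BETA), pick the grid node where the back-corrected
    source amplitudes implied by all stations are most consistent.

    stations : (n, 3) coordinates in m;  amps : (n,) observed amplitudes
    grid     : (m, 3) candidate source positions
    Returns (best location, mean implied source amplitude there).
    """
    b = np.pi * F / (Q * BETA)
    r = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    a_src = amps[None, :] * r * np.exp(b * r)        # back-corrected amplitudes
    spread = a_src.std(axis=1) / a_src.mean(axis=1)  # coefficient of variation
    i = np.argmin(spread)
    return grid[i], a_src[i].mean()
```

    The same correction, applied to the time-integrated envelope instead of a single amplitude, gives the cumulative source amplitude Is used in the abstract.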

  20. Encoding negative events under stress: high subjective arousal is related to accurate emotional memory despite misinformation exposure.

    PubMed

    Hoscheidt, Siobhan M; LaBar, Kevin S; Ryan, Lee; Jacobs, W Jake; Nadel, Lynn

    2014-07-01

    Stress at encoding affects memory processes, typically enhancing, or preserving, memory for emotional information. These effects have interesting implications for eyewitness accounts, which in real-world contexts typically involve encoding an aversive event under stressful conditions followed by potential exposure to misinformation. The present study investigated memory for a negative event encoded under stress and subsequent misinformation endorsement. Healthy young adults participated in a between-groups design with three experimental sessions conducted 48 h apart. Session one consisted of a psychosocial stress induction (or control task) followed by incidental encoding of a negative slideshow. During session two, participants were asked questions about the slideshow, during which a random subgroup was exposed to misinformation. Memory for the slideshow was tested during the third session. Assessment of memory accuracy across stress and no-stress groups revealed that stress induced just prior to encoding led to significantly better memory for the slideshow overall. The classic misinformation effect was also observed: participants exposed to misinformation were significantly more likely to endorse false information during memory testing. In the stress group, however, memory accuracy and misinformation effects were moderated by arousal experienced during encoding of the negative event. Misinformed stress-group participants who reported that the negative slideshow elicited high arousal during encoding were less likely to endorse misinformation for the most aversive phase of the story. Furthermore, these individuals showed better memory for components of the aversive slideshow phase that had been directly misinformed. 
Results from the current study provide evidence that stress and high subjective arousal elicited by a negative event act concomitantly during encoding to enhance emotional memory such that the most aversive aspects of the event are well remembered and

  1. Design and Test of an Event Detector and Locator for the ReflectoActive Seals System

    SciTech Connect

    Stinson, Brad J

    2006-06-01

    The purpose of this work was to research, design, develop and test a novel instrument for detecting fiber optic loop continuity and spatially locating fiber optic breaches. The work is for an active seal system called ReflectoActive™ Seals whose purpose is to provide real time container tamper indication. A Field Programmable Gate Array was used to implement a loop continuity detector and a spatial breach locator based on a high acquisition speed single photon counting optical time domain reflectometer. Communication and other control features were added in order to create a usable instrument that met defined requirements. A host graphical user interface was developed to illustrate system use and performance. The resulting device meets performance specifications by exhibiting a dynamic range of 27 dB and a spatial resolution of 1.5 ft. The communication scheme used expands installation options and allows the device to communicate to a central host via existing Local Area Networks and/or the Internet.
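
    The spatial locator rests on time-domain reflectometry: a photon's round-trip time to the breach fixes the distance along the fibre, since light travels to the breach and back at c/n. A sketch with a typical silica group index (the instrument's actual calibration is not given in the abstract):

```python
C_VACUUM = 2.99792458e8  # speed of light in vacuum, m/s
GROUP_INDEX = 1.468      # typical silica-fibre group index (assumed)

def breach_distance(round_trip_s, n=GROUP_INDEX):
    """One-way distance along the fibre to a reflecting breach, from the
    photon's round-trip time: d = c * t / (2 * n)."""
    return C_VACUUM * round_trip_s / (2.0 * n)
```

    At this speed, the reported 1.5 ft (about 0.46 m) spatial resolution corresponds to timing photon returns to roughly 4.5 ns, which sets the acquisition-speed requirement on the photon-counting electronics.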

  2. Pyroclastic density current hazard maps at Campi Flegrei caldera (Italy): the effects of event scale, vent location and time forecasts.

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Andrea; Neri, Augusto; Esposti Ongaro, Tomaso; Isaia, Roberto; Flandoli, Franco; Bisson, Marina

    2016-04-01

    Today hundreds of thousands of people live inside the Campi Flegrei caldera (Italy) and in the adjacent part of the city of Naples, making a future eruption of this volcano an event with huge consequences. Very high risks are associated with the occurrence of pyroclastic density currents (PDCs). Mapping of background or long-term PDC hazard in the area is a great challenge due to the unknown eruption time, scale and vent location of the next event as well as the complex dynamics of the flow over the caldera topography. This is additionally complicated by the remarkable epistemic uncertainty in the eruptive record, affecting the timing of past events, the location of vents and the estimates of PDC areal extent. First probability maps of PDC invasion were produced by combining a vent-opening probability map, statistical estimates concerning the eruptive scales and a Cox-type temporal model including self-excitement effects, based on the eruptive record of the last 15 kyr. Maps were produced by using a Monte Carlo approach and adopting a simplified inundation model based on the "box model" integral approximation tested with 2D transient numerical simulations of flow dynamics. In this presentation we illustrate the independent effects of eruption scale, vent location and forecast time of the next event. Specific focus was given to the remarkable differences between the eastern and western sectors of the caldera and their effects on the hazard maps. The analysis allowed us to identify areas with elevated probabilities of flow invasion as a function of the diverse assumptions made. By quantifying some sources of uncertainty in the system, we were also able to provide mean and percentile maps of PDC hazard levels.
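
    The Monte Carlo combination described above can be sketched end-to-end with synthetic inputs: draw a vent from a discrete vent-opening probability map, draw an eruption scale (reduced here to a runout distance), and accumulate per-cell invasion frequencies. The circular footprint is a crude stand-in for the box-model inundation area over topography, and every parameter below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def pdc_invasion_probability(vent_xy, vent_p, runout_km, xs, ys, n=5000):
    """Monte Carlo map of PDC invasion probability on a regular grid.

    vent_xy   : (k, 2) candidate vent coordinates (km)
    vent_p    : (k,) vent-opening probabilities (sum to 1)
    runout_km : array of candidate runout distances (the 'eruption scale')
    xs, ys    : 1-D grid axes (km);  n : number of simulated events
    """
    gx, gy = np.meshgrid(xs, ys)
    hits = np.zeros_like(gx, dtype=float)
    idx = rng.choice(len(vent_xy), size=n, p=vent_p)   # sample vents
    L = rng.choice(runout_km, size=n)                  # sample scales
    for k in range(n):
        vx, vy = vent_xy[idx[k]]
        hits += (np.hypot(gx - vx, gy - vy) <= L[k])   # invaded footprint
    return hits / n
```

    Replacing the circular footprint with the box-model runout over the caldera's topography, and the scale samples with the fitted eruptive-scale distribution, recovers the structure of the workflow the abstract describes.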

  3. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service

    PubMed Central

    Zou, Han; Jiang, Hao; Luo, Yiwen; Zhu, Jianjie; Lu, Xiaoxuan; Xie, Lihua

    2016-01-01

    The location and contextual status (indoor or outdoor) of a user is fundamental and critical information for upper-layer applications such as activity recognition and location-based services (LBS). In addition, optimizations of building management systems (BMS), such as pre-cooling or heating by the air-conditioning system according to the human traffic entering or exiting a building, can utilize this information as well. Emerging mobile devices, equipped with various sensors, are a feasible and flexible platform for performing indoor-outdoor (IO) detection. However, power-hungry sensors such as GPS and WiFi should be used with caution because of the constrained battery capacity of mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on mobile devices, based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on-board power-hungry sensors on and off smartly and automatically, optimizing their performance and reducing the power consumption of mobile devices at the same time. Moreover, it enables seamless positioning and navigation services, especially in semi-outdoor environments, which neither GPS nor an indoor positioning system (IPS) can achieve easily. We prototyped BlueDetect on Android mobile devices and evaluated its performance comprehensively. The experimental results validate the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption. PMID:26907295

  4. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service.

    PubMed

    Zou, Han; Jiang, Hao; Luo, Yiwen; Zhu, Jianjie; Lu, Xiaoxuan; Xie, Lihua

    2016-01-01

    The location and contextual status (indoor or outdoor) of a user is fundamental and critical information for upper-layer applications such as activity recognition and location-based services (LBS). In addition, optimizations of building management systems (BMS), such as pre-cooling or heating by the air-conditioning system according to the human traffic entering or exiting a building, can utilize this information as well. Emerging mobile devices, equipped with various sensors, are a feasible and flexible platform for performing indoor-outdoor (IO) detection. However, power-hungry sensors such as GPS and WiFi should be used with caution because of the constrained battery capacity of mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on mobile devices, based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on-board power-hungry sensors on and off smartly and automatically, optimizing their performance and reducing the power consumption of mobile devices at the same time. Moreover, it enables seamless positioning and navigation services, especially in semi-outdoor environments, which neither GPS nor an indoor positioning system (IPS) can achieve easily. We prototyped BlueDetect on Android mobile devices and evaluated its performance comprehensively. The experimental results validate the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption. PMID:26907295

  5. Testing the Quick Seismic Event Locator and Magnitude Calculator (SSL_Calc) by Marsite Project Data Base

    NASA Astrophysics Data System (ADS)

    Tunc, Suleyman; Tunc, Berna; Caka, Deniz; Baris, Serif

    2016-04-01

    Quickly locating seismic events and calculating their size is one of the most important and challenging issues in real-time seismology. In this study, we developed a Matlab application called SSL_Calc that locates seismic events and calculates their magnitudes (local magnitude and empirical moment magnitude) using a single station. This newly developed software has been tested on all stations of the Marsite project "New Directions in Seismic Hazard Assessment through Focused Earth Observation in the Marmara Supersite-MARsite". The SSL_Calc algorithm is suitable for both velocity and acceleration sensors. Data have to be in GCF (Güralp Compressed Format). Online or offline data can be selected in the SCREAM software (from Guralp Systems Limited) and transferred to SSL_Calc. To locate an event, P- and S-wave picks have to be marked manually in the SSL_Calc window. During magnitude calculation, the instrument response is removed and the record is converted to true displacement in millimeters. The displacement data are then converted to the output of a Wood-Anderson seismometer using the parameters Z=[0;0]; P=[-6.28+4.71j; -6.28-4.71j]; A0=[2080]. For local magnitude calculation, the maximum displacement amplitude (A) and distance (dist) are used in formula (1) for distances up to 200 km and formula (2) beyond 200 km. ML=log10(A)-(-1.118-0.0647*dist+0.00071*dist^2-3.39e-6*dist^3+5.71e-9*dist^4) (1) ML=log10(A)+(2.1173+0.0082*dist-0.0000059628*dist^2) (2) Following the local magnitude calculation, the program calculates two empirical moment magnitudes using formulas (3), from Akkar et al. (2010), and (4), from Ulusay et al. (2004). Mw=0.953*ML+0.422 (3) Mw=0.7768*ML+1.5921 (4) SSL_Calc is easy to use and user friendly, and offers individual users a practical solution for event location and ML and Mw calculation.
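    The four magnitude relations quoted in the abstract are simple arithmetic and can be sketched directly. The sketch below is in Python rather than the authors' Matlab, and the function names are illustrative, not part of SSL_Calc:

    ```python
    import math

    def local_magnitude(amp_mm, dist_km):
        """Local magnitude ML from a Wood-Anderson displacement amplitude
        (mm) and epicentral distance (km), using distance-correction
        formula (1) up to 200 km and formula (2) beyond 200 km."""
        if dist_km <= 200.0:
            corr = (-1.118 - 0.0647 * dist_km + 0.00071 * dist_km**2
                    - 3.39e-6 * dist_km**3 + 5.71e-9 * dist_km**4)
            return math.log10(amp_mm) - corr
        corr = 2.1173 + 0.0082 * dist_km - 0.0000059628 * dist_km**2
        return math.log10(amp_mm) + corr

    def moment_magnitudes(ml):
        """Empirical moment magnitudes from ML: formula (3) after
        Akkar et al. (2010) and formula (4) after Ulusay et al. (2004)."""
        return 0.953 * ml + 0.422, 0.7768 * ml + 1.5921
    ```

    For example, a 1 mm Wood-Anderson amplitude at 100 km distance yields ML of about 3.3 under formula (1).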

  6. Single-station and single-event marsquake location and inversion for structure using synthetic Martian waveforms

    NASA Astrophysics Data System (ADS)

    Khan, A.; van Driel, M.; Böse, M.; Giardini, D.; Ceylan, S.; Yan, J.; Clinton, J.; Euchner, F.; Lognonné, P.; Murdoch, N.; Mimoun, D.; Panning, M.; Knapmeyer, M.; Banerdt, W. B.

    2016-09-01

    In anticipation of the upcoming InSight mission, which is expected to deploy a single seismic station on the Martian surface in November 2018, we describe a methodology that enables locating marsquakes and obtaining information on the interior structure of Mars. The method works sequentially and is illustrated using single representative 3-component seismograms from two separate events: a relatively large teleseismic event (Mw5.1) and a small-to-moderate-sized regional event (Mw3.8). Location and origin time of the event are determined probabilistically from observations of Rayleigh waves and body-wave arrivals. From the recording of surface waves, averaged fundamental-mode group velocity dispersion data can be extracted and, in combination with body-wave arrival picks, inverted for crust and mantle structure. In the absence of Martian seismic data, we performed full waveform computations using a spectral element method (AxiSEM) to compute seismograms down to a period of 1 s. The model (radial profiles of density, P- and S-wave speed, and attenuation) used for this purpose is constructed on the basis of an average Martian mantle composition and model areotherm using thermodynamic principles, mineral physics data, and viscoelastic modeling. Noise was added to the synthetic seismic data using an up-to-date noise model that considers a whole series of possible noise sources generated in the instrument and lander, including wind-, thermal-, and pressure-induced effects and electromagnetic noise. The examples studied here, which are based on the assumption of spherical symmetry, show that we are able to determine epicentral distance and origin time to accuracies of ∼ 0.5-1° and ± 3-6 s, respectively. For the events and the particular noise level chosen, information on Rayleigh-wave group velocity dispersion in the period range ∼ 14-48 s (Mw5.1) and ∼ 14-34 s (Mw3.8) could be determined. Stochastic inversion of dispersion data in combination with body-wave travel time

  7. a Multiple Data Set Joint Inversion Global 3d P-Velocity Model of the Earth's Crust and Mantle for Improved Seismic Event Location

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Begnaud, M. L.; Hipp, J. R.; Chael, E. P.; Encarnacao, A.; Maceira, M.; Yang, X.; Young, C. J.; Phillips, W.

    2013-12-01

    SALSA3D is a global 3D P wave velocity model of the Earth's crust and mantle developed specifically to provide seismic event locations that are more accurate and more precise than are locations from 1D and 2.5D models. In this paper, we present the most recent version of our model, for the first time jointly derived from multiple types of data: body wave travel times, surface wave group velocities, and gravity. The latter two are added to provide information in areas with poor body wave coverage, and are down-weighted in areas where body wave coverage is good. To constrain the inversions, we invoked empirical relations among the density, S velocity, and P velocity. We demonstrate the ability of the new SALSA3D model to reduce mislocations and generate statistically robust uncertainty estimates for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. We obtain path-dependent travel time prediction uncertainties for our model by computing the full 3D model covariance matrix of our tomographic system and integrating the model slowness variance and covariance along paths of interest. This approach yields very low travel time prediction uncertainties for well-sampled paths through the Earth and higher uncertainties for paths that are poorly represented in the data set used to develop the model. While the calculation of path-dependent prediction uncertainties with this approach is computationally expensive, uncertainties can be pre-computed for a network of stations and stored in 3D lookup tables that can be quickly and efficiently interrogated using GeoTess software.
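    The path-dependent uncertainty described above, integrating the model slowness variance and covariance along a ray path, reduces for a cell-discretized path to a quadratic form: if t = Σ_i L_i s_i, then Var(t) = Σ_i Σ_j L_i L_j Cov(s_i, s_j). The cell-based discretization and function name below are an illustrative sketch under that assumption, not SALSA3D or GeoTess code:

    ```python
    def travel_time_variance(seg_lengths, slowness_cov):
        """Variance of a predicted travel time given the ray's path
        length in each model cell (km) and the covariance matrix of
        the cell slownesses ((s/km)^2). The double sum picks up both
        the per-cell variances (diagonal) and the cell-to-cell
        covariances (off-diagonal) of the tomographic model."""
        n = len(seg_lengths)
        return sum(seg_lengths[i] * slowness_cov[i][j] * seg_lengths[j]
                   for i in range(n) for j in range(n))
    ```

    Off-diagonal covariance matters: two perfectly correlated 10 km segments double the travel-time standard deviation relative to two independent ones, which is why well-sampled (decorrelated) paths get lower prediction uncertainties.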

  8. Location, Location, Location!

    ERIC Educational Resources Information Center

    Ramsdell, Kristin

    2004-01-01

    Of prime importance in real estate, location is also a key element in the appeal of romances. Popular geographic settings and historical periods sell, unpopular ones do not--not always with a logical explanation, as the author discovered when she conducted a survey on this topic last year. (Why, for example, are the French Revolution and the…

  9. SALSA3D - Improving Event Locations Using a Global 3D P-Velocity Model of the Earth's Crust and Mantle

    NASA Astrophysics Data System (ADS)

    Begnaud, M. L.; Ballard, S.; Young, C. J.; Hipp, J. R.; Chang, M.; Encarnacao, A.; Rowe, C. A.; Phillips, W. S.; Steck, L.

    2011-12-01

    To test the hypothesis that high-quality 3D Earth models will produce seismic event locations that are more accurate and more precise than currently used 1D and 2/2.5D models, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos 3D) version 1.7, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth (GT) events, compared to existing models and/or systems. Our model is derived from the latest version of the GT catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays, reducing the total number of ray paths by ~50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified layered crustal model derived from the NNSA Unified model in Eurasia and the Crust 2.0 model elsewhere, over a uniform ak135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only in areas where the data warrant it. In previous versions of SALSA3D, we based this refinement on velocity changes from previous model iterations. For version 1.7, we utilize the diagonal of the model resolution matrix to control where grid refinement occurs, resulting in more consistent and continuous areas of refinement than before. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data.

  10. SALSA3D - A Global 3D P-Velocity Model of the Earth's Crust and Mantle for Improved Event Location

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Begnaud, M. L.; Young, C. J.; Hipp, J. R.; Chang, M.; Encarnacao, A. V.; Rowe, C. A.; Phillips, W. S.; Steck, L.

    2010-12-01

    To test the hypothesis that high-quality 3D Earth models will produce seismic event locations that are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth’s crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays, reducing the total number of ray paths by ~50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ~400 processors. Resolution of our model is assessed using a

  11. A Global 3D P-Velocity Model of the Earth's Crust and Mantle for Improved Event Location

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Young, C. J.; Hipp, J. R.; Chang, M.; Lewis, J.; Begnaud, M. L.; Rowe, C. A.

    2009-12-01

    Effectively monitoring for small nuclear tests (<<1 kT) using seismic data can be best performed via use of a true global model without seams, based on a single, simultaneous inversion of a global data set encompassing regional and teleseismic data from a variety of areas. Several such models have been developed, but generally for insight into the structure of the inner Earth, not for improving treaty-monitoring capability. We present our first-generation global P-velocity model developed specifically to improve event location using both teleseismic and regional distance phases. Our data set is the global EHB catalog, consisting of 130,000 events spanning some 46 years (e.g., ISC, PDE; http://www.isc.ac.uk/EHB/index.html). The total number of P, Pn, and Pg ray paths is over 14 million. To make the inversion computationally practical, and to prevent overweighting due to ray path redundancy, we cluster rays based on geometric similarity of the entire ray paths traced through our starting model, reducing the number of ray paths by more than 30%. Travel times and uncertainties for the representative rays are taken as the inverse uncertainty-weighted average of the values for the individual rays. The model is represented using the variable resolution tessellation developed by Ballard et al. (2009). For our initial model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Travel times are calculated using the robust and efficient ray-bending algorithm developed by Ballard et al. (2009). Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We regularize using a variation of the progressive grid refinement methodology used by Simmons and Myers (2009). We begin the inversion at a coarse regular sampling, iterate to convergence, then add intermediate nodes at positions adjacent to adjusted nodes from the previous grid. The new model is iterated to convergence, then

  12. The Role of Color Cues in Facilitating Accurate and Rapid Location of Aided Symbols by Children with and without Down Syndrome

    ERIC Educational Resources Information Center

    Wilkinson, Krista; Carlin, Michael; Thistle, Jennifer

    2008-01-01

    Purpose: This research examined how the color distribution of symbols within a visual aided augmentative and alternative communication array influenced the speed and accuracy with which participants with and without Down syndrome located a target picture symbol. Method: Eight typically developing children below the age of 4 years, 8 typically…

  13. [Research on the impact of dust event frequency on atmospheric visibility variance: a case study of typical weather stations located along the dust route to Beijing].

    PubMed

    Qiu, Yu-jun; Zou, Xue-yong; Zhang, Chun-lai

    2006-06-01

    The relationship between dust event frequency and atmospheric visibility deviation is analyzed using daily visibility data and records of various dust events from 1971 to 2000 in Beijing and at 13 typical weather stations located along the dust transport route to Beijing. Results show that the visibility variance increases by one standard deviation in response to each unit decrease in dust event frequency. The influence of dust events on visibility stems from high-frequency changes in wind velocity: a change of one standard deviation in wind velocity can increase dust event frequency by 30%. High-frequency changes in near-surface wind influence both the occurrence of dust events and the fluctuation of the daily visibility deviation. The relationship between abnormally low visibility events and visibility deviation shows a significant positive correlation. An increase in average wind run distance leads to a higher frequency of dust events and, consequently, of abnormally low visibility events. Abnormally low visibility events relate differently to floating dust, sandstorms, and blowing dust, respectively. PMID:16921932

  14. Lost in translation: preclinical studies on 3,4-methylenedioxymethamphetamine provide information on mechanisms of action, but do not allow accurate prediction of adverse events in humans

    PubMed Central

    Green, AR; King, MV; Shortall, SE; Fone, KCF

    2012-01-01

    3,4-Methylenedioxymethamphetamine (MDMA) induces both acute adverse effects and long-term neurotoxic loss of brain 5-HT neurones in laboratory animals. However, when choosing doses, most preclinical studies have paid little attention to the pharmacokinetics of the drug in humans or animals. The recreational use of MDMA and current clinical investigations of the drug for therapeutic purposes demand better translational pharmacology to allow accurate risk assessment of its ability to induce adverse events. Recent pharmacokinetic studies on MDMA in animals and humans are reviewed and indicate that the risks following MDMA ingestion should be re-evaluated. Acute behavioural and body temperature changes result from rapid MDMA-induced monoamine release, whereas long-term neurotoxicity is primarily caused by metabolites of the drug. Therefore, acute physiological changes in humans are fairly accurately mimicked in animals by appropriate dosing, although allometric dosing calculations have little value. Long-term changes require MDMA to be metabolized in a similar manner in experimental animals and humans. However, the rate of metabolism of MDMA and its major metabolites is slower in humans than in rats or monkeys, potentially allowing endogenous neuroprotective mechanisms to function in a species-specific manner. Furthermore, acute hyperthermia in humans probably limits the chance of recreational users ingesting sufficient MDMA to produce neurotoxicity, unlike in the rat. MDMA also inhibits the major enzyme responsible for its metabolism in humans, thereby also assisting in preventing neurotoxicity. These observations question whether MDMA alone produces long-term 5-HT neurotoxicity in human brain, although when taken in combination with other recreational drugs it may induce neurotoxicity. LINKED ARTICLES This article is commented on by Parrott, pp. 1518–1520 of this issue. To view this commentary visit http://dx.doi.org/10.1111/j.1476-5381.2012.01941.x and to view the

  15. Performance of a Micro-Strip Gas Chamber for event wise, high rate thermal neutron detection with accurate 2D position determination

    NASA Astrophysics Data System (ADS)

    Mindur, B.; Alimov, S.; Fiutowski, T.; Schulz, C.; Wilpert, T.

    2014-12-01

    A two-dimensional (2D) position sensitive detector for neutron scattering applications based on low-pressure gas amplification and micro-strip technology was built and tested with an innovative readout electronics and data acquisition system. This detector contains a thin solid neutron converter and was developed for time- and thus wavelength-resolved neutron detection in single-event counting mode, which improves the image contrast in comparison with integrating detectors. The prototype detector of a Micro-Strip Gas Chamber (MSGC) was built with a solid natGd/CsI thermal neutron converter for spatial resolutions of about 100 μm and counting rates up to 10^7 neutrons/s. For attaining very high spatial resolutions and counting rates via micro-strip readout with centre-of-gravity evaluation of the signal amplitude distributions, a fast, channel-wise, self-triggering ASIC was developed. The front-end chips (MSGCROCs), which are the very first signal-processing components, are read out into powerful ADC-FPGA boards for on-line data processing and thereafter via Gigabit Ethernet link into the data-receiving PC. The workstation PC is controlled by a modular, high-performance dedicated software suite. Such a fast and accurate system is crucial for efficient radiography/tomography, diffraction or imaging applications based on a high-flux thermal neutron beam. In this paper a brief description of the detector concept with its operation principles, readout electronics requirements and design, together with the signal-processing stages performed in hardware and software, is presented. In more detail, the neutron test beam conditions and measurement results are reported. The focus of this paper is on the system integration, two-dimensional spatial resolution, the time resolution of the readout system and the imaging capabilities of the overall setup. The detection efficiency of the detector prototype is estimated as well.
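    The centre-of-gravity evaluation of strip amplitude distributions mentioned above can be sketched in a few lines: the position estimate is the amplitude-weighted centroid of the charge collected on neighbouring strips, converted to a length by the strip pitch. The 100 μm pitch and the function name are illustrative assumptions, not the MSGCROC firmware:

    ```python
    def centre_of_gravity(strip_amps, pitch_um=100.0):
        """Sub-strip position estimate: amplitude-weighted centroid of
        the signal distribution across consecutive readout strips,
        scaled by the strip pitch (micrometres). This is how a cluster
        spanning several strips can resolve positions finer than the
        pitch itself."""
        total = sum(strip_amps)
        centroid_index = sum(i * a for i, a in enumerate(strip_amps)) / total
        return pitch_um * centroid_index
    ```

    A symmetric cluster centres on its middle strip, while an asymmetric one lands between strips, which is where the sub-pitch resolution comes from.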

  16. Source amplitudes of volcano-seismic signals determined by the amplitude source location method as a quantitative measure of event size

    NASA Astrophysics Data System (ADS)

    Kumagai, Hiroyuki; Lacson, Rudy; Maeda, Yuta; Figueroa, Melquiades S.; Yamashina, Tadashi; Ruiz, Mario; Palacios, Pablo; Ortiz, Hugo; Yepes, Hugo

    2013-05-01

    The amplitude source location (ASL) method, which uses high-frequency amplitudes under the assumption of isotropic S-wave radiation, has been shown to be useful for locating the sources of various types of volcano-seismic signals. We tested the ASL method by using synthetic seismograms and examined the source amplitudes determined by this method for various types of volcano-seismic signals observed at different volcanoes. Our synthetic tests indicated that, although ASL results are not strongly influenced by velocity structure and noise, they do depend on site amplification factors at individual stations. We first applied the ASL method to volcano-tectonic (VT) earthquakes at Taal volcano, Philippines. Our ASL results for the largest VT earthquake showed that a frequency range of 7-12 Hz and a Q value of 50 were appropriate for the source location determination. Using these values, we systematically estimated source locations and amplitudes of VT earthquakes at Taal. We next applied the ASL method to long-period events at Cotopaxi volcano and to explosions at Tungurahua volcano in Ecuador. We proposed a practical approach to minimize the effects of site amplifications among different volcano seismic networks, and compared the source amplitudes of these various volcano-seismic events with their seismic magnitudes. We found a proportional relation between seismic magnitude and the logarithm of the source amplitude. The ASL method can be used to determine source locations of small events for which onset measurements are difficult, and thus can estimate the sizes of events over a wider range of sizes compared with conventional hypocenter determination approaches. Previously, there has been no parameter widely used to quantify the sources of volcano-seismic signals. This study showed that the source amplitude determined by the ASL method may be a useful quantitative measure of volcano-seismic event size.
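    The core of an ASL-style inversion, fitting observed high-frequency amplitudes to an isotropic source with geometric spreading and attenuation, can be sketched as a brute-force grid search: for each trial source the best-fit source amplitude A0 follows from a least-squares fit of A_i ≈ A0 · s_i · exp(-B r_i)/r_i with B = πf/(Qβ), and the trial with the smallest residual wins. The geometry, default frequency, Q, S-wave speed and function names below are illustrative placeholders, not the authors' implementation:

    ```python
    import math

    def asl_locate(stations, amps, site_amp, grid, freq=10.0, q=50.0, beta=2.0):
        """Amplitude source location by grid search. stations: (x, y, z)
        tuples (km); amps: observed high-frequency amplitudes; site_amp:
        per-station site amplification factors; grid: trial sources.
        Returns (best source, best-fit source amplitude A0)."""
        b = math.pi * freq / (q * beta)  # attenuation coefficient B
        best = None
        for src in grid:
            # predicted amplitude per unit source amplitude at each station
            g = [s * math.exp(-b * math.dist(src, sta)) / math.dist(src, sta)
                 for sta, s in zip(stations, site_amp)]
            # closed-form least-squares A0 for this trial source
            a0 = sum(gi * ai for gi, ai in zip(g, amps)) / sum(gi * gi for gi in g)
            misfit = sum((ai - a0 * gi) ** 2 for ai, gi in zip(amps, g))
            if best is None or misfit < best[0]:
                best = (misfit, src, a0)
        return best[1], best[2]
    ```

    Because A0 drops out of the misfit in closed form, the search is over source position only, which is what makes the method workable for small events without clear onsets.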

  17. LLNL Location and Detection Research

    SciTech Connect

    Myers, S C; Harris, D B; Anderson, M L; Walter, W R; Flanagan, M P; Ryall, F

    2003-07-16

    We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) develop network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions, with known locations, to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that reference velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic. Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny

  18. Locations of Long-Period Seismic Events Beneath the Soufriere Hills Volcano, Montserrat, W.I., Inferred from a Waveform Semblance Method

    NASA Astrophysics Data System (ADS)

    Taira, T.; Linde, A. T.; Sacks, I. S.; Shalev, E.; Malin, P. E.; Nielsen, J. M.; Voight, B.; Hidayat, D.; Mattioli, G. S.

    2005-05-01

    Analysis of long-period (LP) seismic events provides information about the internal state of a volcano because LP events are attributed mainly to fluid dynamics between magma and hydrothermal reservoirs within the volcano (e.g., Chouet, 1992). We analyzed LP events recorded by three borehole seismic stations (AIRS, OLVN, and TRNT) at Soufriere Hills Volcano (SHV), Montserrat, W.I., during the period from March to June 2003. The borehole stations were deployed by the Caribbean Andesite Lava Island Precision Seismo-geodetic Observatory project (e.g., Shalev et al., 2003; Mattioli et al., 2004) and equipped with three-component short-period velocity seismometers sampling at 200 Hz. We selected 61 LP events with high signal-to-noise ratios. Almost all of the selected LP events are characterized by dominant periods in a range of 0.3 to 2.0 sec and durations of about 30 sec. Several LP events appear to be generated by a single source, based on the strong similarity of their waveforms. We first identified a family of LP events based on the dimensionless cross-correlation coefficient (CCC) of their spectral amplitudes over periods in the range of 0.2 to 2.0 sec, under the assumption of a fluid-driven crack model (Chouet, 1986). Seven LP events are identified as a family of LP events with high CCCs; in particular, the CCCs at AIRS in the vertical component are greater than 0.88 for each event. This result suggests that these LP events are probably due to repeated excitation of an identical source mechanism. We next attempted to estimate the locations of the identified family of LP events using a waveform semblance method (Kawakatsu et al., 2000; Almendros and Chouet, 2003). To apply the above method, we searched for seismic phases with rectilinear polarization in the LP events by performing a complex polarization analysis (Vidale, 1986). These phases are identified where the averaged particle motion ellipticities of all stations in a time window are less than 0.50. Incident angles of the

  19. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    PubMed

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25445999

  20. Microseismic event location using an inverse method of joint P-S phase arrival difference and P-wave arrival difference in a borehole system

    NASA Astrophysics Data System (ADS)

    Zhou, Wen; Wang, Liangshu; Guan, Luping; Guo, Quanshi; Cui, Shuguo; Yu, Bo

    2015-04-01

    The accuracy of hypocenter location is the essential issue for microseismic monitoring and is the basis for evaluating the effectiveness of fracturing. Although the signal obtained from a borehole monitoring system has a higher signal-to-noise ratio (SNR) than a surface system, the narrow monitoring aperture makes the location sensitive to noise, and the located events tend to form a misleading shape. To overcome this disadvantage and obtain a more accurate estimate of the source, we develop a ‘jointing method’, which combines the P-S phase arrival difference and the P-wave arrival difference of each receiver pair (PSP) in the objective function. In a synthetic example, we compare the noise responses of three location methods based on the P-wave arrival time difference, the P-S wave arrival time difference, and the PSP method, respectively. This analysis shows that the P-wave arrival difference method is more sensitive to arrival time error than the others, and its location results tend to fall along a misleading line directed toward the receivers. The P-S arrival difference method is more robust than the P-wave method, and its error distribution is perpendicular to the ray-path direction. The PSP method, as expected, is the most stable and accurate. When the P-S method and the PSP method are applied to field data from the monitoring of a coal bed methane hydro-fracturing process, the results indicate that the PSP method is preferable. The successful location with the PSP method proves that it is suitable for field data.
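The geometric core of arrival-difference location can be sketched with a toy Gauss-Newton inversion. This is not the authors' PSP objective function, only a minimal constant-velocity illustration in which differencing P arrivals against a reference receiver removes the unknown origin time; the receiver geometry, velocity, and `locate` helper are all invented for the example:

```python
import numpy as np

def locate(receivers, dt_obs, v, x0, iters=50):
    """Gauss-Newton inversion of P arrival-time differences (relative to
    receiver 0) for a source position, assuming a homogeneous velocity v.
    Differencing eliminates the absolute origin time, mirroring the
    arrival-difference idea in the abstract (a simplified sketch)."""
    x = np.array(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(receivers - x, axis=1)   # source-receiver distances
        t = d / v                                   # predicted travel times
        dt = t[1:] - t[0]                           # predicted differences
        # Jacobian of travel time w.r.t. source coords: -(r_i - x)/(d_i * v)
        J = -(receivers - x) / (d[:, None] * v)
        Jd = J[1:] - J[0]                           # differenced Jacobian rows
        dx, *_ = np.linalg.lstsq(Jd, dt_obs - dt, rcond=None)
        x += dx
    return x

# Synthetic test: five borehole receivers, known source, v = 3000 m/s
rec = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                [100, 100, 0], [50, 50, -200]], float)
src = np.array([40.0, 60.0, -150.0])
v = 3000.0
t_true = np.linalg.norm(rec - src, axis=1) / v
dt_obs = t_true[1:] - t_true[0]
est = locate(rec, dt_obs, v, x0=[50, 50, -100])
print(np.round(est, 2))
```

With noise-free synthetic differences the estimate converges back to the true source; adding arrival-time noise would reproduce the error-shape behavior discussed in the abstract.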

  1. Location capability of a sparse regional network (RSTN) using a multi-phase earthquake location algorithm (REGLOC)

    SciTech Connect

    Hutchings, L.

    1994-01-01

    The Regional Seismic Test Network (RSTN) was deployed by the US Department of Energy (DOE) to determine whether data recorded by a regional network could be used to detect and accurately locate seismic events that might be clandestine nuclear tests. The purpose of this paper is to evaluate the location capability of the RSTN. A major part of this project was the development of the location algorithm REGLOC and the application of Bayesian a priori statistics for determining the accuracy of the location estimates. REGLOC utilizes all identifiable phases, including backazimuth, in the location. Ninety-four events distributed throughout the network area, detected by the RSTN and located by local networks, were used in the study. The location capability of the RSTN was evaluated by estimating the location accuracy, error ellipse accuracy, and the percentage of events that could be located, as a function of magnitude. The location accuracy was verified by comparing the RSTN results for the 94 events with published locations based on data from the local networks. The error ellipse accuracy was evaluated by determining whether the error ellipse includes the actual location. The percentage of events located was assessed by combining detection capability with location capability to determine the percentage of events that could be located within the study area. Events were located both with an average crustal model for the entire region and with regional velocity models along with station corrections obtained from master events. Most events with a magnitude <3.0 can only be located with arrivals from one station. Their average location errors are 453 and 414 km for the average- and regional-velocity-model locations, respectively. Single-station locations are very unreliable because they depend on accurate backazimuth estimates, and backazimuth proved very unreliable to compute.

  2. Seismicity Along the Endeavour Segment of the Juan de Fuca Ridge: Automated Event Locations for an Ocean-Bottom Seismometer Network

    NASA Astrophysics Data System (ADS)

    Weekly, R. T.; Wilcock, W. S.; Hooft, E. E.; Toomey, D. R.; McGill, P. R.

    2004-12-01

    From 2003-2006, the W.M. Keck Foundation supported the operation of a network of eight ocean-bottom seismometers (OBSs) that were deployed with a remotely operated vehicle along the central portion of the Endeavour Segment of the Juan de Fuca mid-ocean ridge as part of a multidisciplinary prototype NEPTUNE experiment. Data from 2003-2004 were initially analyzed during a research apprenticeship class at the University of Washington's Friday Harbor Laboratories. Eight student analysts located ~13,000 earthquakes along the Endeavour Segment. Analysis of data from 2004-2005 has to date been limited to locating ~6,000 earthquakes associated with a swarm in February-March 2005 near the northern end of the Endeavour Segment. The remaining data includes several significant swarms and it is anticipated that tens of thousands of earthquakes still need to be located. In order to efficiently obtain a complete catalog of high-quality locations for the 3-year experiment, we are developing an automatic method for earthquake location. We first apply a 5-Hz high-pass filter and identify triggers when the ratio of the root-mean square (RMS) amplitudes in short- and long- term windows exceeds a specified threshold. We search for events that are characterized by triggers within a short time interval on the majority of stations and use the signal spectra to eliminate events that are the result of 20-Hz Fin and Blue whale vocalizations. An autoregressive technique is applied to a short time window centered on the trigger time to pick P-wave times on each station's vertical channel. We locate the earthquake with these picks and either attempt to repick or eliminate arrivals with unacceptable residuals. Preliminary S-wave picks are then made on the horizontal channels by applying a 5-12 Hz bandpass filter, identifying the peak RMS amplitude for a short running window, and making a pick at the time the RMS amplitude rises above 50% of this value. The picks are refined using the
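The trigger stage described above (the ratio of RMS amplitudes in short- and long-term windows exceeding a threshold) can be sketched as a classic STA/LTA detector. The window lengths and synthetic trace below are illustrative, not the network's actual parameters:

```python
import numpy as np

def sta_lta(x, ns, nl):
    """STA/LTA detector in the spirit of the triggering step above: the
    ratio of short-term RMS to the long-term RMS of the window ending
    just before it (window lengths ns, nl are illustrative)."""
    csum = np.concatenate(([0.0], np.cumsum(x.astype(float) ** 2)))
    sta = np.sqrt((csum[ns:] - csum[:-ns]) / ns)  # RMS, windows ending at ns-1..N-1
    lta = np.sqrt((csum[nl:] - csum[:-nl]) / nl)  # RMS, windows ending at nl-1..N-1
    # Pair each STA window with the LTA window ending ns samples earlier,
    # so an emerging event inflates only the numerator.
    return sta[nl:] / (lta[:-ns] + 1e-9)

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 2000)
trace[1200:1300] += 8.0 * rng.normal(0.0, 1.0, 100)  # synthetic "event"
r = sta_lta(trace, ns=20, nl=500)
print(int(np.argmax(r)))  # index within the ratio series where the trigger peaks
```

A detection would then be declared wherever the ratio crosses a chosen threshold on a majority of stations, as in the text.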

  4. Alcohol and remembering a hypothetical sexual assault: Can people who were under the influence of alcohol during the event provide accurate testimony?

    PubMed

    Flowe, Heather D; Takarangi, Melanie K T; Humphries, Joyce E; Wright, Deborah S

    2016-09-01

    We examined the influence of alcohol on remembering an interactive hypothetical sexual assault scenario in the laboratory using a balanced placebo design. Female participants completed a memory test 24 hours and 4 months later. Participants reported less information (i.e., responded "don't know" more often to questions) if they were under the influence of alcohol during scenario encoding. The accuracy of the information intoxicated participants reported did not differ compared to sober participants, however, suggesting intoxicated participants were effectively monitoring the accuracy of their memory at test. Additionally, peripheral details were remembered less accurately than central details, regardless of the intoxication level; and memory accuracy for peripheral details decreased by a larger amount compared to central details across the retention interval. Finally, participants were more accurate if they were told they were drinking alcohol rather than a placebo. We discuss theoretical implications for alcohol myopia and memory regulation, together with applied implications for interviewing intoxicated witnesses. PMID:26278075

  5. Regional seismic event identification and improved locations with small arrays and networks. Final report, 7 May 1993-30 September 1995

    SciTech Connect

    Vernon, F.L.; Minster, J.B.; Orcutt, J.A.

    1995-09-20

    This final report contains a summary of our work on the use of seismic networks and arrays to improve locations and identify small seismic events. We have developed techniques to migrate 3-component array records of local, regional and teleseismic wavetrains to directly image buried two- and three-dimensional heterogeneities (e.g. layer irregularities, volumetric heterogeneities) in the vicinity of the array. We have developed a technique to empirically characterize local and regional seismic coda by binning and stacking network recordings of dense aftershock sequences. The principal motivation for this work was to look for robust coda phases dependent on source depth. We have extended our ripple-fired event discriminant (based on the time-independence of coda produced by ripple firing) by looking for an independence of the coda from the recording direction (also indicative of ripple firing).

  6. A universal support vector machines based method for automatic event location in waveforms and video-movies: Applications to massive nuclear fusion databases

    NASA Astrophysics Data System (ADS)

    Vega, J.; Murari, A.; González, S.; Jet-Efda Contributors

    2010-02-01

    Big physics experiments can collect terabytes (even petabytes) of data on a continuous or long-pulse basis. The measurement systems that follow the temporal evolution of physical quantities translate their observations into very large time-series datasets and video-movies. This article describes a universal and automatic technique to recognize and locate, inside waveforms and video-movies, both signal segments with data of potential interest for specific investigations and singular events. The method is based on regression estimation of the signals using support vector machines. A reduced number of the samples appear as outliers in the regression process, and these samples allow the identification of both special signatures and singular points. Results are given for the database of the JET fusion device: location of sawteeth in soft x-ray signals to automate the plasma incremental diffusivity computation, identification of plasma disruptive behavior with automatic determination of its time instant, and, finally, recognition of potentially interesting plasma events from infrared video-movies.
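The underlying idea, flagging samples that a smooth regression cannot fit as candidate events, can be illustrated without the paper's support vector machines. Below, an ordinary polynomial fit stands in for the SVM regression, purely as a sketch; the signal, fit degree, and threshold are invented:

```python
import numpy as np

# Stand-in sketch for the regression-outlier idea: the paper uses support
# vector regression; here a plain least-squares polynomial fit plays that
# role, purely for illustration. Samples with large residuals mark
# candidate "singular events" in the waveform.
t = np.linspace(0, 10, 500)
signal = np.sin(t)                     # smooth background trend
signal[250] += 3.0                     # injected singular event
coef = np.polyfit(t, signal, deg=7)    # smooth regression estimate
resid = signal - np.polyval(coef, t)
thresh = 5 * np.std(resid)             # illustrative outlier threshold
events = np.flatnonzero(np.abs(resid) > thresh)
print(events)
```

The same residual-thresholding loop applies whether the regressor is a polynomial, an SVM, or any other smooth estimator.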

  7. Application of surface wave travel times and amplitude ratios interpreted through a 3D crustal model to locate and characterize regional seismic events in the US

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Ritzwoller, M. H.; Shen, W.; Levshin, A. L.; Barmin, M. P.

    2014-12-01

    The error in the epicentral location of crustal earthquakes across the contiguous US is on the order of 10 km due to the inability of 1D seismic velocity models to capture regional body wave travel time variations. New high resolution 3D models of the crust and uppermost mantle have been constructed recently across the US by inverting surface wave dispersion from ambient noise and earthquakes, receiver functions, and Rayleigh wave H/V ratios using USArray data [e.g., Shen et al., 2013]. These are mostly S-wave models of the lithosphere, however, which are not optimal for predicting regional P-wave travel times. We explore the use of observations of surface waves to improve regional event characterization because the new 3D models are constructed explicitly to model their behavior. In particular, we use measurements of group and phase time delays and the amplitude ratio between different periods of surface waves to estimate the moment tensor, the epicentral location and the earthquake depth. Preliminary estimates of these variables are determined through a simulated annealing algorithm. Afterward, a Bayesian Monte Carlo method is applied to estimate the posterior distribution of all variables in order to assess uncertainties in source characteristics. The reliability and limitations of the location method are tested by systematic relocation of earthquakes across the contiguous US.
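The simulated-annealing step for the preliminary epicenter estimate can be sketched in miniature. This toy uses a flat 2-D geometry, a constant group velocity, and invented station coordinates, none of which come from the abstract:

```python
import math
import random

# Minimal simulated-annealing sketch of the preliminary location step:
# minimize a surface-wave travel-time misfit over candidate epicenters.
# The 2-D geometry and constant 3 km/s group velocity are assumptions
# made purely for illustration.
random.seed(1)
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (80.0, 90.0)]
true_src = (40.0, 25.0)
v = 3.0  # km/s (assumed group velocity)
obs = [math.dist(true_src, s) / v for s in stations]  # synthetic "observed" times

def misfit(p):
    return sum((math.dist(p, s) / v - t) ** 2 for s, t in zip(stations, obs))

x = (50.0, 50.0)   # starting guess
T = 1.0            # initial temperature
for step in range(5000):
    cand = (x[0] + random.gauss(0, 5), x[1] + random.gauss(0, 5))
    dE = misfit(cand) - misfit(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = cand
    T *= 0.999     # geometric cooling schedule
print(round(x[0], 1), round(x[1], 1))
```

In the paper's workflow this preliminary estimate would then seed the Bayesian Monte Carlo sampling of the posterior.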

  8. Theatre Applications: Locations, Event, Futurity

    ERIC Educational Resources Information Center

    Mackey, Sally; Fisher, Amanda Stuart

    2011-01-01

    The three papers and the pictorial essay that follow Rustom Bharucha's keynote all originated at "Theatre Applications" (Central School of Speech and Drama, London, April 2010). One theme of the conference was "cultural geographies of dislocation, place and space"; the three papers and pictorial essay respond to that theme. All address issues of…

  9. Regional location in western China

    SciTech Connect

    Cogbill, A.H.; Steck, L.K.

    1996-10-01

    Accurately locating seismic events in western China using only regional seismic stations is a challenge. Not only is the number of seismic stations available for locating events small, but most stations available to researchers are often over 10° distant. Here the authors describe the relocation, using regional stations, of both nuclear and earthquake sources near the Lop Nor test site in western China. For these relocations they used the Earthquake Data Reports provided by the US Geological Survey (USGS) for the reported travel times. Such reports provide a listing of all phases reported to the USGS from stations throughout the world, including many stations in the People's Republic of China. LocSAT was used as the location code. The authors systematically relocated each event in this study several times, using fewer and fewer stations at each relocation, with the farther stations being eliminated at each step. They found that location accuracy, judged by comparing solutions from few stations to the solution provided using all available stations, typically remained good until fewer than seven stations remained. With a good station distribution, location accuracy remained surprisingly good (within 7 km) using as few as 3 stations. Because these relocations were computed without good station corrections and without source-specific station corrections (that is, path corrections), they believe that such regional locations can be substantially improved, largely using static station corrections and source-specific station corrections, at least in the Lop Nor area, where sources have known locations. Elsewhere in China, one must rely upon known locations of regionally-recorded explosions. Locating such sources is clearly one of the major problems to be overcome before one can provide event locations with any assurance from regional stations.

  10. SALSA3D: Validating a Global 3D P-Velocity Model of the Earth's Crust and Mantle for Improved Event Location

    NASA Astrophysics Data System (ADS)

    Begnaud, M. L.; Ballard, S.; Young, C. J.; Hipp, J. R.; Encarnacao, A.; Phillips, W. S.; Chael, E. P.; Rowe, C. A.

    2012-12-01

    We are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography to assess improvement to seismic event locations obtained using high quality 3D Earth models in lieu of 1D and 2/2.5D models. We present the most recent version of SALSA3D (SAndia LoS Alamos 3D) version 1.9, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth (GT) events. Our model is derived from the latest version of the GT catalog of P/Pn travel-time picks assembled by Los Alamos National Laboratory. For this current version, we employ more robust data quality control measures than previously used, as well as additional global GT data sources. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays into representative rays. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified layer crustal model derived from the NNSA Unified model in Eurasia and Crust 2.0 model everywhere else, overlying a uniform ak135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only in areas where the data warrant such a refinement. In previous versions, we based this refinement on velocity changes from previous model iterations. For the current version, we utilize the diagonal of the model resolution matrix to control where grid refinement occurs, resulting in more consistent and continuous areas of refinement than before. 
In addition to the changes in grid refinement, we also employ a more robust convergence criterion between successive grid refinements, allowing a better fit to first broader

  11. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  12. Earthquake location in island arcs

    USGS Publications Warehouse

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    -velocity lithospheric slab. In application, JHD has the practical advantage that it does not require the specification of a theoretical velocity model for the slab. Considering earthquakes within a 260 km long by 60 km wide section of the Aleutian main thrust zone, our results suggest that the theoretical velocity structure of the slab is presently not sufficiently well known that accurate locations can be obtained independently of locally recorded data. Using a locally recorded earthquake as a calibration event, JHD gave excellent results over the entire section of the main thrust zone here studied, without showing a strong effect that might be attributed to spatially varying source-station anomalies. We also calibrated the ray-tracing method using locally recorded data and obtained results generally similar to those obtained by JHD. ?? 1982.

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  18. Regional Location Calibration in Asia

    NASA Astrophysics Data System (ADS)

    Steck, L. K.; Hartse, H.; Aprea, C.; Franks, J.; Velasco, A.; Randall, G.; Bradley, C.; Begnaud, M.; Aguilar-Chang, J.

    2002-12-01

    This paper presents a spectrum of issues and efforts involved in improving seismic location performance worldwide. Our efforts are largely designed around providing validated, rigorously calibrated travel times, azimuths, and slownesses along with accurate error estimates. To do so entails a significant effort that includes data mining, data integration, database management, developing optimal 1-, 2-, and 3-D Earth models, using the Earth models to predict wave propagation, developing corrections and errors for travel times, azimuths, and slownesses, and validation of all products. Results presented here will focus on Asia. For the region around station MAKZ in north-central Asia we have looked at several tens of published 1-D velocity models. For each model, travel time calculations were performed, predictions for P and S arrivals were established, and the predicted times were compared to the observed. We will present best-fit models for tectonic provinces out to regional distances from MAKZ. Previous work has shown that Non-stationary Modified Bayesian Kriging of travel time residuals successfully improves regional seismic event location, and this method is being extended to calculate corrections for azimuth and slowness. The ability to krig over 3-D Earth models is also being implemented. In order to produce the most useful corrections, we require accurate ground truth. For this we are continuing efforts to create a location database consisting of the best available seismic event locations and the most accurate and precise travel times. Building this database relies on participation from universities, other NNSA laboratories, and contacts in private industry. Through the kriging procedure we are able to stabilize location algorithms, but the ultimate usefulness of the corrections themselves is directly related to the quality of the ground truth from which the corrections are derived. Indeed, epicentral mislocations from EvLoc using travel time correction
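The kriging of travel-time residuals can be illustrated with a bare-bones simple-kriging interpolator (not the Non-stationary Modified Bayesian Kriging of the paper). The Gaussian covariance model, correlation length, and residual values below are assumptions for the sketch:

```python
import numpy as np

def simple_krige(xy_obs, r_obs, xy_new, sill=1.0, L=50.0, nugget=1e-6):
    """Interpolate travel-time residuals between calibration (ground-truth)
    events, in the spirit of the kriged corrections described above.
    Simple kriging with an assumed Gaussian covariance; sill, correlation
    length L, and nugget are illustrative, not calibrated values."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-(d / L) ** 2)
    C = cov(xy_obs, xy_obs) + nugget * np.eye(len(xy_obs))
    k = cov(xy_obs, xy_new)
    w = np.linalg.solve(C, k)            # kriging weights
    pred = w.T @ r_obs                   # interpolated residual
    var = sill - np.sum(k * w, axis=0)   # kriging variance (uncertainty)
    return pred, var

obs_xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # km, synthetic
obs_resid = np.array([1.2, -0.4, 0.8])                       # seconds, synthetic
pred, var = simple_krige(obs_xy, obs_resid, np.array([[0.0, 1.0]]))
print(float(pred[0]), float(var[0]))
```

Near a calibration point the predicted correction approaches that point's residual and the kriging variance shrinks, which is exactly why dense, high-quality ground truth matters so much for the correction surfaces.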

  19. Wave-equation Based Earthquake Location

    NASA Astrophysics Data System (ADS)

    Tong, P.; Yang, D.; Yang, X.; Chen, J.; Harris, J.

    2014-12-01

    Precisely locating earthquakes is fundamentally important for studying earthquake physics, fault orientations and Earth's deformation. In industry, accurately determining hypocenters of microseismic events triggered in the course of a hydraulic fracturing treatment can help improve the production of oil and gas from unconventional reservoirs. We develop a novel earthquake location method based on solving full wave equations to accurately locate earthquakes (including microseismic earthquakes) in complex and heterogeneous structures. Traveltime residuals or differential traveltime measurements with the waveform cross-correlation technique are iteratively inverted to obtain the locations of earthquakes. The inversion process involves the computation of the Fréchet derivative with respect to the source (earthquake) location via the interaction between a forward wavefield emitting from the source to the receiver and an adjoint wavefield reversely propagating from the receiver to the source. When there is a source perturbation, the Fréchet derivative not only measures the influence of source location but also the effects of heterogeneity, anisotropy and attenuation of the subsurface structure on the arrival of seismic wave at the receiver. This is essential for the accuracy of earthquake location in complex media. In addition, to reduce the computational cost, we can first assume that seismic wave only propagates in a vertical plane passing through the source and the receiver. The forward wavefield, adjoint wavefield and Fréchet derivative with respect to the source location are all computed in a 2D vertical plane. By transferring the Fréchet derivative along the horizontal direction of the 2D plane into the ones along Latitude and Longitude coordinates or local 3D Cartesian coordinates, the source location can be updated in a 3D geometry. The earthquake location obtained with this combined 2D-3D approach can then be used as the initial location for a true 3D wave
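The differential-traveltime measurement that feeds such an inversion can be sketched with a plain cross-correlation lag pick; the synthetic wavelet and noise level are invented for illustration:

```python
import numpy as np

def cc_shift(a, b):
    """Return the lag (in samples) by which a is delayed relative to b,
    picked as the peak of their full cross-correlation."""
    c = np.correlate(a, b, mode="full")
    return int(np.argmax(c)) - (len(b) - 1)

rng = np.random.default_rng(2)
# Gaussian-windowed sinusoid as a toy waveform
wavelet = np.exp(-0.5 * ((np.arange(100) - 50) / 6.0) ** 2) * np.sin(np.arange(100))
trace1 = np.zeros(1000)
trace1[300:400] = wavelet
trace2 = np.zeros(1000)
trace2[337:437] = wavelet            # same event, arriving 37 samples later
trace1 += 0.05 * rng.normal(size=1000)
trace2 += 0.05 * rng.normal(size=1000)
print(cc_shift(trace2, trace1))
```

The measured lags (converted to seconds by the sampling rate) are the differential traveltimes that the adjoint-based inversion above would iterate on.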

  20. Photographic Analysis Technique for Assessing External Tank Foam Loss Events

    NASA Technical Reports Server (NTRS)

    Rieckhoff, T. J.; Covan, M.; OFarrell, J. M.

    2001-01-01

    A video camera and recorder were placed inside the solid rocket booster forward skirt in order to view foam loss events over an area on the external tank (ET) intertank surface. In this Technical Memorandum, a method of processing video images to allow rapid detection of permanent changes indicative of foam loss events on the ET surface was defined and applied to accurately count, categorize, and locate such events.
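A minimal sketch of the frame-differencing idea: subtract a reference frame from later frames and flag pixels whose change persists, indicating a permanent surface change (such as foam loss) rather than a transient artifact. The frame sizes, threshold, and pixel values are invented:

```python
import numpy as np

# Toy sketch of detecting *permanent* changes in a video sequence:
# a change must appear in every frame after its onset to count,
# which rejects one-frame transients (lighting, passing debris).
ref = np.full((8, 8), 100.0)                 # reference frame
frames = [ref.copy() for _ in range(5)]
for f in frames[2:]:
    f[3, 4] = 20.0                           # permanent change from frame 2 on
frames[1][0, 0] = 180.0                      # transient flash in one frame only
diff = [np.abs(f - ref) > 30 for f in frames]
persistent = np.logical_and.reduce(diff[2:])  # change present in all later frames
ys, xs = np.nonzero(persistent)
print(list(zip(ys.tolist(), xs.tolist())))    # → [(3, 4)]
```

Counting and locating the flagged pixel clusters then gives the event tally and positions described in the memorandum.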

  1. Location Estimation of the Bow Shock and the Quasi-Perpendicular Magnetospheric Angle Theta(B, n), Using Data from 14 Different Shock-Crossing Events Recorded by THEMIS-C

    NASA Astrophysics Data System (ADS)

    Amazo-Gomez, Eliana; Alvarado-Gomez, Julian David; Calvo Mozo, Benjamin

    In this work we calculated the average position of the bow shock using the eigenvalues and corresponding eigenvectors of the covariance matrix of the magnetic field, computed from 10 different shock-crossing events recorded by THEMIS A during 2009 and 2010. From the calibrated data and the propagation direction of the plasma's magnetic field, we are able to find the quasi-perpendicular interaction angle Theta(B, n), which depends on the direction of the shock normal and the direction of incidence of the plasma's magnetic field. This kind of analysis matters because understanding the phenomenology of the bow shock is vital for characterizing processes such as magnetic reconnection between terrestrial magnetospheric field lines and interplanetary field lines, which carry a large contribution from the Sun. It is also important for describing how charged plasma particles that impact the bow shock reach the internal field lines and are subsequently carried toward the Earth's atmosphere: they initially enter through the polar cusp and then spread into the atmosphere depending on plasma conditions. Parameters such as the position of the bow shock, its variation, and the interaction angle Theta(B, n) are basic to even a minimal representation of the phenomenon. Events of great magnitude can have undesirable effects on satellites, power lines, communications, and air travel; this motivates the discrimination of the parameters of the phenomenon presented in this work. The study of the bow shock and the magnetosphere takes as its starting point a detailed description of Earth's magnetosphere and of solar wind phenomena, which must first be understood independently and then related in terms of their interaction at their respective boundaries, through parameters such as the balance between dynamic and magnetic pressure
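The covariance-eigenvector step (minimum variance analysis) can be sketched directly: the eigenvector of the magnetic-field covariance matrix with the smallest eigenvalue estimates the shock normal n, and Theta(B, n) follows as the angle between an upstream field and that normal. All field values below are synthetic:

```python
import numpy as np

# Minimum variance analysis sketch: across a shock crossing, the field
# component along the shock normal varies least, so the eigenvector of
# the covariance matrix with the smallest eigenvalue estimates n.
rng = np.random.default_rng(3)
# Synthetic field: weak variance along x (the assumed normal), strong
# variance in the perpendicular plane.
B = np.column_stack([
    0.1 * rng.normal(size=500),
    5.0 + 3.0 * rng.normal(size=500),
    2.0 * rng.normal(size=500),
])
C = np.cov(B, rowvar=False)          # 3x3 covariance of the field components
w, V = np.linalg.eigh(C)             # eigenvalues ascending
n_hat = V[:, 0]                      # minimum-variance direction ~ shock normal
B_up = np.array([3.0, 4.0, 0.0])     # assumed upstream field (synthetic)
cos_t = abs(B_up @ n_hat) / (np.linalg.norm(B_up) * np.linalg.norm(n_hat))
theta_bn = np.degrees(np.arccos(cos_t))
print(round(float(theta_bn), 1))
```

With the synthetic geometry above, the recovered normal is close to the x axis and Theta(B, n) is close to the 53° angle built into the upstream field, illustrating the quasi-perpendicular classification.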

  2. NHD INDEXED LOCATIONS FOR GRTS

    EPA Science Inventory

    GRTS locational data for nonpoint source projects. GRTS locations are coded onto route.drain (Transport and Coastline Reach) feature of NHD to create Point Events and Linear Events. GRTS locations are coded onto region.rch (Waterbody Reach) feature of NHD to create NHD Waterbody ...

  3. NHD INDEXED LOCATIONS FOR BEACH

    EPA Science Inventory

    Beach locational data for BEACH Act. Beach locations are coded onto route.drain (Transport and Coastline Reach) feature of NHD to create Point Events and Linear Events. Beach locations are coded onto region.rch (Waterbody Reach) feature of NHD to create NHD Waterbody Shapefiles...

  4. Laser measuring system accurately locates point coordinates on photograph

    NASA Technical Reports Server (NTRS)

    Doede, J. H.; Lindenmeyer, C. W.; Vonderohe, R. H.

    1966-01-01

    Laser activated ultraprecision ranging apparatus interfaced with a computer determines point coordinates on a photograph. A helium-neon gas CW laser provides collimated light for a null balancing optical system. This system has no mechanical connection between the ranging apparatus and the photograph.

  5. Event Perception

    PubMed Central

    Radvansky, Gabriel; Zacks, Jeffrey M.

    2012-01-01

    Events are central elements of human experience. Formally, they can be individuated in terms of the entities that compose them, the features of those entities, and the relations amongst entities. Psychologically, representations of events capture their spatiotemporal location, the people and objects involved, and the relations between these elements. Here, we present an account of the nature of psychological representations of events and how they are constructed and updated. Event representations are like images in that they are isomorphic to the situations they represent. However, they are like models or language in that they are constructed of components rather than being holistic. Also, they are partial representations that leave out some elements and abstract others. Representations of individual events are informed by schematic knowledge about general classes of events. Event representations are constructed in a process that segments continuous activity into discrete events. The construction of a series of event representations forms a basis for predicting the future, planning for that future, and imagining alternatives. PMID:23082236

  6. Relative earthquake location for remote offshore and tectonically active continental regions using surface waves

    NASA Astrophysics Data System (ADS)

    Cleveland, M.; Ammon, C. J.; Vandemark, T. F.

    2015-12-01

    Earthquake locations are a fundamental parameter necessary for reliable seismic monitoring and seismic event characterization. Within dense continental seismic networks, event locations can be accurately and precisely estimated. However, for many regions of interest, existing catalog data and traditional location methods provide neither accurate nor precise hypocenters. In particular, for isolated continental and offshore areas, seismic event locations are estimated primarily from distant observations, often resulting in inaccurate and imprecise locations. The use of larger, moderate-size events is critical to the construction of useful travel-time corrections in regions of strong geologic heterogeneity. Double-difference methods applied to cross-correlation-measured Rayleigh and Love wave time shifts are an effective tool for providing improved epicentroid locations and relative origin-time shifts in these regions. Previous studies have applied correlation of R1 and G1 waveforms to moderate-magnitude vertical strike-slip transform-fault and normal-faulting earthquakes from nearby ridges. In this study, we explore the utility of phase-match filtering techniques applied to surface waves to improve cross-correlation measurements, particularly for smaller-magnitude seismic events. We also investigate the challenges associated with applying surface-wave location methods to shallow earthquakes in tectonically active continental regions.

  7. Eldercare Locator

    MedlinePlus

    ... Welcome to the Eldercare Locator, a public service of the U.S. Administration on Aging connecting you to services for older ...

  8. Towards an Accurate Orbital Calibration of Late Miocene Climate Events: Insights From a High-Resolution Chemo- and Magnetostratigraphy (8-6 Ma) from Equatorial Pacific IODP Sites U1337 and U1338

    NASA Astrophysics Data System (ADS)

    Drury, A. J.; Westerhold, T.; Frederichs, T.; Wilkens, R.; Channell, J. E. T.; Evans, H. F.; Hodell, D. A.; John, C. M.; Lyle, M. W.; Roehl, U.; Tian, J.

    2015-12-01

    In the 8-6 Ma interval, the late Miocene is characterised by a long-term -0.3 ‰ reduction in benthic foraminiferal δ18O and distinctive short-term δ18O cycles, possibly related to dynamic Antarctic ice sheet variability. In addition, the late Miocene carbon isotope shift (LMCIS) marks a permanent long-term -1 ‰ shift in oceanic δ13CDIC, which is the largest, long-term perturbation in the global marine carbon cycle since the mid Miocene Monterey excursion. Accurate age control is crucial to investigate the origin of the δ18O cyclicity and determine the precise onset of the LMCIS. The current Geological Time Scale in the 8-6 Ma interval is constructed using astronomical tuning of sedimentary cycles in Mediterranean outcrops. However, outside of the Mediterranean, a comparable high-resolution chemo-, magneto-, and cyclostratigraphy at a single DSDP/ODP/IODP site does not exist. Generating an accurate astronomically-calibrated chemo- and magneto-stratigraphy in the 8-6 Ma interval became possible with retrieval of equatorial Pacific IODP Sites U1337 and U1338, as both sites have sedimentation rates ~2 cm/kyr, high biogenic carbonate content, and magnetic polarity stratigraphies. Here we present high-resolution correlation of Sites U1337 and U1338 using Milankovitch-related cycles in core images and X-ray fluorescence core scanning data. By combining inclination and declination data from ~400 new discrete samples with shipboard measurements, we are able to identify 14 polarity reversals at Site U1337 from the young end of Chron C3An.1n (~6.03 Ma) to the onset of Chron C4n.2n (~8.11 Ma). New high-resolution (<1.5 kyr) stable isotope records from Site U1337 correlate highly with Site U1338 records, enabling construction of a high-resolution stack. Initial orbital tuning of the U1337-U1338 records show that the δ18O cyclicity is obliquity driven, indicating high-latitude climate forcing. The LMCIS starts ~7.55 Ma and is anchored in Chron C4n.1n, which is

  9. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  10. Extreme Events

    NASA Astrophysics Data System (ADS)

    Nott, Jonathan

    2006-04-01

    The assessment of risks posed by natural hazards such as floods, droughts, earthquakes, tsunamis or cyclones, is often based on short-term historical records that may not reflect the full range or magnitude of events possible. As human populations grow, especially in hazard-prone areas, methods for accurately assessing natural hazard risks are becoming increasingly important. In Extreme Events Jonathan Nott describes the many methods used to reconstruct such hazards from natural long-term records. He demonstrates how long-term (multi-century to millennial) records are essential in gaining a realistic understanding of the variability of natural hazards, and how short-term historical records can often misrepresent the likely risks associated with natural hazards. This book will form a useful resource for students taking courses covering natural hazards and risk assessment. It will also be valuable for urban planners, policy makers and non-specialists as a guide to understanding and reconstructing long-term records of natural hazards. Explains mechanisms that cause extreme events and discusses their prehistoric records Describes how to reconstruct long-term records of natural hazards in order to make accurate risk assessments Demonstrates that natural hazards can follow cycles over time and do not occur randomly

  11. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.

  12. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  13. A Central Brazil GT5 Event

    NASA Astrophysics Data System (ADS)

    Barros, L. V.; Assumpcao, M.; Caixeta, D.

    2013-05-01

    Ground-truth (GT) events, accurately located with a precision of 5 km (GT5 events), and their associated travel times to regional stations are important in developing precise velocity models. The low Brazilian seismicity, with only three continental earthquakes of magnitude five in the last three decades, and the small number of seismic stations explain the difficulty of detecting events at regional distances. In world maps of GT events, Brazil appears almost empty. In stable continental interiors like Brazil, it is difficult to find an event fulfilling all the GT5 prerequisites, particularly with respect to the number of picked phases and azimuthal gaps. Recently PTS-CTBTO has organized meetings and workshops to encourage seismologists from South and Central America to cooperate in the work of identifying GT5 events in these countries, with the goal of developing a 3-dimensional velocity model for this part of the globe, which, unlike Europe and North America, is not yet covered. As a result we studied a recent magnitude 5 event in Central Brazil detected by a few regional stations. Aftershock studies with local stations showed a fault 5 km long. Taking the mainshock epicenter as the center of the fault, the maximum error would be at most 2.5 km, assuming the aftershocks were located with zero uncertainty. The source depth and origin time were precisely determined using correlations between waveforms of six events and station corrections. The event magnitudes range from 3.5 to 5.0 (the mainshock, taken as the reference event), recorded by regional and local stations. Events recorded at both local and regional stations were used to determine the regional station corrections: these events were located using only data from local stations, assigning zero weight to the regional P and S phases, in order to determine residuals for each regional station used. The station corrections were taken as the average of the residuals at each station. Precise pickings of P and S phases for the mainshock
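    The station-correction step described above (averaging travel-time residuals at each regional station over locally well-located events) can be sketched as follows; the station codes and residual values are purely illustrative, not taken from the study.

    ```python
    # Sketch of the station-correction procedure: events are located with
    # local stations only, then the travel-time residuals (observed minus
    # predicted) at each regional station are averaged to give that
    # station's correction. Station names and values are illustrative.
    from collections import defaultdict

    def station_corrections(residuals):
        """residuals: list of (station, residual_seconds) pairs, one per
        event-station observation. Returns the mean residual per station."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        for station, res in residuals:
            sums[station] += res
            counts[station] += 1
        return {sta: sums[sta] / counts[sta] for sta in sums}

    # Hypothetical residuals from four event-station observations:
    obs = [("STA1", 0.42), ("STA1", 0.38), ("STA2", -0.15), ("STA2", -0.25)]
    print(station_corrections(obs))  # mean residual per station
    ```

    In practice each residual would come from subtracting model-predicted travel times from picked arrival times for events whose locations were fixed by the local network.
    
    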

  14. Location, location, location: the misprediction of satisfaction in housing lotteries.

    PubMed

    Dunn, Elizabeth W; Wilson, Timothy D; Gilbert, Daniel T

    2003-11-01

    People tend to overestimate the emotional consequences of future life events, exhibiting an impact bias. The authors replicated the impact bias in a real-life context in which undergraduates were randomly assigned to dormitories (or "houses"). Participants appeared to focus on the wrong factors when imagining their future happiness in the houses. They placed far greater weight on highly variable physical features than on less variable social features in predicting their future happiness in each house, despite accurately recognizing that social features were more important than physical features when asked explicitly about the determinants of happiness. In Experiment 2, we found that this discrepancy emerged in part because participants exhibited an isolation effect, focusing too much on factors that distinguished between houses and not enough on factors that varied only slightly, such as social features. PMID:15189579

  15. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurement of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  16. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  17. NHD INDEXED LOCATIONS FOR PCS PIPE SCHEDULE AND FACILITY LOCATIONS

    EPA Science Inventory

    Permit Compliance System (PCS) pipe schedule and facility locations indexed to the National Hydrography Dataset (NHD). PCS pipe schedule and facility locations are coded onto route.drain (Transport and Coastline Reach) feature of NHD to create Point Events. PCS pipe schedule an...

  18. Distributed Pedestrian Detection Alerts Based on Data Fusion with Accurate Localization

    PubMed Central

    García, Fernando; Jiménez, Felipe; Anaya, José Javier; Armingol, José María; Naranjo, José Eugenio; de la Escalera, Arturo

    2013-01-01

    Among Advanced Driver Assistance Systems (ADAS) pedestrian detection is a common issue due to the vulnerability of pedestrians in the event of accidents. In the present work, a novel approach for pedestrian detection based on data fusion is presented. Data fusion helps to overcome the limitations inherent to each detection system (computer vision and laser scanner) and provides accurate and trustable tracking of any pedestrian movement. The application is complemented by an efficient communication protocol, able to alert vehicles in the surroundings by a fast and reliable communication. The combination of a powerful location, based on a GPS with inertial measurement, and accurate obstacle localization based on data fusion has allowed locating the detected pedestrians with high accuracy. Tests proved the viability of the detection system and the efficiency of the communication, even at long distances. By the use of the alert communication, dangerous situations such as occlusions or misdetections can be avoided. PMID:24008284

  20. Underwater hydrophone location survey

    NASA Technical Reports Server (NTRS)

    Cecil, Jack B.

    1993-01-01

    The Atlantic Undersea Test and Evaluation Center (AUTEC) is a U.S. Navy test range located on Andros Island, Bahamas, and a Division of the Naval Undersea Warfare Center (NUWC), Newport, RI. The Headquarters of AUTEC is located at a facility in West Palm Beach, FL. AUTEC's primary mission is to provide the U.S. Navy with a deep-water test and evaluation facility for making underwater acoustic measurements, testing and calibrating sonars, and providing accurate underwater, surface, and in-air tracking data on surface ships, submarines, aircraft, and weapon systems. Many of these programs are in support of Antisubmarine Warfare (ASW), undersea research and development programs, and Fleet assessment and operational readiness trials. Most tests conducted at AUTEC require precise underwater tracking (plus or minus 3 yards) of multiple acoustic signals emitted with the correct waveshape and repetition criteria from either a surface craft or underwater vehicle.

  1. Location of Microearthquakes in Various Noisy Environments Using Envelope Stacking

    NASA Astrophysics Data System (ADS)

    Oye, V.; Gharti, H.

    2009-12-01

    Monitoring of microearthquakes is routinely conducted in various environments such as hydrocarbon and geothermal reservoirs, mines, dams, seismically active faults, volcanoes, nuclear power plants and CO2 storage sites. In many of these cases the data handled are sensitive and their interpretation may be vital. In some cases, such as during mining or hydraulic fracturing activities, the number of microearthquakes is very large, with tens to thousands of events per hour. In others, almost no events occur in a week, and it may not even be anticipated that many events will occur at all. However, the general setup of seismic networks, including surface and downhole stations, is usually optimized to record as many microearthquakes as possible, thereby trying to lower the detection threshold of the network. This process is obviously limited to some extent. Most microearthquake location techniques take advantage of a combination of P- and S-wave onset times that can often be picked reliably in an automatic mode. Moreover, when using seismic wave onset times, sometimes in combination with seismic wave polarization, these methods are more accurate than migration-based location routines. However, many events cannot be located because their magnitude is too small, i.e. the P- and/or S-wave onset times cannot be picked accurately on a sufficient number of receivers. Nevertheless, these small events are important for the interpretation of the processes being monitored, and even an inferior estimate of event locations and strengths is valuable information. Moreover, the smaller the event, the more often such events statistically occur, and the more important such additional information becomes. In this study we try to enhance the performance of any microseismic network by providing additional estimates of event locations below the actual detection threshold. 
We present a migration-based event location method, where we project the recorded seismograms onto the ray
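    A generic migration-style location of the kind alluded to above can be sketched as follows; this is not the authors' exact method, and the constant-velocity travel times and synthetic Gaussian envelopes are illustrative assumptions.

    ```python
    # Minimal sketch of migration-based ("envelope stacking") location:
    # for each candidate grid point, delay each station's waveform envelope
    # by the predicted travel time and stack; the grid point with the
    # largest stacked amplitude is the location estimate.
    import numpy as np

    def stack_power(envelopes, stations, grid, v, dt):
        """envelopes: (n_sta, n_samp); stations, grid: (n,2)/(m,2) xy in km;
        v: velocity km/s; dt: sample interval s. Returns peak stack per point."""
        n_samp = envelopes.shape[1]
        power = np.zeros(len(grid))
        for g, point in enumerate(grid):
            stack = np.zeros(n_samp)
            for s, sta in enumerate(stations):
                # shift each trace back by its predicted travel time
                delay = int(round(np.linalg.norm(point - sta) / v / dt))
                stack[:n_samp - delay] += envelopes[s, delay:]
            power[g] = stack.max()
        return power

    # Synthetic test: source at (5, 5) km, three stations, Gaussian envelopes.
    dt, v = 0.01, 3.0
    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    t = np.arange(0, 10, dt)
    src, t0 = np.array([5.0, 5.0]), 2.0
    env = np.array([np.exp(-((t - t0 - np.linalg.norm(src - s) / v) ** 2) / 0.05)
                    for s in stations])
    grid = np.array([[x, y] for x in range(11) for y in range(11)], float)
    best = grid[np.argmax(stack_power(env, stations, grid, v, dt))]
    print(best)  # should recover a point near (5, 5)
    ```

    Because envelopes are broad, the stack remains coherent even when individual onsets are too emergent to pick, which is why such methods can locate events below the picking-based detection threshold.
    
    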

  2. An information-theoretic approach to microseismic source location

    NASA Astrophysics Data System (ADS)

    Prange, Michael D.; Bose, Sandip; Kodio, Ousmane; Djikpesse, Hugues A.

    2015-04-01

    There has been extensive work on seismic source localization, going as far back as Geiger's 1912 paper, that is based on least-squares fitting of arrival times. The primary advantage of time-based methods over waveform-based methods (e.g. reverse-time migration and beam forming) is that simulated arrival times are considerably more reliable than simulated waveforms, especially in the context of an uncertain velocity model, thereby yielding more reliable estimates of source location. However, time-based methods are bedeviled by the unsolved challenges of accurate time picking and labelling of the seismic phases in the waveforms for each event. Drawing from Woodward's canonical 1953 text on the application of information theory to radar applications, we show that time-based methods can be applied directly to waveform data, thus capturing the advantages of time-based methods without being impacted by the aforementioned hindrances. We extend Woodward's approach to include an unknown distortion on wavelet amplitude and phase, showing that the related marginalization integrals can be analytically evaluated. We also provide extensions for correlation-based location methods such as relative localization and the S-P method. We demonstrate this approach through applications to microseismic event location, presenting formulations and results for both absolute and relative localization approaches, with receiver arrays either in a borehole or on the surface. By properly quantifying uncertainty in our location estimates, our formulations provide an objective measure for ranking the accuracy of microseismic source location methodologies.
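    The classical arrival-time fitting that this abstract builds on can be illustrated with a simple grid search; the constant velocity, station geometry and noise-free synthetic picks below are assumptions for illustration, not the authors' formulation.

    ```python
    # Grid-search sketch of classical least-squares arrival-time location:
    # for each trial epicenter, the best-fitting origin time is the mean of
    # (observed arrival - predicted travel time); the trial point with the
    # smallest residual sum of squares is selected.
    import numpy as np

    def locate(picks, stations, grid, v):
        """picks: (n,) arrival times; stations: (n,2) xy km; grid: (m,2) trials."""
        best, best_cost = None, np.inf
        for point in grid:
            tt = np.linalg.norm(stations - point, axis=1) / v  # predicted travel times
            t0 = np.mean(picks - tt)                 # least-squares origin time
            cost = np.sum((picks - t0 - tt) ** 2)
            if cost < best_cost:
                best, best_cost = point, cost
        return best, best_cost

    v = 3.0
    stations = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
    true_src, true_t0 = np.array([3.0, 2.0]), 10.0
    picks = true_t0 + np.linalg.norm(stations - true_src, axis=1) / v
    grid = np.array([[x, y] for x in np.arange(0, 8.5, 0.5)
                            for y in np.arange(0, 8.5, 0.5)])
    epi, cost = locate(picks, stations, grid, v)
    print(epi, cost)
    ```

    The information-theoretic approach of the paper replaces the explicit time picks with waveform data, but the quantity being fit is conceptually the same arrival-time misfit.
    
    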

  3. Pan-information Location Map

    NASA Astrophysics Data System (ADS)

    Zhu, X. Y.; Guo, W.; Huang, L.; Hu, T.; Gao, W. X.

    2013-11-01

    A huge amount of information, including geographic, environmental, socio-economic, personal and social network information, has been generated from diverse sources. Most of this information exists separately and is disorderly even if some of it is about the same person, feature, phenomenon or event. Users generally need to collect related information from different sources and then utilize them in applications. An automatic mechanism, therefore, for establishing a connection between potentially-related information will profoundly expand the usefulness of this huge body of information. A connection tie is semantic location describing semantically concepts and attributes of locations as well as relationships between locations, since 80% of information contains some kind of geographic reference but not all of geographic reference has explicit geographic coordinates. Semantic location is an orthogonal form of location representation which can be represented as domain ontology or UML format. Semantic location associates various kinds of information about a same object to provide timely information services according to users' demands, habits, preferences and applications. Based on this idea, a Pan-Information Location Map (PILM) is proposed as a new-style 4D map to associates semantic location-based information dynamically to organize and consolidate the locality and characteristics of corresponding features and events, and delivers on-demand information with a User-Adaptive Smart Display (UASD).

  4. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. 
In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
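    For reference, the standard point-lens relations behind the "einstein" timescale mentioned above can be written (generic notation, not specific to this proposal):

    ```latex
    \theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_s - D_l}{D_l D_s}}, \qquad
    t_E = \frac{\theta_E D_l}{v_\perp} \propto \sqrt{M},
    ```

    where $M$ is the lens mass, $D_l$ and $D_s$ are the lens and source distances, and $v_\perp$ is the relative transverse velocity; hence a measured einstein timescale, combined with known kinematics and geometry, constrains the lens mass directly.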

  5. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  6. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving a Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
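    The Richardson extrapolation idea used above can be sketched on a generic finite-difference example (a second derivative rather than the paper's actual solver): combining estimates at step h and h/2 cancels the leading error term.

    ```python
    # Richardson extrapolation sketch: a central difference for f''(x) has
    # O(h^2) error; the combination (4*A(h/2) - A(h))/3 cancels the h^2
    # term, leaving O(h^4) error. Generic illustration only.
    import math

    def second_diff(f, x, h):
        """Central difference approximation to f''(x), error O(h^2)."""
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

    def richardson(f, x, h):
        """One extrapolation step: error drops from O(h^2) to O(h^4)."""
        return (4.0 * second_diff(f, x, h / 2) - second_diff(f, x, h)) / 3.0

    f = math.sin
    x, h = 1.0, 0.1
    exact = -math.sin(x)            # (sin)'' = -sin
    print(abs(second_diff(f, x, h) - exact))  # O(h^2) error
    print(abs(richardson(f, x, h) - exact))   # several orders smaller
    ```

    The abstract's point is that the same trick can be applied to expectation values computed on a crude mesh, not only to derivatives.
    
    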

  7. Properties of the near-field term and its effect on polarisation analysis and source locations of long-period (LP) and very-long-period (VLP) seismic events at volcanoes

    NASA Astrophysics Data System (ADS)

    Lokmer, Ivan; Bean, Christopher J.

    2010-04-01

    Seismicity that originates within volcanic magmatic and hydrothermal plumbing systems is characterised by wavelengths that are often comparable to or longer than the source-receiver distance. The effect of such a near-field configuration must be explored when analysing these signals. Herein, we summarise properties of near-field observations for both a single force and moment-tensor seismic sources. We show radiation patterns of the near-, intermediate- and far-field terms for the source types that are most likely candidates for long- (LP) and very-long-period (VLP) volcanic seismicity, including: a single force, compensated linear vector dipole (CLVD), a tensile crack and a pipe-like source. We find that the deviation of the first motion polarisation from the radial direction is significant in all planes except one whose normal is parallel to the symmetry axis (if there is one) of the source mechanism. However, this deviation is less pronounced (or even negligible), when there is a considerable volumetric component in the source (as in the case of a tensile crack or pipe). Our location test shows that the accuracy of locations obtained using the semblance or cross-correlation techniques is very significantly affected by the near-field geometry. This effect is especially pronounced for shallow sources, such as often encountered on volcanoes, and decreases with increasing source depth. Hence, in practical applications on volcanoes, 3D full waveform numerical simulation (including topography and structural heterogeneities) should be used in order to both validate location techniques and as an interpretational aid to reduce misinterpretations of location results.

  8. A unified Bayesian framework for relative microseismic location

    NASA Astrophysics Data System (ADS)

    Poliannikov, Oleg V.; Prange, Michael; Malcolm, Alison; Djikpesse, Hugues

    2013-07-01

    We study the problem of determining an unknown microseismic event location relative to previously located events using a single monitoring array in a monitoring well. We show that using the available information about the previously located events for locating new events is advantageous compared to locating each event independently. By analysing confidence regions, we compare the performance of two previously proposed location methods, double-difference and interferometry, for varying signal-to-noise ratio and uncertainty in the velocity model. We show that one method may have an advantage over another depending on the experiment geometry, assumptions about uncertainty in velocity and recorded signal, etc. We propose a unified approach to relative event location that includes double-difference and interferometry as special cases, and is applicable to velocity models and well geometries of arbitrary complexity, producing location estimators that are superior to those of double-difference and interferometry.
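    The double-difference residual that such relative-location methods minimize takes the standard form (generic notation):

    ```latex
    dr_k^{ij} = \left(t_k^i - t_k^j\right)^{\mathrm{obs}} - \left(t_k^i - t_k^j\right)^{\mathrm{calc}},
    ```

    where $t_k^i$ is the arrival time of event $i$ at receiver $k$; differencing arrival times of nearby events largely cancels path and velocity-model errors common to both, which is why relative locations can be far more precise than absolute ones.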

  9. Use of Imperfect Calibration for Seismic Location

    SciTech Connect

    Myers, S C; Schultz, C A

    2000-07-12

    Efforts to more effectively monitor nuclear explosions include the calibration of travel times along specific paths. Benchmark events are used to improve travel-time prediction by (1) improving models, (2) determining travel times empirically, or (3) using a hybrid approach. Even velocity models that are determined using geophysical analogy (i.e. models determined without the direct use of calibration data) require validation with calibration events. Ideally, the locations and origin times of calibration events would be perfectly known. However, the existing set of perfectly known events is spatially limited and many of these events occurred prior to the installation of current monitoring stations, thus limiting their usefulness. There are, however, large numbers of well (but not perfectly) located events that are spatially distributed, and many of these events may be used for calibration. Identifying the utility and limitations of the spatially distributed set of imperfect calibration data is of paramount importance to the calibration effort. In order to develop guidelines for calibration utility, we examine the uncertainty and correlation of location parameters under several network configurations that are commonly used to produce calibration-grade locations. We then map these calibration uncertainties through location procedures with network configurations that are likely in monitoring situations. By examining the ramifications of depth and origin-time uncertainty, we expand on previous studies that focus strictly on epicenter accuracy. Particular attention is given to examples where calibration events are determined with teleseismic or local networks and monitoring is accomplished with a regional network.

  10. Ice jam flooding: a location prediction model

    NASA Astrophysics Data System (ADS)

    Collins, H. A.

    2009-12-01

    Flooding created by ice jamming is a climatically dependent natural hazard frequently affecting cold regions with disastrous results. Basic known physical characteristics which combine in the landscape to create an ice jam flood are modeled on the Cattaraugus Creek Watershed, located in Western New York State. Terrain analysis of topographic features and built-environment features is conducted using Geographic Information Systems in order to predict the location of ice jam flooding events. The purpose of this modeling is to establish a broadly applicable watershed-scale model for predicting the probable locations of ice jam flooding, informed by the locations of historic ice jam flooding events.

  11. Integrating diverse calibration products to improve seismic location

    SciTech Connect

    Schultz, C; Myers, S; Swenson, J; Flanagan, M; Pasyanos, M; Bhattacharyya, J; Dodge, D

    2000-07-17

The monitoring of nuclear explosions on a global basis requires accurate event locations. As an example, under the Comprehensive Test Ban Treaty, the size of an on-site inspection search area is 1,000 square kilometers, or approximately 17 km accuracy assuming a circular area. This level of accuracy is a significant challenge for small events that are recorded by a sparse regional network. In such cases, the travel time of seismic energy is strongly affected by crustal and upper mantle heterogeneity, and large biases can result. This can lead to large systematic errors in location and, more importantly, to invalid error bounds on location estimates. Corrections can be developed and integrated to remove these biases. These path corrections take the form of both three-dimensional model corrections and three-dimensional empirically based travel-time corrections. LLNL is currently working to integrate a diverse set of three-dimensional velocity model and empirically based travel-time products into one consistent and validated calibration set. To perform this task, we have developed a hybrid approach that uses three-dimensional model corrections for a region and then uses reference events, when available, to improve the path correction. This Bayesian kriging approach takes the best a priori three-dimensional velocity model produced for a local region as a baseline correction. When multiple models are produced for a local region, their uncertainties are compared against each other using ground truth data and an optimal model is chosen. We are in the process of combining three-dimensional models on a region-by-region basis and integrating the uncertainties to form a global correction set. The Bayesian kriging prediction combines this a priori model and its statistics with the empirical calibrations to give an optimal a posteriori calibration estimate. 
In regions where there is limited or no coverage by reference events the
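The a priori/a posteriori combination described above can be sketched, in its simplest form, as a precision-weighted (inverse-variance) fusion of the model-based correction with the empirical one. The function and numbers below are illustrative assumptions, not LLNL's actual kriging implementation, which additionally models spatial covariance between reference events.

```python
def combine_corrections(model_corr, model_var, empirical_corr, empirical_var):
    """Precision-weighted fusion of an a priori model-based travel-time
    correction with an empirical (reference-event) correction.
    Returns the a posteriori correction and its variance."""
    w_model = 1.0 / model_var
    w_emp = 1.0 / empirical_var
    post_var = 1.0 / (w_model + w_emp)
    post_corr = post_var * (w_model * model_corr + w_emp * empirical_corr)
    return post_corr, post_var

# Example: the 3-D model predicts a +1.2 s correction (sigma = 0.8 s); nearby
# reference events suggest +0.6 s (sigma = 0.3 s). The posterior is pulled
# toward the better-constrained empirical estimate.
corr, var = combine_corrections(1.2, 0.8**2, 0.6, 0.3**2)
```

Far from any reference event the empirical variance grows, and the estimate falls back toward the baseline model correction, which matches the hybrid behavior described in the abstract.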

  12. Repeating seismic events in China.

    PubMed

    Schaff, David P; Richards, Paul G

    2004-02-20

    About 10% of seismic events in and near China from 1985 to 2000 were repeating events not more than about 1 kilometer from each other. We cross-correlated seismograms from approximately 14,000 earthquakes and explosions and measured relative arrival times to approximately 0.01 second, enabling lateral location precision of about 100 to 300 meters. Such precision is important for seismic hazard studies, earthquake physics, and nuclear test ban verification. Recognition and measurement of repeating signals in archived data and the resulting improvement in location specificity quantifies the inaccuracy of current procedures for picking onset times and locating events. PMID:14976310
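The ~0.01 s relative arrival times reported above come from waveform cross-correlation. A minimal generic sketch (not the authors' code) is to cross-correlate two similar seismograms and refine the integer-lag peak with three-point parabolic interpolation, the standard way to reach a fraction of a sample:

```python
import numpy as np

def relative_arrival(trace_a, trace_b, dt):
    """Relative arrival-time shift (seconds) between two similar waveforms
    via cross-correlation, refined to sub-sample precision by fitting a
    parabola through the correlation peak and its two neighbours."""
    n = len(trace_a)
    cc = np.correlate(trace_a, trace_b, mode="full")
    k = int(np.argmax(cc))
    # Parabolic (3-point) interpolation around the integer-lag peak.
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    else:
        delta = 0.0
    lag = (k - (n - 1)) + delta  # lag in samples; positive => a lags b
    return lag * dt

# Two copies of the same wavelet, one delayed by 3 samples at 100 Hz sampling.
t = np.arange(200)
w = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
shifted = np.roll(w, 3)
# relative_arrival(shifted, w, 0.01) ≈ 0.03 s
```

With real data the traces would first be bandpass filtered and windowed around the phase of interest; differential times like this feed directly into relative (double-difference-style) location schemes.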

  13. UTILIZING RESULTS FROM INSAR TO DEVELOP SEISMIC LOCATION BENCHMARKS AND IMPLICATIONS FOR SEISMIC SOURCE STUDIES

    SciTech Connect

    M. BEGNAUD; ET AL

    2000-09-01

Obtaining accurate seismic event locations is one of the most important goals for monitoring detonations of underground nuclear tests. This is a particular challenge at small magnitudes, where the number of recording stations may be less than 20. Although many different procedures are being developed to improve seismic location, most procedures suffer from inadequate testing against accurate information about a seismic event. Events with well-defined attributes, such as latitude, longitude, depth and origin time, are commonly referred to as ground truth (GT). Ground truth comes in many forms and with many different levels of accuracy. Interferometric Synthetic Aperture Radar (InSAR) can provide independent and accurate information (ground truth) regarding ground surface deformation and/or rupture. Relating surface deformation to seismic events is trivial when events are large and create a significant surface rupture, such as for the M{sub w} = 7.5 event that occurred in the remote northern region of the Tibetan plateau in 1997. The event, a vertical strike-slip event, appeared anomalous in nature due to the lack of large aftershocks and had an associated surface rupture of over 180 km that was identified and modeled using InSAR. The east-west orientation of the fault rupture provides excellent ground truth for latitude, but is of limited use for longitude. However, a secondary rupture occurred 50 km south of the main shock rupture trace that can provide ground truth with accuracy within 5 km. The smaller, 5-km-long secondary rupture presents a challenge for relating the deformation to a seismic event. The rupture is believed to have a thrust mechanism; the dip of the fault allows for some separation between the secondary rupture trace and its associated event epicenter, although not as much as is currently observed from catalog locations. Few events within the time period of the InSAR analysis are candidates for the secondary rupture. Of these, we have

  14. Activating Event Knowledge

    PubMed Central

    Hare, Mary; Jones, Michael; Thomson, Caroline; Kelly, Sarah; McRae, Ken

    2009-01-01

    An increasing number of results in sentence and discourse processing demonstrate that comprehension relies on rich pragmatic knowledge about real-world events, and that incoming words incrementally activate such knowledge. If so, then even outside of any larger context, nouns should activate knowledge of the generalized events that they denote or typically play a role in. We used short stimulus onset asynchrony priming to demonstrate that (1) event nouns prime people (sale-shopper) and objects (trip-luggage) commonly found at those events; (2) location nouns prime people/animals (hospital-doctor) and objects (barn-hay) commonly found at those locations; and (3) instrument nouns prime things on which those instruments are commonly used (key-door), but not the types of people who tend to use them (hose-gardener). The priming effects are not due to normative word association. On our account, facilitation results from event knowledge relating primes and targets. This has much in common with computational models like LSA or BEAGLE in which one word primes another if they frequently occur in similar contexts. LSA predicts priming for all six experiments, whereas BEAGLE correctly predicted that priming should not occur for the instrument-people relation but should occur for the other five. We conclude that event-based relations are encoded in semantic memory and computed as part of word meaning, and have a strong influence on language comprehension. PMID:19298961

  15. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. Comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperatures is possible thanks to the thermometer's low inertia and the fast space marching method applied to solve the inverse heat conduction problem.

  16. Location, Location, Location: Development of Spatiotemporal Sequence Learning in Infancy

    ERIC Educational Resources Information Center

    Kirkham, Natasha Z.; Slemmer, Jonathan A.; Richardson, Daniel C.; Johnson, Scott P.

    2007-01-01

    We investigated infants' sensitivity to spatiotemporal structure. In Experiment 1, circles appeared in a statistically defined spatial pattern. At test 11-month-olds, but not 8-month-olds, looked longer at a novel spatial sequence. Experiment 2 presented different color/shape stimuli, but only the location sequence was violated during test;…

  17. NHD INDEXED LOCATIONS FOR SEWAGE NO DISCHARGE ZONES

    EPA Science Inventory

    Locations where vessel sewage discharge is prohibited. Sewage no discharge zone (NDZ) locations are coded onto route.drain (Transport and Coastline Reach) feature of NHD to create Point Events and Linear Events. Sewage no discharge zone locations are coded onto region.rch (Wat...

  18. Helicopter magnetic survey conducted to locate wells

    SciTech Connect

    Veloski, G.A.; Hammack, R.W.; Stamp, V.; Hall, R.; Colina, K.

    2008-07-01

A helicopter magnetic survey was conducted in August 2007 over 15.6 sq mi at the Naval Petroleum Reserve No. 3’s (NPR-3) Teapot Dome Field near Casper, Wyoming. The survey’s purpose was to accurately locate wells drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood for EOR, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells missing from the well database and to provide accurate locations for all wells. The ability of the helicopter magnetic survey to accurately locate wells was verified by comparing airborne well picks with well locations from an intensive ground search of a small test area.

  19. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form, but that form is not convenient for computer-based calculations. The developed equations provide improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate the physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  20. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  1. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite orbiting the earth at altitudes around 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent plate temperatures.

  2. Sudden event recognition: a survey.

    PubMed

    Suriani, Nor Surayahani; Hussain, Aini; Zulkifley, Mohd Asyraf

    2013-01-01

    Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition. PMID:23921828

  4. Accurate Cross Sections for Microanalysis

    PubMed Central

    Rez, Peter

    2002-01-01

To calculate the intensity of x-ray emission in electron beam microanalysis requires a knowledge of the energy distribution of the electrons in the solid, the energy variation of the ionization cross section of the relevant subshell, the fraction of ionization events producing x rays of interest, and the absorption coefficient of the x rays on the path to the detector. The theoretical predictions and experimental data available for ionization cross sections are limited mainly to K shells of a few elements. Results of systematic plane-wave Born approximation calculations with exchange for K, L, and M shell ionization cross sections over the range of electron energies used in microanalysis are presented. Comparisons are made with experimental measurements for selected K shells, and it is shown that the plane-wave theory is not appropriate for overvoltages less than 2.5. PMID:27446747

  5. CHED Events: Salt Lake City

    NASA Astrophysics Data System (ADS)

    Wink, Donald J.

    2009-03-01

    The Division of Chemical Education (CHED) Committee meetings planned for the Spring 2009 ACS Meeting in Salt Lake City will be in the Marriott City Center Hotel. Check the location of other CHED events, the CHED Social Event, the Undergraduate Program, Sci-Mix, etc. because many will be in the Salt Palace Convention Center.

  6. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.

  7. Location, Location, Location: Where Do Location-Based Services Fit into Your Institution's Social Media Mix?

    ERIC Educational Resources Information Center

    Nekritz, Tim

    2011-01-01

    Foursquare is a location-based social networking service that allows users to share their location with friends. Some college administrators have been thinking about whether and how to take the leap into location-based services, which are also known as geosocial networking services. These platforms, which often incorporate gaming elements like…

  8. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921

  9. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may determine whether a possible On-Site Inspection (OSI) is triggered. In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2 and that its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
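As a minimal sketch of the idea (not the authors' implementation), a source-specific station correction can be taken as the mean residual between observed and model-predicted travel times over the well-located reference events; the correction is then subtracted from that station's arrivals before relocating new events from the same source region. Station names and numbers below are invented:

```python
from collections import defaultdict

def station_corrections(residuals):
    """Source-specific station corrections as the mean travel-time residual
    (observed minus 1-D-model-predicted, e.g. IASPEI91) per station, estimated
    from well-located reference events in the source region."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for station, res in residuals:
        sums[station] += res
        counts[station] += 1
    return {sta: sums[sta] / counts[sta] for sta in sums}

# Residuals (s) from reference events: station ABC is systematically ~1.5 s
# slow relative to the 1-D model, so 1.5 s is removed from its arrivals
# before relocation.
obs = [("ABC", 1.4), ("ABC", 1.6), ("ABC", 1.5), ("XYZ", -0.2), ("XYZ", 0.0)]
corr = station_corrections(obs)  # ≈ {'ABC': 1.5, 'XYZ': -0.1}
```

A practical scheme would also weight by pick quality and track the scatter of the residuals, since a correction estimated from few noisy picks should carry a correspondingly large uncertainty.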

  10. Sensor-Generated Time Series Events: A Definition Language

    PubMed Central

    Anguera, Aurea; Lara, Juan A.; Lizcano, David; Martínez, Maria Aurora; Pazos, Juan

    2012-01-01

There are now a great many domains where information is recorded by sensors over a limited time period or on a permanent basis. This data flow leads to sequences of data known as time series. In many domains, like seismography or medicine, time series analysis focuses on particular regions of interest, known as events, whereas the remainder of the time series contains hardly any useful information. In these domains, there is a need for mechanisms to identify and locate such events. In this paper, we propose an event definition language that is general enough to be used to easily and naturally define events in time series recorded by sensors in any domain. The proposed language has been applied to the definition of time series events generated within the branch of medicine dealing with balance-related functions in human beings. A device, called a posturograph, is used to study balance-related functions. The platform has four sensors that record the pressure intensity being exerted on the platform, generating four interrelated time series. As opposed to the existing ad hoc proposals, the results confirm that the proposed language is valid, that is, generally applicable and accurate, for identifying the events contained in the time series.
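The proposed definition language is far more expressive than this, but the underlying idea, defining an event as a bounded region of interest in an otherwise uninteresting series, can be sketched with a simple amplitude-threshold rule (all names and values below are illustrative, not from the paper):

```python
def find_events(series, threshold, min_len=3):
    """Locate 'events' in a sampled signal, defined here (for illustration)
    as maximal runs of at least `min_len` consecutive samples whose absolute
    value exceeds `threshold`. Returns (start, end) index pairs, end exclusive."""
    events, start = [], None
    for i, x in enumerate(series):
        if abs(x) > threshold:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    # Close an event that runs to the end of the series.
    if start is not None and len(series) - start >= min_len:
        events.append((start, len(series)))
    return events

sig = [0, 0, 2, 3, 4, 0, 0, 5, 0, 1, 6, 7, 8, 9, 0]
find_events(sig, threshold=1)  # → [(2, 5), (10, 14)]
```

A real definition language layers richer predicates (slopes, durations, relations between the four posturograph channels) on top of this kind of region extraction.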

  11. Improved Teleseismic Locations of Shallow Subduction Zone Earthquakes

    NASA Astrophysics Data System (ADS)

    Bisrat, S. T.; Deshon, H. R.; Engdahl, E. R.; Bilek, S. L.

    2009-12-01

    Improved precision teleseismic earthquake locations in subduction zones are being used to better understand shallow megathrust frictional conditions and determine the global distribution of tsunami earthquakes. Most global teleseismic catalogs fail to accurately locate shallow subduction zone earthquakes, especially mid-magnitude events, leading to increased error in determining source time functions useful for identifying tsunami earthquakes. The Engdahl, van der Hilst and Buland (EHB) method had addressed this problem in part by including the teleseismic depth phases pP, pwP and sP in the relocation algorithm. The EHB catalog relies on phase times reported to the ISC and NEIC, but additional high quality depth phase onsets can be incorporated in the relocation procedure to enhance the robustness of individual locations. We present improvements to an automated frequency-based picker that identifies depth phases not reported in the standard catalogs. The revised autopicker uses abrupt amplitude changes of the power spectral density (PSD) function calculated at optimized frequencies for each waveform. It is being used to pick onsets for P and depth phases pP, pwP or sP for inclusion in the EHB phase catalog. In the case of events with an emergent P-wave onset or with a complex waveform consisting of sub-events, the autopicker may either overlook a relatively small change in frequency of the first arrival or misidentify the onset arrival time of associated later arrivals, leading to erroneous results. We track those waveforms by comparing the difference of the P-wave arrival time from ISC/NEIC and the autopicker. The phase arrivals can then be adjusted manually as they usually make up a few percent of the whole data. Epicentral changes following relocation using additional depth phases are generally small (<5 km). Changes in depth may be on the order of 10s of km for some events, though the standard deviation of depth changes within each subduction zone is ~5 km. 
We
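A rough sketch of why the depth phases used above constrain depth so well: the surface reflection pP trails direct P by approximately 2h·cos(i)/v_p, where h is the focal depth, i the (near-vertical) takeoff angle, and v_p the average P velocity above the source, so the measured delay maps almost directly to depth. The velocity and angle below are illustrative assumptions, not values from the EHB procedure:

```python
import math

def depth_from_pP(delay_s, v_p=6.5, takeoff_deg=20.0):
    """Rough focal depth (km) from the pP-P differential time, using the
    approximation t_pP - t_P ≈ 2*h*cos(i)/v_p for a near-vertical ray."""
    i = math.radians(takeoff_deg)
    return v_p * delay_s / (2.0 * math.cos(i))

# A 9.2 s pP-P delay implies a source near 30 km depth under these assumptions.
h = depth_from_pP(9.2)  # ≈ 31.8 km
```

Because the delay scales linearly with depth, a ~0.3 s pick error translates to only ~1 km of depth error, which is why adding reliable depth-phase picks tightens hypocentral depths far more than epicenters.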

  12. Location, location, location: tissue-specific regulation of immune responses

    PubMed Central

    Hu, Wei; Pasare, Chandrashekhar

    2013-01-01

    Discovery of DCs and PRRs has contributed immensely to our understanding of induction of innate and adaptive immune responses. Activation of PRRs leads to secretion of inflammatory cytokines that regulate priming and differentiation of antigen-specific T and B lymphocytes. Pathogens enter the body via different routes, and although the same set of PRRs is likely to be activated, it is becoming clear that the route of immune challenge determines the nature of outcome of adaptive immunity. In addition to the signaling events initiated following innate-immune receptor activation, the cells of the immune system are influenced by the microenvironments in which they reside, and this has a direct impact on the resulting immune response. Specifically, immune responses could be influenced by specialized DCs, specific factors secreted by stromal cells, and also, by commensal microbiota present in certain organs. Following microbial detection, the complex interactions among DCs, stromal cells, and tissue-specific factors influence outcome of immune responses. In this review, we summarize recent findings on the phenotypic heterogeneity of innate and adaptive immune cells and how tissue-specific factors in the systemic and mucosal immune system influence the outcome of adaptive-immune responses. PMID:23825388

  13. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, have indeed made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be assembled into a comprehensive “model” of the life of cells, tissues, and organisms without the use of more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system in order to assess their interactions and to integrate diverse types of information obtainable from this system into models that can explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptome can be carried out using high-throughput cDNA microarray analysis,15-17 but the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional protein can be altered significantly, and become independent of mRNA levels, as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  14. Towards accurate and automatic morphing

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Sharkey, Paul M.

    2005-10-01

Image morphing has proved to be a powerful tool for generating compelling and pleasing visual effects and has been widely used in the entertainment industry. However, traditional image morphing methods suffer from a number of drawbacks: feature specification between images is tedious, and the reliance on 2D information ignores the possible advantages to be gained from 3D knowledge. In this paper, we utilize recent advances in computer vision technology to diminish these drawbacks. Drawing on multi-view geometry theory, we propose a processing pipeline based on three reference images. We first seek a few seed correspondences using robust methods and then recover multi-view geometries from the seeds through bundle adjustment. Guided by the recovered two- and three-view geometries, a novel line matching algorithm across three views is then derived through edge growth, line fitting, and two- and three-view geometry constraints. Corresponding lines on a novel image are then obtained by an image transfer method; finally, matched lines are fed into traditional morphing methods and novel images are generated. Images generated by this pipeline have advantages over those from traditional morphing methods: they have an inherent 3D foundation and are therefore physically close to real scenes; not only images located between the baseline connecting two reference image centers, but also extrapolated images away from the baseline, are possible; and the whole process can be either wholly automatic, or at least the tedious task of feature specification in traditional morphing methods can be greatly relieved.

  15. Transionospheric chirp event classifier

    SciTech Connect

    Argo, P.E.; Fitzgerald, T.J.; Freeman, M.J.

    1995-09-01

    In this paper we discuss a project designed to provide computer recognition of the transionospheric chirps/pulses measured by the Blackbeard (BB) satellite and expected to be measured by the upcoming FORTE satellite. The Blackbeard data have so far been examined manually; this has been satisfactory for the relatively small amount of data taken by Blackbeard. But with the advent of the FORTE system, which by some accounts might "see" thousands of events per day, it is important to provide a software/hardware method of accurately analyzing the data. In fact, we are providing an onboard DSP system for FORTE, which will test the usefulness of our Event Classifier techniques in situ. At present we are constrained to work with data from the Blackbeard satellite, and we discuss the progress made to date.

  16. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) system is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving, and averaging calculated locations from multiple satellite passes are also described, along with results pertaining to single-pass and multiple-pass location estimate accuracy.
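    As an illustration of the averaging step mentioned above, the sketch below combines hypothetical per-pass location estimates by inverse-variance weighting; the function name, inputs, and weighting scheme are assumptions for illustration, not the operational Sarsat algorithm (which also matches and resolves ambiguous mirror solutions before averaging).

```python
import numpy as np

def combine_pass_estimates(estimates, sigmas):
    """Inverse-variance weighted average of per-pass location estimates.

    `estimates` is an (n, 2) array of per-pass solutions and `sigmas` the
    per-pass 1-sigma uncertainties.  Passes with smaller uncertainty get
    proportionally more weight in the combined solution.
    """
    w = 1.0 / np.square(np.asarray(sigmas, dtype=float))   # weights 1/sigma^2
    est = np.asarray(estimates, dtype=float)
    return (w[:, None] * est).sum(axis=0) / w.sum()
```

With equal uncertainties this reduces to a plain mean; a pass with a large uncertainty contributes almost nothing.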

  17. Event-Synchronous Analysis for Connected-Speech Recognition.

    NASA Astrophysics Data System (ADS)

    Morgan, David Peter

    The motivation for event-synchronous speech analysis originates from linear system theory where the speech-source transfer function is excited by an impulse-like driving function. In speech processing, the impulse response obtained from this linear system contains both semantic information and the vocal tract transfer function. Typically, an estimate of the transfer function is obtained via the spectrum by assuming a short-time stationary signal within some analysis window. However, this spectrum is often distorted by the periodic effects which occur when multiple (pitch) impulses are included in the analysis window. One method to remove these effects would be to deconvolve the excitation function from the speech signal to obtain the transfer function. The more attractive approach is to locate and identify the excitation function and synchronize the analysis frame with it. Event-synchronous analysis differs from pitch-synchronous analysis in that there are many events useful for speech recognition which are not pitch excited. In addition, event-synchronous analysis locates the important boundaries between speech events, such as voiced to unvoiced and silence to burst transitions. In asynchronous processing, an analysis frame which contains portions of two adjacent but dissimilar speech events is often so ambiguous as to distort or mask the important "phonetic" features of both events. Thus event-synchronous processing is employed to obtain an accurate spectral estimate and in turn enhance the estimate of the vocal-tract transfer function. Among the issues which have been addressed in implementing an event-synchronous recognition system are those of developing robust event (pitch, burst, etc.) detectors, synchronous-analysis methodologies, more meaningful feature sets, and dynamic programming algorithms for nonlinear time alignment. An advantage of event-synchronous processing is that the improved representation of the transfer function creates an opportunity for

  18. Reversible micromachining locator

    DOEpatents

    Salzer, Leander J.; Foreman, Larry R.

    1999-01-01

    This invention provides a device which includes a locator, a kinematic mount positioned on a conventional tooling machine, a part carrier disposed on the locator and a retainer ring. The locator has disposed therein a plurality of steel balls, placed in an equidistant position circumferentially around the locator. The kinematic mount includes a plurality of magnets which are in registry with the steel balls on the locator. In operation, a blank part to be machined is placed between a surface of a locator and the retainer ring (fitting within the part carrier). When the locator (with a blank part to be machined) is coupled to the kinematic mount, the part is thus exposed for the desired machining process. Because the locator is removably attachable to the kinematic mount, it can easily be removed from the mount, reversed, and reinserted onto the mount for additional machining. Further, the locator can likewise be removed from the mount and placed onto another tooling machine having a properly aligned kinematic mount. Because of the unique design and use of magnetic forces of the present invention, positioning errors of less than 0.25 micrometer for each machining process can be achieved.

  19. Reversible micromachining locator

    DOEpatents

    Salzer, L.J.; Foreman, L.R.

    1999-08-31

    This invention provides a device which includes a locator, a kinematic mount positioned on a conventional tooling machine, a part carrier disposed on the locator and a retainer ring. The locator has disposed therein a plurality of steel balls, placed in an equidistant position circumferentially around the locator. The kinematic mount includes a plurality of magnets which are in registry with the steel balls on the locator. In operation, a blank part to be machined is placed between a surface of a locator and the retainer ring (fitting within the part carrier). When the locator (with a blank part to be machined) is coupled to the kinematic mount, the part is thus exposed for the desired machining process. Because the locator is removably attachable to the kinematic mount, it can easily be removed from the mount, reversed, and reinserted onto the mount for additional machining. Further, the locator can likewise be removed from the mount and placed onto another tooling machine having a properly aligned kinematic mount. Because of the unique design and use of magnetic forces of the present invention, positioning errors of less than 0.25 micrometer for each machining process can be achieved. 7 figs.

  20. Semantic Location Extraction from Crowdsourced Data

    NASA Astrophysics Data System (ADS)

    Koswatte, S.; Mcdougall, K.; Liu, X.

    2016-06-01

    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency, and abundance are some of the key reasons for this high interest. Conversely, quality issues such as incompleteness, credibility, and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. Also, the recorded location mostly relates to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments, in addition to the recorded location (which is generated by the mobile device's GPS or the mobile communication network). This study attempts to semantically extract CSD location information with the help of an ontological gazetteer and other available resources. Tweets from the 2011 Queensland floods and Ushahidi Crowd Map data were semantically analysed to extract location information, supported by the Queensland Gazetteer, converted into an ontological gazetteer, and a global gazetteer. Preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.

  1. Assessment of User Home Location Geoinference Methods

    SciTech Connect

    Harrison, Joshua J.; Bell, Eric B.; Corley, Courtney D.; Dowling, Chase P.; Cowell, Andrew J.

    2015-05-29

    This study presents an assessment of multiple approaches to determine the home and/or other important locations to a Twitter user. In this study, we present a unique approach to the problem of geotagged data sparsity in social media when performing geoinferencing tasks. Given the sparsity of explicitly geotagged Twitter data, the ability to perform accurate and reliable user geolocation from a limited number of geotagged posts has proven to be quite useful. In our survey, we have achieved accuracy rates of over 86% in matching Twitter user profile locations with their inferred home locations derived from geotagged posts.

  2. System and Method of Locating Lightning Strikes

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Starr, Stanley O. (Inventor)

    2002-01-01

    A system and method of determining the locations of lightning strikes is described. The system includes multiple receivers located around an area of interest, such as a space center or airport. Each receiver monitors both sound and electric fields. The detection of an electric field pulse and a sound wave are used to calculate an area around each receiver in which the lightning is detected. A processor is coupled to the receivers to accurately determine the location of the lightning strike. The processor can manipulate the receiver data to compensate for environmental variables such as wind, temperature, and humidity. Further, each receiver processor can discriminate between distant and local lightning strikes.
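    The ranging principle behind such a system can be sketched as follows: the electric field pulse arrives effectively instantaneously, while thunder travels at the temperature-dependent speed of sound, so each field-to-thunder delay defines a range circle around its receiver, and the strike lies at the circles' intersection. The helper names and the linearized least-squares intersection below are illustrative assumptions, not the patented method.

```python
import numpy as np

def sound_speed(temp_c):
    # standard dry-air approximation for the speed of sound, m/s
    return 331.3 + 0.606 * temp_c

def strike_position(receivers, delays, temp_c=20.0):
    """Locate a strike from per-receiver field-to-thunder delays (2D).

    Each delay gives a range circle around its receiver; subtracting the
    circle equation of receiver 0 from the others yields a linear system
    whose least-squares solution is the strike position.
    """
    r = sound_speed(temp_c) * np.asarray(delays, dtype=float)  # ranges, m
    p = np.asarray(receivers, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy
```

Temperature enters only through the speed of sound, which is one simple form of the environmental compensation the abstract mentions.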

  3. Memory for time and place contributes to enhanced confidence in memories for emotional events.

    PubMed

    Rimmele, Ulrike; Davachi, Lila; Phelps, Elizabeth A

    2012-08-01

    Emotion strengthens the subjective sense of remembering. However, these confidently remembered emotional memories have not been found to be more accurate for some types of contextual details. We investigated whether the subjective sense of recollecting negative stimuli is coupled with enhanced memory accuracy for three specific types of central contextual details using the remember/know paradigm and confidence ratings. Our results indicate that the subjective sense of remembering is indeed coupled with better recollection of spatial location and temporal context, but not higher memory accuracy for colored dots placed in the conceptual center of negative and neutral scenes. These findings show that the enhanced subjective recollective experience for negative stimuli reliably indicates objective recollection for spatial location and temporal context, but not for other types of details, whereas for neutral stimuli, the subjective sense of remembering is coupled with all the types of details assessed. Translating this finding to flashbulb memories, we found that, over time, more participants correctly remembered the location where they learned about the terrorist attacks on 9/11 than any other canonical feature. Likewise, participants' confidence was higher in their memory for location versus other canonical features. These findings indicate that the strong recollective experience of a negative event corresponds to an accurate memory for some kinds of contextual details but not for other kinds. This discrepancy provides further evidence that the subjective sense of remembering negative events is driven by a different mechanism than the subjective sense of remembering neutral events. PMID:22642353

  4. Ground truth events with source geometry in Eurasia and the Middle East

    NASA Astrophysics Data System (ADS)

    Shamsalsadati, S.; O'Donnell, J. P.; Nyblade, A.

    2015-12-01

    Accurate seismic source locations and their geometries are important for improving ground-based monitoring systems; however, few Ground Truth (GT) events are available in most regions within Eurasia and the Middle East. In this study, GT event locations were found for several earthquakes in the Middle East and Africa, including Saudi Arabia and Tanzania, with location errors of less than 5 km. These events were identified by analyzing local and near-regional waveforms for hundreds of earthquakes in these areas. A large number of earthquakes occurred beneath Harrat Lunayyir in northwest Saudi Arabia in 2009. From the 15 Lunayyir GT events recorded on three-component seismographs, 5 with Mw between 3.4 and 5.9 were used to successfully obtain source parameters. A moment tensor inversion was applied to filtered surface wave data to obtain the best-fitting source mechanism, moment magnitude, and depth. The uncertainty in the derived parameters was investigated by applying the same inversion to selected traces and frequency bands. The focal mechanisms for these earthquakes demonstrate normal faulting with a NW-SE trend for all of the events except one, which has a NE-SW trend. The shallow 3 km depth obtained for all the events is consistent with previous studies of dyke intrusion in the area. Spectral analysis of S waves and source parameters for these earthquakes is in progress to find static stress drop, corner frequency, and radiated energy.

  5. Memory for time: how people date events.

    PubMed

    Janssen, Steve M J; Chessa, Antonio G; Murre, Jaap M J

    2006-01-01

    The effect of different formats on the accuracy of dating news and the distribution of personal events was examined in four conditions. In the first, participants had to date events in the absolute time format (e.g., "July 2004"), and in the second, they had to date events in the relative time format (e.g., "3 weeks ago"). In the other conditions, they were asked to choose between the two formats. We found a small backward telescoping effect for recent news events and a large forward telescoping effect for remote events. Events dated in the absolute time format were more accurate than those dated in the relative time format. Furthermore, participants preferred to date news events with the relative time format and personal events with the absolute time format, as well as preferring to date remote events in the relative time format and recent events in the absolute time format. PMID:16686113

  6. Object locating system

    DOEpatents

    Novak, James L.; Petterson, Ben

    1998-06-09

    A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions.

  7. Reversible micromachining locator

    SciTech Connect

    Salzer, Leander J.; Foreman, Larry R.

    2002-01-01

    A locator with a part support is used to hold a part onto the kinematic mount of a tooling machine so that the part can be held in or replaced in exactly the same position relative to the cutting tool for machining different surfaces of the part or for performing different machining operations on the same or different surfaces of the part. The locator has disposed therein a plurality of steel balls placed at equidistant positions around the planar surface of the locator and the kinematic mount has a plurality of magnets which alternate with grooves which accommodate the portions of the steel balls projecting from the locator. The part support holds the part to be machined securely in place in the locator. The locator can be easily detached from the kinematic mount, turned over, and replaced onto the same kinematic mount or another kinematic mount on another tooling machine without removing the part to be machined from the locator so that there is no need to touch or reposition the part within the locator, thereby assuring exact replication of the position of the part in relation to the cutting tool on the tooling machine for each machining operation on the part.

  8. Acoustic emission source location

    NASA Astrophysics Data System (ADS)

    Promboon, Yajai

    The objective of the research program was the development of reliable source location techniques. The study comprised two phases. First, the research focused on development of source location methods for homogeneous plates. The specimens used in the program were steel railroad tank cars. Source location methods were developed and demonstrated for empty and water-filled tanks. The second phase of the research was an exploratory study of source location methods for fiber reinforced composites. Theoretical analysis and experimental measurement of wave propagation were carried out. These data provided the basis for development of a method using the intersection of the group velocity curves for the first three wave propagation modes. Simplex optimization was used to calculate the location of the source. Additional source location methods have been investigated and critically examined. Emphasis has been placed on evaluating different methods for determining the time of arrival of a wave. The behavior of waves in a water-filled tank was studied, and source location methods suitable for use in this situation have been examined through experiment and theory. Particular attention is paid to the problem caused by leaky Lamb waves. A preliminary study into the use of neural networks for source location in fiber reinforced composites was included in the research program. A preliminary neural network model and the results from training and testing data are reported.
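    The combination of arrival-time measurement and simplex optimization mentioned above can be sketched as follows, assuming a homogeneous plate with a single known wave speed; the function names and the use of SciPy's Nelder-Mead simplex are illustrative assumptions, not the program's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def locate_source(sensors, arrival_times, velocity):
    """Locate an acoustic-emission source on a plate from arrival times.

    Minimises the misfit between measured and predicted arrival-time
    differences (relative to sensor 0) with the Nelder-Mead simplex
    method, starting from the centroid of the sensor array.
    """
    sensors = np.asarray(sensors, dtype=float)
    dt_meas = np.asarray(arrival_times, dtype=float) - arrival_times[0]

    def misfit(xy):
        dist = np.linalg.norm(sensors - xy, axis=1)
        dt_pred = (dist - dist[0]) / velocity      # predicted differences
        return np.sum((dt_pred - dt_meas) ** 2)

    start = sensors.mean(axis=0)                   # initial simplex vertex
    return minimize(misfit, start, method='Nelder-Mead').x
```

Working with arrival-time differences removes the unknown origin time, which is why only relative times are needed.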

  9. Reconstructing Spatial Distributions from Anonymized Locations

    SciTech Connect

    Horey, James L; Forrest, Stephanie; Groat, Michael

    2012-01-01

    Devices such as mobile phones, tablets, and sensors are often equipped with GPS that accurately report a person's location. Combined with wireless communication, these devices enable a wide range of new social tools and applications. These same qualities, however, leave location-aware applications vulnerable to privacy violations. This paper introduces the Negative Quad Tree, a privacy protection method for location aware applications. The method is broadly applicable to applications that use spatial density information, such as social applications that measure the popularity of social venues. The method employs a simple anonymization algorithm running on mobile devices, and a more complex reconstruction algorithm on a central server. This strategy is well suited to low-powered mobile devices. The paper analyzes the accuracy of the reconstruction method in a variety of simulated and real-world settings and demonstrates that the method is accurate enough to be used in many real-world scenarios.

  10. Event structure and cognitive control.

    PubMed

    Reimer, Jason F; Radvansky, Gabriel A; Lorsbach, Thomas C; Armendarez, Joseph J

    2015-09-01

    Recently, a great deal of research has demonstrated that although everyday experience is continuous in nature, it is parsed into separate events. The aim of the present study was to examine whether event structure can influence the effectiveness of cognitive control. Across 5 experiments we varied the structure of events within the AX-CPT by shifting the spatial location of cues and probes on a computer screen. When location shifts were present, a pattern of AX-CPT performance consistent with enhanced cognitive control was found. To test whether the location shift effects were caused by the presence of event boundaries per se, other aspects of the AX-CPT were manipulated, such as the color of cues and probes and the inclusion of a distractor task during the cue-probe delay. Changes in cognitive control were not found under these conditions, suggesting that the location shift effects were specifically related to the formation of separate event models. Together, these results can be accounted for by the Event Horizon Model and a representation-based theory of cognitive control, and suggest that cognitive control can be influenced by the surrounding environmental structure. PMID:25603168

  11. Sensors Locate Radio Interference

    NASA Technical Reports Server (NTRS)

    2009-01-01

    After receiving a NASA Small Business Innovation Research (SBIR) contract from Kennedy Space Center, Soneticom Inc., based in West Melbourne, Florida, created algorithms for time difference of arrival and radio interferometry, which it used in its Lynx Location System (LLS) to locate electromagnetic interference that can disrupt radio communications. Soneticom is collaborating with the Federal Aviation Administration (FAA) to install and test the LLS at its field test center in New Jersey in preparation for deploying the LLS at commercial airports. The software collects data from each sensor in order to compute the location of the interfering emitter.

  12. Comparison of mid-oceanic earthquake epicentral differences of travel time, centroid locations, and those determined by autonomous underwater hydrophone arrays

    NASA Astrophysics Data System (ADS)

    Pan, Jianfeng; Dziewonski, Adam M.

    2005-07-01

    Mid-oceanic interplate earthquakes are difficult to locate accurately because they normally occur far from land-based seismic stations. Water-borne T waves recorded by autonomous underwater hydrophone (AUH) arrays capture an order of magnitude more of the low-level regional seismicity along the north Mid-Atlantic Ridge, and with higher accuracy, than the International Seismic Centre (ISC) catalog. Even though the physical meaning of AUH locations is still not well known, their small location errors are important for better constraining mid-oceanic earthquakes. This study compares such AUH locations with those in the ISC and Harvard centroid moment tensor (CMT) catalogs, and with locations relocated using high-resolution bathymetry and teleseismic P phases. AUH locations are used as the reference against which the teleseismically determined locations are compared. For large earthquakes with known focal mechanisms, we find that the relocated positions agree with the AUH ones better than the ISC positions do. We also note that the centroid vectors from relocated epicenters are usually larger than the AUH centroid vectors. The relocated epicenters and AUH locations lie in similar azimuthal directions relative to the associated CMT epicenters. The larger relocated and AUH centroid vectors (larger than the error ellipses of the AUH, CMT, and relocated solutions combined) might be explained by the fault rupture process. For smaller events, the relocated location confidence ellipses are usually large enough to cover the AUH locations and their error ellipses. Overall, the highly accurate AUH locations can be used to confirm mid-oceanic earthquake hypocenters and seismicity characteristics, and for detailed studies of the low-level seismicity associated with plate motions.

  13. Lunar Impact Flash Locations

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Suggs, R. M.; Kupferschmidt, L.; Feldman, J.

    2015-01-01

    A bright impact flash detected by the NASA Lunar Impact Monitoring Program in March 2013 brought into focus the importance of determining the impact flash location. A process for locating the impact flash, and presumably its associated crater, was developed using commercially available software tools. The process was successfully applied to the March 2013 impact flash and put into production on an additional 300 impact flashes. The goal here is to describe the geolocation technique that was developed.

  14. Infrared horizon locator

    NASA Technical Reports Server (NTRS)

    Jalink, A., Jr. (Inventor)

    1973-01-01

    A precise method and apparatus for locating the earth's infrared horizon from space that is independent of season and latitude is described. First and second integrations of the earth's radiance profile are made from space to earth with the second delayed with respect to the first. The second integration is multiplied by a predetermined constant R and then compared with the first integration. When the two are equal the horizon is located.
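    A minimal sketch of the dual-integration scheme described above, assuming a sampled radiance profile ordered from space toward the earth; the function name, the discrete cumulative sums standing in for the integrations, and the equality test are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def locate_horizon(radiance, delay, R):
    """Locate the horizon index in a space-to-earth radiance profile.

    Two running integrations of the profile are formed; the second is
    delayed by `delay` samples and scaled by the predetermined constant R.
    The horizon is declared at the first sample (once signal is present)
    where the scaled, delayed integral catches up with the undelayed one.
    """
    first = np.cumsum(radiance)             # first (undelayed) integration
    second = np.zeros_like(first)
    second[delay:] = first[:-delay]         # second (delayed) integration
    valid = (first > 0) & (R * second >= first)
    idx = np.nonzero(valid)[0]
    return int(idx[0]) if idx.size else None
```

Because both integrals scale together with the overall radiance level, the crossing point is insensitive to the absolute brightness of the limb, which is one way to read the claimed season and latitude independence.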

  15. Object locating system

    DOEpatents

    Novak, J.L.; Petterson, B.

    1998-06-09

    A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions. 12 figs.

  16. Regional Seismic Travel-Time Prediction, Uncertainty, and Location Improvement in Western Eurasia

    NASA Astrophysics Data System (ADS)

    Flanagan, M. P.; Myers, S. C.

    2004-12-01

    sample WENA1.0 and therefore provide an unbiased assessment of location performance. A statistically significant sample is achieved by generating 500 location realizations based on 5 events with location accuracy between 1 km and 5 km. Each realization is a randomly selected event with location determined by randomly selecting 5 stations from the available network. In 340 cases (68% of the instances), locations are improved, and average mislocation is reduced from 31 km to 26 km. Preliminary tests of the uncertainty estimates suggest that our uncertainty model produces location uncertainty ellipses that are representative of location accuracy. These results highlight the importance of accurate GT datasets in assessing regional travel-time models and demonstrate that an a priori 3D model can markedly improve our ability to locate small magnitude events in a regional monitoring context. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-CONF-206386.

  17. Automatic Event Bulletin Built By Waveform Cross Correlation Using The Global Grid Of Master Events With Adjustable Templates

    NASA Astrophysics Data System (ADS)

    Kitov, Ivan; Bobrov, Dmitry; Rozhkov, Mikhail

    2016-04-01

    We built an automatic seismic event bulletin for the whole globe using waveform cross correlation at array stations of the International Monitoring System (IMS). To detect signals and associate them into robust event hypotheses in an automatic pipeline, we created a global grid (GG) of master events with a diversity of waveform templates. For the Comprehensive Nuclear-Test-Ban Treaty (CTBT), the GG provides an almost uniform distribution of monitoring capabilities and adjustable templates. For seismic areas, we select high-quality earthquake signals recorded at IMS stations as templates. For test sites, signals from underground nuclear explosions (UNEs) are the best templates. Global detection and association with the cross correlation technique, for research and monitoring purposes, demands templates from master events outside the regions of natural seismicity and test sites. We therefore populate aseismic areas with masters having synthetic templates calculated for predefined sets of IMS array stations. We applied various technologies to synthesize the most representative signals for cross correlation and tested them against the Reviewed Event Bulletin (REB) issued by the International Data Centre (IDC). We first tested these global sets of master events and synthetic templates using IMS seismic data for February 13, 2013, and demonstrated excellent detection and location capability. Then, using the REB and cross correlation bulletins (XSELs), experienced IDC analysts compared the relative performance of the various templates and built reliable sets of events and detections for machine learning. In this study, we carefully compile global training sets for machine learning in order to establish statistical decision lines between reliable and unreliable event hypotheses, then apply classification procedures to the intermediate automatic cross correlation bulletin based on the GG, and compile the final XSEL, which is more accurate and has a lower detection threshold than the REB.
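    The matched-filter core of such a cross-correlation pipeline can be sketched as follows; this is a generic normalized cross-correlation detector under assumed names and thresholds, not the IDC's production code, and the operational pipeline additionally associates detections across stations into event hypotheses.

```python
import numpy as np

def detect_by_template(trace, template, threshold=0.7):
    """Slide a master-event template along a continuous trace.

    Returns the sample offsets whose normalised cross-correlation with
    the template exceeds `threshold`, together with the full correlation
    series.  Both the template and each window are demeaned so that the
    coefficient lies in [-1, 1].
    """
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        w = trace[i:i + n]
        w = w - w.mean()
        denom = np.linalg.norm(w) * t_norm
        cc[i] = np.dot(w, t) / denom if denom > 0 else 0.0
    return np.nonzero(cc >= threshold)[0], cc
```

An exact copy of the template buried in the trace produces a coefficient of 1.0 at its offset, which is why correlation detectors can reach well below the conventional detection threshold for repeating sources.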

  18. Scanning beacon locator system: A concept

    NASA Technical Reports Server (NTRS)

    Shores, P. W.

    1973-01-01

    If aircraft and ships are equipped with beacons capable of communicating with satellites, rescue efforts may speed up significantly. In event of disaster, beacons can transmit distress message to satellite which, in turn, will relay message to nearest rescue center, indicating distress location.

  19. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was chosen so that the network can observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between cameras and with the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom version 9.1 high-speed camera, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed agreement within 9 meters between the accurately surveyed GPS position of a target object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
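    The ray-intersection step of visual triangulation can be sketched for two cameras as follows, assuming each camera yields an azimuth bearing to the flash; the function name and conventions (x east, y north, azimuth in degrees clockwise from north) are illustrative assumptions, not the paper's exact formulation.

```python
import math

def triangulate(p1, az1, p2, az2):
    """Intersect two bearing rays observed from camera positions p1, p2.

    Each azimuth is converted to a unit direction vector; the system
    p1 + t1*d1 = p2 + t2*d2 is then solved for t1 with Cramer's rule.
    """
    d1 = (math.sin(math.radians(az1)), math.cos(math.radians(az1)))
    d2 = (math.sin(math.radians(az2)), math.cos(math.radians(az2)))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("parallel bearings: no unique intersection")
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

With more than two cameras the per-pair intersections can be averaged, which also gives a rough internal consistency check on the solution.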

  20. Acoustic Location of Lightning Using Interferometric Techniques

    NASA Astrophysics Data System (ADS)

    Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.

    2013-12-01

    Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations which compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize three outer microphones surrounding a fourth microphone located at the center. The location is computed by band-passing the signals to remove noise and then cross-correlating the three outer microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio frequency interferometers. Both acoustic location systems and radio frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas, and both then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic-frequency and audio-frequency acoustic emissions from lightning. Preliminary results show that
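    The cross-correlation step described above can be sketched as follows: the delay between a microphone pair is taken from the peak of their cross-correlation and converted to a direction of arrival under a plane-wave assumption. Names, parameters, and the single-pair geometry are illustrative, not the deployed array processing.

```python
import numpy as np

def arrival_delay(ref, other, fs):
    """Time delay (s) of `other` relative to `ref`, from the peak of
    their cross-correlation; positive means `other` lags `ref`."""
    cc = np.correlate(other, ref, mode='full')
    lag = np.argmax(cc) - (len(ref) - 1)       # lag in samples
    return lag / fs

def bearing_from_delay(delay, baseline, c=343.0):
    """Direction of arrival (degrees from broadside) for one microphone
    pair of separation `baseline`, assuming a plane wave of speed c."""
    s = np.clip(c * delay / baseline, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```

Two such bearings from non-parallel baselines fix the source direction, and the slow propagation speed of sound is what makes even a compact array yield usable delays.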

  1. The Challenges of On-Campus Recruitment Events

    ERIC Educational Resources Information Center

    McCoy, Amy

    2012-01-01

    On-campus admissions events are the secret weapon that colleges and universities use to convince students to apply and enroll. On-campus events vary depending on the size, location, and type of institution; they include campus visitations, open houses, preview days, scholarship events, admitted student events, and summer yield events. These events…

  2. Marine cable location system

    SciTech Connect

    Zachariadis, R.G.

    1984-05-01

    An acoustic positioning system locates a marine cable at an exploration site, such cable employing a plurality of hydrophones at spaced-apart positions along the cable. A marine vessel measures water depth to the cable as the vessel passes over the cable and interrogates the hydrophones with sonar pulses along a slant range as the vessel travels in a parallel and horizontally offset path to the cable. The location of the hydrophones is determined from the recordings of water depth and slant range.
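
    The hydrophone geometry described above reduces to a right triangle: the horizontal offset of the cable from the vessel's track follows from the slant range and the water depth. A minimal sketch with made-up numbers:

```python
import math

def hydrophone_offset(slant_range_m, water_depth_m):
    """Horizontal offset of a hydrophone from the vessel's track,
    given the slant range and the water depth over the cable.
    Pure right-triangle geometry; the values below are illustrative."""
    if slant_range_m < water_depth_m:
        raise ValueError("slant range cannot be shorter than the depth")
    return math.sqrt(slant_range_m**2 - water_depth_m**2)

# e.g. a 500 m slant range over a cable at 300 m depth -> 400 m offset
offset = hydrophone_offset(500.0, 300.0)
```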

  3. Cable fault locator research

    NASA Astrophysics Data System (ADS)

    Cole, C. A.; Honey, S. K.; Petro, J. P.; Phillips, A. C.

    1982-07-01

    Cable fault location and the construction of four field test units are discussed. Swept frequency sounding of mine cables with RF signals was the technique most thoroughly investigated. The swept frequency technique is supplemented with a form of moving target indication to provide a method for locating the position of a technician along a cable and relative to a suspected fault. Separate, more limited investigations involved high voltage time domain reflectometry and acoustical probing of mine cables. Particular areas of research included microprocessor-based control of the swept frequency system, a microprocessor-based fast Fourier transform for spectral analysis, and RF synthesizers.

  4. RFI emitter location techniques

    NASA Technical Reports Server (NTRS)

    Rao, B. L. J.

    1973-01-01

    The possibility is discussed of using Doppler techniques for determining the location of ground based emitters causing radio frequency interference with low orbiting satellites. An error analysis indicates that it is possible to find the emitter location within an error range of 2 n.mi. The parameters which determine the required satellite receiver characteristics are discussed briefly, along with the non-real time signal processing which may be used in obtaining the Doppler curve. Finally, the required characteristics of the satellite antenna are analyzed.

  5. Events diary

    NASA Astrophysics Data System (ADS)

    2000-01-01

    as Imperial College, the Royal Albert Hall, the Royal College of Art, the Natural History and Science Museums and the Royal Geographical Society. Under the heading `Shaping the future together' BA2000 will explore science, engineering and technology in their wider cultural context. Further information about this event on 6 - 12 September may be obtained from Sandra Koura, BA2000 Festival Manager, British Association for the Advancement of Science, 23 Savile Row, London W1X 2NB (tel: 0171 973 3075, e-mail: sandra.koura@britassoc.org.uk ). Details of the creating SPARKS events may be obtained from creating.sparks@britassoc.org.uk or from the website www.britassoc.org.uk . Other events 3 - 7 July, Porto Alegre, Brazil VII Interamerican conference on physics education: The preparation of physicists and physics teachers in contemporary society. Info: IACPE7@if.ufrgs.br or cabbat1.cnea.gov.ar/iacpe/iacpei.htm 27 August - 1 September, Barcelona, Spain GIREP conference: Physics teacher education beyond 2000. Info: www.blues.uab.es/phyteb/index.html

  6. Simulation of heavy rainfall events over Indian region: a benchmark skill with a GCM

    NASA Astrophysics Data System (ADS)

    Goswami, Prashant; Kantha Rao, B.

    2015-10-01

    Extreme rainfall events (ERE) contribute a significant component of the Indian summer monsoon rainfall. Thus an important requirement for regional climate simulations is to attain desirable quality and reliability in simulating the extreme rainfall events. While global circulation models (GCMs) with coarse resolution are not preferred for simulation of extreme events, it is expected that the global domain in a GCM would allow better representation of scale interactions, resulting in adequate skill in simulating localized events in spite of lower resolution. At the same time, a GCM with skill in simulation of extreme events will provide a more reliable tool for seamless prediction. The present work provides an assessment of a GCM for simulating 40 ERE that occurred over India during 1998-2013. It is found that, expectedly, the GCM forecasts underestimate the observed (TRMM) rainfall in most cases, but not always. Somewhat surprisingly, the forecasts of location are quite accurate in spite of low resolution (~50 km). An interesting result is that the highest skill of the forecasts is realized at 48 h lead rather than at 24 or 96 h lead. Diagnostics of dynamical fields like convergence show that the forecasts can capture contrasting features on pre-event, event and post-event days. The forecast configuration used is similar to one that has been used for long-range monsoon forecasting and tropical cyclones in earlier studies; the present results on ERE forecasting, therefore, provide an indication of the potential application of the model for seamless prediction.

  7. Particle impact location detector

    NASA Technical Reports Server (NTRS)

    Auer, S. O.

    1974-01-01

    Detector includes delay lines connected to each detector surface strip. When several particles strike different strips simultaneously, pulses generated by each strip are time delayed by certain intervals. Delay time for each strip is known. By observing time delay in pulse, it is possible to locate strip that is struck by particle.
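
    The lookup from measured pulse delay to strip index can be sketched in a few lines; the per-strip delay increment used here is hypothetical:

```python
def strip_index(observed_delay_us, per_strip_delay_us=0.5):
    """Recover which strip was struck from the measured pulse delay,
    given a known (here invented, 0.5 us) delay increment per strip."""
    return round(observed_delay_us / per_strip_delay_us)

hit = strip_index(3.5)   # a 3.5 us delay maps to strip 7
```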

  8. LOCATING AREAS OF CONCERN

    EPA Science Inventory

    A simple method to locate changes in vegetation cover, which can be used to identify areas under stress. The method only requires inexpensive NDVI data. The use of remotely sensed data is far more cost-effective than field studies and can be performed more quickly. Local knowledg...

  9. Location of Spirit's Home

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image shows where Earth would set on the martian horizon from the perspective of the Mars Exploration Rover Spirit if it were facing northwest atop its lander at Gusev Crater. Earth cannot be seen in this image, but engineers have mapped its location. This image mosaic was taken by the hazard-identification camera onboard Spirit.

  10. Precisely locating the Klamath Falls, Oregon, earthquakes

    USGS Publications Warehouse

    Qamar, A.; Meagher, K.L.

    1993-01-01

    In this article we present preliminary results of a close-in, instrumental study of the Klamath Falls earthquake sequence, carried out as a cooperative effort by scientists from the U.S. Geological Survey (USGS) and universities in Washington, Oregon, and California. In addition to obtaining much more accurate earthquake locations, this study has improved our understanding of the relationship between seismicity and mapped faults in the region.

  11. Substance Abuse Treatment Facility Locator

    MedlinePlus


  12. Geophysical event

    NASA Astrophysics Data System (ADS)

    Pagan Volcano, Mariana Islands, Western Pacific Ocean (18.13°N, 145.80°E). All times are local (GMT+10 h). A strong explosive eruption from North Pagan, the larger of the two stratovolcanos that form the Pagan volcano complex, began on May 15. While reporting strong felt seismicity on the island, radio operator Pedro Castro suddenly announced at 0915 that the volcano was erupting. Communication was then cut off. An infrared image returned from the Japanese geostationary weather satellite at 1000 showed a very bright circular cloud about 80 km in diameter over the volcano. The cloud spread SE at about 70 km/h, and by 1600 its maximum height was estimated at 13.5 km from satellite imagery. Weakening of activity was evident on the image returned at 1900, and on the next image, at 2200, feeding of the eruption cloud had stopped, with the proximal end of the cloud located about 120 km SE of the volcano. No additional activity has been detected on the satellite images, but by 0400 the next morning, remnants of the plume had reached 10°N and 155°E.

  13. Event reconstruction for line source releases

    SciTech Connect

    Zajic, Dragan; Brown, Michael J; Williams, Michael D

    2010-01-01

    The goal of source inversion, also called event reconstruction, is the calculation of source parameters from information obtained by a network of concentration (or dosage) and meteorological sensors. Source parameters include source location and strength, but in certain cases there could be more than one source, so the inversion procedure may also need to determine the number of sources. In the case of pollutant emission events of limited duration, as for example during accidents or intentional releases, it is of great use to estimate the starting and ending times of the event. This kind of research is very useful for estimating the source parameters of industrial pollutants, since it provides important information for regulation purposes. It also provides information to first responders in the case of accidental pollutant releases, or for homeland security needs when a chemical, biological or radiological agent is deliberately released. Development of faster and more accurate algorithms is very important, since it could help reduce the populace's exposure to dangerous airborne contaminants, plan evacuation routes, and help assess the magnitude of cleanup. During the last decade, a large number of research papers in the area of source inversion were published using many different approaches. Most of the source inversion work published to date applies to point source releases. The forward dispersion models used range from fast Gaussian plume and puff codes, which enable almost instantaneous calculation of concentrations and dosages, to Computational Fluid Dynamics (CFD) codes, which provide more detailed and precise calculations but are expensive with respect to time and computer resources. Optimization methods such as simulated annealing and genetic algorithms are often used.
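
    As a toy illustration of point-source inversion with a fast forward model, the sketch below pairs a drastically simplified Gaussian plume with a brute-force location scan. The papers surveyed typically use simulated annealing or genetic algorithms instead of an exhaustive scan, and all sensor positions and source parameters here are invented:

```python
import numpy as np

def plume(x_src, y_src, q, xs, ys, u=2.0):
    """Toy steady-state Gaussian plume with wind along +x: concentration
    at sensors (xs, ys) from a source of strength q at (x_src, y_src).
    Highly simplified: crosswind spread grows linearly downwind, no
    stability classes or vertical structure."""
    dx = xs - x_src
    dy = ys - y_src
    sig = 0.1 * np.maximum(dx, 1e-6)   # plume spread grows downwind
    return np.where(dx > 0,
                    q / (2 * np.pi * u * sig**2)
                    * np.exp(-dy**2 / (2 * sig**2)),
                    0.0)

# sensor network and a synthetic "true" release, for illustration only
xs = np.array([50.0, 80.0, 120.0, 150.0])
ys = np.array([-20.0, 10.0, -5.0, 25.0])
obs = plume(10.0, 0.0, 100.0, xs, ys)

# brute-force inversion: scan candidate source locations, keep best misfit
best = None
for x0 in np.arange(0.0, 40.0, 2.0):
    for y0 in np.arange(-20.0, 21.0, 2.0):
        misfit = np.sum((plume(x0, y0, 100.0, xs, ys) - obs) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, x0, y0)
```

    A stochastic optimizer replaces the two nested loops when the parameter space (location, strength, start and end times) is too large to scan exhaustively.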

  14. Dipole Well Location

    1998-08-03

    The problem here is to model the three-dimensional response of an electromagnetic logging tool to a practical situation which is often encountered in oil and gas exploration. The DWELL code provides the electromagnetic fields on the axis of a borehole due to either an electric or a magnetic dipole located on the same axis. The borehole is cylindrical, and is located within a stratified formation in which the bedding planes are not horizontal. The angle between the normal to the bedding planes and the axis of the borehole may assume any value, or in other words, the borehole axis may be tilted with respect to the bedding planes. Additionally, all of the formation layers may have invasive zones of drilling mud. The operating frequency of the source dipole(s) extends from a few Hertz to hundreds of Megahertz.

  15. Electric current locator

    DOEpatents

    King, Paul E.; Woodside, Charles Rigel

    2012-02-07

    The disclosure herein provides an apparatus for location of a quantity of current vectors in an electrical device, where the current vector has a known direction and a known relative magnitude to an input current supplied to the electrical device. Mathematical constants used in Biot-Savart superposition equations are determined for the electrical device, the orientation of the apparatus, and relative magnitude of the current vector and the input current, and the apparatus utilizes magnetic field sensors oriented to a sensing plane to provide current vector location based on the solution of the Biot-Savart superposition equations. Description of required orientations between the apparatus and the electrical device are disclosed and various methods of determining the mathematical constants are presented.
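
    For the simplest geometry, a long straight conductor, the Biot-Savart law reduces to |B| = mu0 * I / (2 * pi * r), which can be inverted for the sensor-to-conductor distance when the input current is known. A minimal sketch of that inversion (not the patented apparatus, which solves superposition equations for multiple current vectors and sensor orientations):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def field_of_line_current(i_amps, r_m):
    """|B| at distance r from a long straight current (Biot-Savart result)."""
    return MU0 * i_amps / (2 * math.pi * r_m)

def distance_from_field(i_amps, b_tesla):
    """Invert the same relation: the conductor's distance from a sensor,
    given the known current magnitude and the measured field."""
    return MU0 * i_amps / (2 * math.pi * b_tesla)

b = field_of_line_current(100.0, 0.05)   # 100 A observed 5 cm away
r = distance_from_field(100.0, b)        # recovers the 5 cm distance
```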

  16. Dipole Well Location

    SciTech Connect

    Newman, Gregory

    1998-08-03

    The problem here is to model the three-dimensional response of an electromagnetic logging tool to a practical situation which is often encountered in oil and gas exploration. The DWELL code provides the electromagnetic fields on the axis of a borehole due to either an electric or a magnetic dipole located on the same axis. The borehole is cylindrical, and is located within a stratified formation in which the bedding planes are not horizontal. The angle between the normal to the bedding planes and the axis of the borehole may assume any value, or in other words, the borehole axis may be tilted with respect to the bedding planes. Additionally, all of the formation layers may have invasive zones of drilling mud. The operating frequency of the source dipole(s) extends from a few Hertz to hundreds of Megahertz.

  17. Marine cable location system

    SciTech Connect

    Ottsen, H.; Barker, Th.

    1985-04-23

    An acoustic positioning system for locating a marine cable at an exploration site employs a plurality of acoustic transponders, each having a characteristic frequency, at spaced-apart positions along the cable. A marine vessel measures the depth to the transponders as the vessel passes over the cable and measures the slant range from the vessel to each of the acoustic transponders as the vessel travels in a parallel and horizontally offset path to the cable.

  18. Magnetic Location Indicator

    NASA Technical Reports Server (NTRS)

    Stegman, Thomas W.

    1992-01-01

    Ferrofluidic device indicates point of highest magnetic-flux density in workspace. Consists of bubble of ferrofluid in immiscible liquid carrier in clear plastic case. Used in flat block or tube. Axes of centering circle on flat-block version used to mark location of maximum flux density when bubble in circle. Device used to find point on wall corresponding to known point on opposite side of wall.

  19. Ammonia Leak Locator Study

    NASA Technical Reports Server (NTRS)

    Dodge, Franklin T.; Wuest, Martin P.; Deffenbaugh, Danny M.

    1995-01-01

    The thermal control system of International Space Station Alpha will use liquid ammonia as the heat exchange fluid. It is expected that small leaks (of the order perhaps of one pound of ammonia per day) may develop in the lines transporting the ammonia to the various facilities as well as in the heat exchange equipment. Such leaks must be detected and located before the supply of ammonia becomes critically low. For that reason, NASA-JSC has a program underway to evaluate instruments that can detect and locate ultra-small concentrations of ammonia in a high vacuum environment. To be useful, the instrument must be portable and small enough that an astronaut can easily handle it during extravehicular activity. An additional complication in the design of the instrument is that the environment immediately surrounding ISSA will contain small concentrations of many other gases from venting of onboard experiments as well as from other kinds of leaks. These other vapors include water, cabin air, CO2, CO, argon, N2, and ethylene glycol. Altogether, this local environment might have a pressure of the order of 10^-7 to 10^-6 torr. Southwest Research Institute (SwRI) was contracted by NASA-JSC to provide support to NASA-JSC and its prime contractors in evaluating ammonia-location instruments and to make a preliminary trade study of the advantages and limitations of potential instruments. The present effort builds upon an earlier SwRI study to evaluate ammonia leak detection instruments [Jolly and Deffenbaugh]. The objectives of the present effort include: (1) Estimate the characteristics of representative ammonia leaks; (2) Evaluate the baseline instrument in the light of the estimated ammonia leak characteristics; (3) Propose alternative instrument concepts; and (4) Conduct a trade study of the proposed alternative concepts and recommend promising instruments. The baseline leak-location instrument selected by NASA-JSC was an ion gauge.

  20. Machine tool locator

    DOEpatents

    Hanlon, John A.; Gill, Timothy J.

    2001-01-01

    Machine tools can be accurately measured and positioned on manufacturing machines within very small tolerances by use of an autocollimator on a 3-axis mount on a manufacturing machine and positioned so as to focus on a reference tooling ball or a machine tool, a digital camera connected to the viewing end of the autocollimator, and a marker and measure generator for receiving digital images from the camera, then displaying or measuring distances between the projection reticle and the reference reticle on the monitoring screen, and relating the distances to the actual position of the autocollimator relative to the reference tooling ball. The images and measurements are used to set the position of the machine tool and to measure the size and shape of the machine tool tip, and to examine cutting edge wear.

  1. A rapid, economical, and accurate method for determining the physical risk of storm marine inundations using sedimentary evidence

    NASA Astrophysics Data System (ADS)

    Nott, Jonathan F.

    2015-04-01

    The majority of physical risk assessments from storm surge inundations are derived from synthetic time series generated from short climate records, which can often result in inaccuracies and are time-consuming and expensive to develop. A new method is presented here for the wet tropics region of northeast Australia. It uses lidar-generated topographic cross sections of beach ridge plains, which have been demonstrated to be deposited by marine inundations generated by tropical cyclones. Extreme value theory statistics are applied to data derived from the cross sections to generate return period plots for a given location. The results suggest that previous methods to estimate return periods using synthetic data sets have underestimated the magnitude/frequency relationship by at least an order of magnitude. The new method promises to be a more rapid, economical, and accurate assessment of the physical risk of these events.
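
    The return-period step can be illustrated with an empirical (Weibull) plotting position applied to ranked ridge-derived inundation heights. This is a simplified stand-in for the paper's extreme value theory analysis, and the heights and record length below are hypothetical:

```python
import numpy as np

def return_periods(heights_m, record_years):
    """Empirical return periods for ranked inundation heights using the
    Weibull plotting position T_i = (record_years + 1) / rank_i."""
    h = np.sort(np.asarray(heights_m))[::-1]      # largest event first
    ranks = np.arange(1, len(h) + 1)
    return h, (record_years + 1) / ranks

heights = [2.1, 3.4, 2.8, 4.9, 2.5, 3.0]          # hypothetical ridge heights
h_sorted, T = return_periods(heights, 5000)       # ~5000-year ridge record
```

    Fitting an extreme value distribution to these empirical points then allows extrapolation beyond the largest observed event.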

  2. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  3. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  4. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  5. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  6. Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany

    NASA Astrophysics Data System (ADS)

    Olbert, Kai; Küperkoch, Ludger; Meier, Thomas

    2016-04-01

    Induced events can pose a risk to local infrastructure and therefore need to be understood and evaluated. They also represent a chance to learn more about the reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project, an automatic off-line event detection and phase onset time determination algorithm is applied to induced seismic events in the geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on a cross-correlation of continuous data taken from the local seismic network with master events. It distinguishes events between different reservoirs as well as within the individual reservoirs. Furthermore, it provides location and magnitude estimates. Data from 2007 to 2014 are processed and compared with other detections using the SeisComp3 cross-correlation detector and a STA/LTA detector. The detected events are analyzed for spatial and temporal clustering. Furthermore, the number of events is compared to the existing detection lists. The automatic phase picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times which can be used for localization and tomography studies. 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show high precision, with mean residuals relative to manual phase picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events to evaluate their influence on location precision.

  7. Developmental aspects of memory for spatial location.

    PubMed

    Ellis, N R; Katz, E; Williams, J E

    1987-12-01

    The purpose was to show whether or not the encoding of location met criteria defining an automatic process (L. Hasher & R. T. Zacks, 1979, Journal of Experimental Psychology: General, 108, 356-388; 1984, American Psychologist, 39, 1372-1388). Among other criteria, automatic processes are not expected to show developmental changes beyond an early age, to be unrelated to intelligence level, and to be unaffected by instructions. In the first experiment preschool through sixth-grade children were compared on a 40-picturebook task following incidental (remember the names of pictures) or intentional (remember location) instruction. Subjects viewed and named pictures in sets of four, arranged in quadrants in the opened book, and then attempted to recall names of the objects pictured and to relocate pictures on blank pages. In the second experiment, second and sixth graders, college students, elderly persons, and mentally retarded persons were compared on a 60-picturebook task following either incidental or semantic incidental instructions (give the function of objects pictured). Memory for location was invariant across age groups and intelligence level. The only exception was that 3 and 4 year olds were more accurate following intentional instructions. Otherwise there were no differences between intentional and incidental instructions. Semantic instructions resulted in slightly more accurate locations. The results were interpreted as supportive of the Hasher and Zacks' automaticity hypothesis. PMID:3694123

  8. Time-reversal imaging techniques applied to tremor waveforms near Cholame, California to locate tectonic tremor

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2012-12-01

    Thurber et al. (2006) interpolated to a grid spacing of 50 m. Such grid spacing corresponds to frequencies of up to 8 Hz, which is suitable to calculate the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault, as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of signal-to-noise ratio, number of stations used, and the uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show accurate location results to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model, and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those by Sumy and others.

  9. Sonar Locator Systems

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An underwater locator device called a Pinger is attached to an airplane's flight recorder for recovery in case of a crash. Burnett Electronics Pinger Model 512 resulted from a Burnett Electronics Laboratory, Inc./Langley Research Center contract for development of a search system for underwater mines. The Pinger's battery-powered transmitter is activated when immersed in water, and sends multidirectional signals for up to 500 hours. When a surface receiver picks up the signal, a diver can retrieve the pinger and the attached airplane flight recorder. Other pingers are used to track whales, mark underwater discoveries and assist oil drilling vessels.

  10. Location of Planet X

    SciTech Connect

    Harrington, R.S.

    1988-10-01

    Observed positions of Uranus and Neptune along with residuals in right ascension and declination are used to constrain the location of a postulated tenth planet. The residuals are converted into residuals in ecliptic longitude and latitude. The results are then combined into seasonal normal points, producing average geocentric residuals spaced slightly more than a year apart that are assumed to represent the equivalent heliocentric average residuals for the observed oppositions. Such a planet is found to most likely reside in the region of Scorpius, with considerably less likelihood that it is in Taurus. 8 references.

  11. [Electronic Apex Locator as a dental instrument].

    PubMed

    Lin, S; Winocur-Arias, O; Slutzky-Goldberg, I

    2009-04-01

    Electronic Apex Locators (EAL) have become widely used in the last decade. The first apex locator was introduced in 1962, based on a constant electrical resistance (6.5 kΩ) between the oral mucosa and the periodontal ligament. The first and second generations of EAL were inaccurate and could not detect the apex in the presence of conducting fluids. The third generation solved this problem by using two alternating frequencies and calculating the impedance between them. This provided reliable and accurate results in dry canals, or in the presence of blood, electrolytes or other fluid in the root canals, when the pulp was necrotic or when there was a perforation along the root. The Root ZX and Apit (Endex) are the most documented devices. The new fourth generation of apex locators is a diverse group: some use multifrequency currents, others use a "lookup matrix" rather than calculate the readings. Several of the newer EALs are smaller, and others connect to computers. PMID:20162984

  12. METHOD OF LOCATING GROUNDS

    DOEpatents

    Macleish, K.G.

    1958-02-11

    This patent presents a method for locating a ground in a d-c circuit having a number of parallel branches connected across a d-c source or generator. The complete method comprises the steps of locating the ground with reference to the midpoint of the parallel branches by connecting a potentiometer across the terminals of the circuit and connecting the slider of the potentiometer to ground through a current-indicating instrument, adjusting the slider to right or left of the midpoint so as to cause the instrument to indicate zero, connecting the terminal of the network which is farthest from the ground as thus indicated by the potentiometer to ground through a condenser, impressing a ripple voltage on the circuit, and then measuring the ripple voltage at the midpoint of each parallel branch to find the branch in which the ripple voltage is lowest, and then measuring the distribution of the ripple voltage along this branch to determine the point at which the ripple voltage drops off to zero or substantially zero due to the existence of a ground. The invention has particular application where a circuit ground is present which will disappear if the normal circuit voltage is removed.

  13. Record-breaking events during the compressive failure of porous materials

    NASA Astrophysics Data System (ADS)

    Pál, Gergő; Raischel, Frank; Lennartz-Sassinek, Sabine; Kun, Ferenc; Main, Ian G.

    2016-03-01

    An accurate understanding of the interplay between random and deterministic processes in generating extreme events is of critical importance in many fields, from forecasting extreme meteorological events to the catastrophic failure of materials and in the Earth. Here we investigate the statistics of record-breaking events in the time series of crackling noise generated by local rupture events during the compressive failure of porous materials. The events are generated by computer simulations of the uniaxial compression of cylindrical samples in a discrete element model of sedimentary rocks that closely resemble those of real experiments. The number of records grows initially as a decelerating power law of the number of events, followed by an acceleration immediately prior to failure. The distribution of the size and lifetime of records are power laws with relatively low exponents. We demonstrate the existence of a characteristic record rank k*, which separates the two regimes of the time evolution. Up to this rank deceleration occurs due to the effect of random disorder. Record breaking then accelerates towards macroscopic failure, when physical interactions leading to spatial and temporal correlations dominate the location and timing of local ruptures. The size distribution of records of different ranks has a universal form independent of the record rank. Subsequences of events that occur between consecutive records are characterized by a power-law size distribution, with an exponent which decreases as failure is approached. High-rank records are preceded by smaller events of increasing size and waiting time between consecutive events and they are followed by a relaxation process. As a reference, surrogate time series are generated by reshuffling the event times. 
The record statistics of the uncorrelated surrogates agrees very well with the corresponding predictions of independent identically distributed random variables, which confirms that temporal and spatial
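The surrogate comparison described above has a classical benchmark: for n i.i.d. continuous draws (the reshuffled, uncorrelated case), the expected number of records is the n-th harmonic number H_n. A minimal Python check of that prediction, with series length and trial count chosen arbitrarily for illustration:

```python
import random

def count_records(series):
    """Count record-breaking events: entries exceeding every earlier entry."""
    n_rec, best = 0, float("-inf")
    for x in series:
        if x > best:
            n_rec, best = n_rec + 1, x
    return n_rec

def expected_records_iid(n):
    """For n i.i.d. continuous draws the expected record count is the
    n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

random.seed(0)
n, trials = 1000, 500
mean_rec = sum(count_records([random.random() for _ in range(n)])
               for _ in range(trials)) / trials
print(mean_rec, expected_records_iid(n))
```

An empirical mean close to H_n confirms the uncorrelated baseline; correlated catalogs deviate from it, which is exactly the diagnostic the abstract exploits.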

  14. Accurately Determining the Risks of Rising Sea Level

    NASA Astrophysics Data System (ADS)

    Marbaix, Philippe; Nicholls, Robert J.

    2007-10-01

    With the highest density of people and the greatest concentration of economic activity located in the coastal regions, sea level rise is an important concern as the climate continues to warm. Subsequent flooding may potentially disrupt industries, populations, and livelihoods, particularly in the long term if the climate is not quickly stabilized [McGranahan et al., 2007; Tol et al., 2006]. To help policy makers understand these risks, a more accurate description of hazards posed by rising sea levels is needed at the global scale, even though the impacts in specific regions are better known.

  15. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    PubMed

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  16. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  17. Lunar Impact Flash Locations from NASA's Lunar Impact Monitoring Program

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Suggs, R. M.; Kupferschmidt, L.; Feldman, J.

    2015-01-01

    dependent upon LRO finding a fresh impact crater associated with one of the impact flashes recorded by Earth-based instruments, either the bright event of March 2013 or any other in the database of impact observations. To find the crater, LRO needed an accurate area to search. This Technical Memorandum (TM) describes the geolocation technique developed to accurately determine the impact flash location, and by association, the location of the crater, thought to lie directly beneath the brightest portion of the flash. The workflow and software tools used to geolocate the impact flashes are described in detail, along with sources of error and uncertainty and a case study applying the workflow to the bright impact flash in March 2013. Following the successful geolocation of the March 2013 flash, the technique was applied to all impact flashes detected by the MEO between November 7, 2005, and January 3, 2014.

  18. Empirical law for fault-creep events

    USGS Publications Warehouse

    Crough, S.T.; Burford, R.O.

    1977-01-01

Fault-creep events measured on the San Andreas and related faults near Hollister, California, can be described by a rheological model consisting of a spring, power-law dashpot, and sliding block connected in series. An empirical creep-event law, derived from many creep-event records analyzed within the constraints of the model, provides a remarkably simple and accurate representation of creep-event behavior. The empirical creep law is expressed by the equation: D(t) = Df [1 - 1/{C t (n-1) Df^(n-1) + 1}^(1/(n-1))], where D is the value of displacement at time t following the onset of an event, Df is the final equilibrium value of the event displacement, and C is a proportionality constant. This discovery should help determine whether the time-displacement character of creep events is controlled by the material properties of fault gouge, or by other parameters. © 1977.
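The creep law above rises from zero at event onset toward the equilibrium displacement Df. A minimal numerical sketch of the law as printed in the abstract (the parameter values Df, C, and n below are hypothetical, chosen only for illustration):

```python
def creep_displacement(t, D_f, C, n):
    """Empirical creep-event law:
    D(t) = D_f [1 - 1/(C t (n-1) D_f**(n-1) + 1)**(1/(n-1))]."""
    return D_f * (1.0 - (C * t * (n - 1.0) * D_f ** (n - 1.0) + 1.0)
                  ** (-1.0 / (n - 1.0)))

# Hypothetical parameters: 5 mm final slip, C = 0.5, power-law index n = 3.
for t in (0.0, 1.0, 10.0, 100.0):
    print(t, creep_displacement(t, D_f=5.0, C=0.5, n=3.0))
```

Note the limiting behaviour: D(0) = 0 and D(t) approaches Df as t grows, consistent with a creep event relaxing to its final offset.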

  19. The LHCb VERTEX LOCATOR performance and VERTEX LOCATOR upgrade

    NASA Astrophysics Data System (ADS)

    Rodríguez Pérez, P.

    2012-12-01

LHCb is an experiment dedicated to the study of new physics in the decays of beauty and charm hadrons at the Large Hadron Collider (LHC) at CERN. The Vertex Locator (VELO) is the silicon detector surrounding the LHCb interaction point. The detector operates in a severe and highly non-uniform radiation environment. The small pitch and analogue readout result in a best single hit precision of 4 μm. The upgrade of the LHCb experiment, planned for 2018, will transform the entire readout to a trigger-less system operating at 40 MHz event rate. The vertex detector will have to cope with radiation levels up to 10^16 1 MeV n_eq/cm^2, more than an order of magnitude higher than those expected at the current experiment. A solution is under development with a pixel detector, based on the Timepix/Medipix family of chips with 55 x 55 μm pixels. In addition a micro-strip solution is also under development, with finer pitch, higher granularity and lower mass than the current detector. The current status of the VELO will be described together with recent testbeam results.

  20. Acoustic wave-equation-based earthquake location

    NASA Astrophysics Data System (ADS)

    Tong, Ping; Yang, Dinghui; Liu, Qinya; Yang, Xu; Harris, Jerry

    2016-04-01

We present a novel earthquake location method using acoustic wave-equation-based traveltime inversion. The linear relationship between the location perturbation (δt0, δxs) and the resulting traveltime residual δt of a particular seismic phase, represented by the traveltime sensitivity kernel K(t0, xs) with respect to the earthquake location (t0, xs), is theoretically derived based on the adjoint method. Traveltime sensitivity kernel K(t0, xs) is formulated as a convolution between the forward and adjoint wavefields, which are calculated by numerically solving two acoustic wave equations. The advantage of this newly derived traveltime kernel is that it not only takes into account the earthquake-receiver geometry but also accurately honours the complexity of the velocity model. The earthquake location is obtained by solving a regularized least-squares problem. In 3-D realistic applications, it is computationally expensive to conduct full wave simulations. Therefore, we propose a 2.5-D approach which confines the forward and adjoint wave simulations to a 2-D vertical plane passing through the earthquake and receiver. Various synthetic examples show the accuracy of this acoustic wave-equation-based earthquake location method. The accuracy and efficiency of the 2.5-D approach for 3-D earthquake location are further verified by its application to the 2004 Big Bear earthquake in Southern California.
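The regularized least-squares step the abstract describes can be sketched generically: stack the per-station sensitivity kernels into a matrix G, and solve (GᵀG + λI) δm = Gᵀ δt for the perturbation δm = (δt0, δxs). The kernel matrix below is random stand-in data, not a real traveltime sensitivity kernel:

```python
import numpy as np

def location_update(G, dt, lam=1e-6):
    """Solve the regularized least-squares problem
    (G^T G + lam*I) dm = G^T dt for the perturbation dm = (dt0, dx, dy, dz)."""
    A = G.T @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ dt)

# Toy check with hypothetical kernels: 6 stations, 4 unknowns.
rng = np.random.default_rng(42)
G = rng.normal(size=(6, 4))          # rows: d(traveltime)/d(t0, x, y, z)
dm_true = np.array([0.2, 1.5, -0.8, 0.4])
dm_est = location_update(G, G @ dm_true, lam=1e-8)
print(dm_est)
```

With noise-free residuals and light damping, the update recovers the imposed perturbation; in practice λ trades resolution against stability of the inversion.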

  1. Surface Properties Associated With Dust Storm Plume's Point-Source Locations In The Border Region Of The US And Mexico

    NASA Astrophysics Data System (ADS)

    Bleiweiss, M. P.; DuBois, D. W.; Flores, M. I.

    2013-12-01

Dust storms in the border region of the Southwest US and Northern Mexico are a serious problem for air quality (PM10 exceedances), health (Valley Fever is pandemic in the region) and transportation (road closures and deadly traffic accidents). In order to better understand the phenomena, we are attempting to identify critical characteristics of dust storm sources so that, possibly, one can perform more accurate predictions of events and, thus, mitigate some of the deleterious effects. Besides the emission mechanisms for dust storm production that are tied to atmospheric dynamics, one must know those locations whose source characteristics can be tied to dust production and, therefore, identify locations where a dust storm is imminent under favorable atmospheric dynamics. During the past 13 years, we have observed, on satellite imagery, more than 500 dust events in the region and are in the process of identifying the source regions for the dust plumes that make up an event. Where satellite imagery exists with high spatial resolution (less than or equal to 250m), dust 'plumes' appear to be made up of individual and merged plumes that are emitted from a 'point source' (smaller than the resolution of the imagery). In particular, we have observed events from the ASTER sensor whose spatial resolution is 15m as well as Landsat whose spatial resolution is 30m. Tying these source locations to surface properties such as NDVI, albedo, and soil properties (percent sand, silt, clay, and gravel; soil moisture; etc.) will identify regions with enhanced capability to produce a dust storm. This, along with atmospheric dynamics, will allow the forecast of dust events. The analysis of 10 events from the period 2004-2013, for which we have identified 1124 individual plumes, will be presented.

  2. Vents to events: determining an eruption event record from volcanic vent structures for the Harrat Rahat, Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Runge, Melody G.; Bebbington, Mark S.; Cronin, Shane J.; Lindsay, Jan M.; Kenedi, Catherine L.; Moufti, Mohammed Rashad H.

    2014-03-01

Distributed "monogenetic" volcanic eruptions commonly occur in continental settings without obvious structural alignments or rifting/extensional structures. Nevertheless, these may develop as fissures, representing the surface expression of dykes with a range of orientations, especially when stress regimes vary over time and/or older crustal features and faults are exploited by rising magmas. Dykes reaching the surface as fissures can last hours to months and produce groups of closely aligned vents, hiding the true extent of the source fissure. Grouped or aligned vents in a distributed volcanic environment add complexity to hazard modelling where the majority of eruptions are single-vent, point-source features, represented by cones, craters or domes; i.e. vent groups may represent fissure events, or single eruptions coincidently located but erupted hundreds to tens of thousands of years apart. It is common practice in hazard estimation for intraplate monogenetic volcanism to assume that a single eruption cone or crater represents an individual eruptive event, but this could lead to a significant overestimate of temporal recurrence rates if multiple-site and fissure eruptions were common. For accurate recurrence rate estimates and hazard-event scenarios, a fissure eruption, with its multiple cones, must be considered as a single multi-dimensional eruptive event alongside the single-vent eruptions. We present a statistical method to objectively determine eruptive events from visible vents, and illustrate this using the 968 vents of the 10 Ma to 0.6 ka volcanic field of Harrat Rahat, Saudi Arabia. A further method is presented to estimate the number of hidden vents in a thick volcanic pile. By combining these two methods for Harrat Rahat, we determined an updated spatial recurrence rate estimate, and an average temporal recurrence rate of 7.5 × 10^-5 events/year.
This new analysis highlights more concentrated regions of higher temporal hazard in parts of Harrat Rahat

  3. Creating Special Events

    ERIC Educational Resources Information Center

    deLisle, Lee

    2009-01-01

    "Creating Special Events" is organized as a systematic approach to festivals and events for students who seek a career in event management. This book looks at the evolution and history of festivals and events and proceeds to the nuts and bolts of event management. The book presents event management as the means of planning, organizing, directing,…

  4. Groth Deep Locations Image

    NASA Technical Reports Server (NTRS)

    2003-01-01

NASA's Galaxy Evolution Explorer photographed this ultraviolet color blowup of the Groth Deep Image on June 22 and June 23, 2003. Hundreds of galaxies are detected in this portion of the image, and the faint red galaxies are believed to be 6 billion light years away. The white boxes show the location of these distant galaxies, of which more than 100 can be detected in this image. NASA astronomers expect to detect 10,000 such galaxies after extrapolating to the full image at a deeper exposure level.

    The Galaxy Evolution Explorer mission is led by the California Institute of Technology, which is also responsible for the science operations and data analysis. NASA's Jet Propulsion Laboratory, Pasadena, Calif., a division of Caltech, manages the mission and built the science instrument. The mission was developed under NASA's Explorers Program, managed by the Goddard Space Flight Center, Greenbelt, Md. The mission's international partners include South Korea and France.

  5. Probabilistic earthquake location and 3-D velocity models in routine earthquake location

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Husen, S.

    2003-12-01

    Earthquake monitoring agencies, such as local networks or CTBTO, are faced with the dilemma of providing routine earthquake locations in near real-time with high precision and meaningful uncertainty information. Traditionally, routine earthquake locations are obtained from linearized inversion using layered seismic velocity models. This approach is fast and simple. However, uncertainties derived from a linear approximation to a set of non-linear equations can be imprecise, unreliable, or even misleading. In addition, 1-D velocity models are a poor approximation to real Earth structure in tectonically complex regions. In this paper, we discuss the routine location of earthquakes in near real-time with high precision using non-linear, probabilistic location methods and 3-D velocity models. The combination of non-linear, global search algorithms with probabilistic earthquake location provides a fast and reliable tool for earthquake location that can be used with any kind of velocity model. The probabilistic solution to the earthquake location includes a complete description of location uncertainties, which may be irregular and multimodal. We present applications of this approach to determine seismicity in Switzerland and in Yellowstone National Park, WY. Comparing our earthquake locations to earthquake locations obtained using linearized inversion and 1-D velocity models clearly demonstrates the advantages of probabilistic earthquake location and 3-D velocity models. For example, the more complete and reliable uncertainty information of non-linear, probabilistic earthquake location greatly facilitates the identification of poorly constrained hypocenters. Such events are often not identified in linearized earthquake location, since the location uncertainties are determined with a simplified, localized and approximate Gaussian statistic.
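A minimal sketch of non-linear, probabilistic location by grid search, assuming Gaussian arrival-time errors and a constant-velocity toy medium (any 3-D travel-time function could be substituted for `tt`); the resulting PDF over trial hypocentres is exactly the kind of complete, possibly multimodal uncertainty description the abstract advocates:

```python
import numpy as np

def location_pdf(stations, obs_times, travel_time, grid, sigma=0.05):
    """Normalized posterior over trial hypocentres; the unknown origin
    time is removed by demeaning residuals (optimal for an L2 misfit)."""
    pdf = np.empty(len(grid))
    for i, xs in enumerate(grid):
        r = obs_times - np.array([travel_time(st, xs) for st in stations])
        r = r - r.mean()
        pdf[i] = np.exp(-0.5 * (r @ r) / sigma**2)
    return pdf / pdf.sum()

# Toy setup: constant-velocity medium, four stations, 2-D epicentre grid.
v = 3.0
tt = lambda st, xs: np.hypot(st[0] - xs[0], st[1] - xs[1]) / v
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_xs = (3.0, 4.0)
obs = np.array([tt(st, true_xs) for st in stations]) + 1.23  # unknown origin time
grid = [(x, y) for x in range(11) for y in range(11)]
best = grid[int(np.argmax(location_pdf(stations, obs, tt, grid)))]
print(best)
```

Because the whole PDF is retained rather than a single linearized solution, poorly constrained hypocenters show up directly as broad or multimodal distributions.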

  6. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  7. Close binding of identity and location in visual feature perception

    NASA Technical Reports Server (NTRS)

    Johnston, J. C.; Pashler, H.

    1990-01-01

    The binding of identity and location information in disjunctive feature search was studied. Ss searched a heterogeneous display for a color or a form target, and reported both target identity and location. To avoid better than chance guessing of target identity (by choosing the target less likely to have been seen), the difficulty of the two targets was equalized adaptively; a mathematical model was used to quantify residual effects. A spatial layout was used that minimized postperceptual errors in reporting location. Results showed strong binding of identity and location perception. After correction for guessing, no perception of identity without location was found. A weak trend was found for accurate perception of target location without identity. We propose that activated features generate attention-calling "interrupt" signals, specifying only location; attention then retrieves the properties at that location.

  8. Location of Maximum Credible Beam Losses in LCLS Injector

    SciTech Connect

    Mao, Stan

    2010-12-13

    The memo describes the maximum credible beam the LCLS injector can produce and lose at various locations along the beamline. The estimation procedure is based upon three previous reports [1, 2, 3]. While specific numbers have been updated to accurately reflect the present design parameters, the conclusions are very similar to those given in Ref 1. The source of the maximum credible beam results from the explosive electron emission from the photocathode if the drive laser intensity exceeds the threshold for plasma production. In this event, the gun's RF field can extract a large number of electrons from this plasma which are accelerated out of the gun and into the beamline. This electron emission persists until it has depleted the gun of all its energy. Hence the number of electrons emitted per pulse is limited by the amount of stored RF energy in the gun. It needs to be emphasized that this type of emission is highly undesirable, as it causes permanent damage to the cathode.

  9. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  10. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  11. Atom location by electron channeling analysis

    SciTech Connect

    Pennycook, S.J.

    1984-07-01

For many years the orientation dependence of the characteristic x-ray emission close to a Bragg reflection has been regarded as a hindrance to accurate microanalysis, and a random incident beam direction has always been recommended for accurate composition analysis. However, this orientation dependence can be put to use to extract information on the lattice location of foreign atoms within the crystalline matrix. Here a generalization of the technique is described which is applicable to any crystal structure including monatomic crystals, and can quantitatively determine substitutional fractions of impurities. The technique was referred to as electron channeling analysis, by analogy with the closely related and widely used bulk technique of ion channeling analysis, and was developed for lattice location studies of dopants in semiconductors at high spatial resolution. Only two spectra are required for each channeling analysis, one in each of the channeling conditions described above. If the matrix and dopant x-ray yields vary identically between the two orientations then the dopant necessarily lies within the reflecting matrix planes. If the dopant x-ray yield does not vary, the dopant atoms are randomly located with respect to the matrix planes. 10 references, 2 figures.

  12. Object Locating System

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)

    2000-01-01

A portable system is provided that is operational for determining, with three dimensional resolution, the position of a buried object or approximately positioned object that may move in space or air or gas. The system has a plurality of receivers for detecting the signal from a target antenna and measuring the phase thereof with respect to a reference signal. The relative permittivity and conductivity of the medium in which the object is located is used along with the measured phase signal to determine a distance between the object and each of the plurality of receivers. Knowing these distances, an iteration technique is provided for solving equations simultaneously to provide position coordinates. The system may also be used for tracking movement of an object within close range of the system by sampling and recording subsequent position of the object. A dipole target antenna, when positioned adjacent to a buried object, may be energized using a separate transmitter which couples energy to the target antenna through the medium. The target antenna then preferably resonates at a different frequency, such as a second harmonic of the transmitter frequency.
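The iteration technique for solving the range equations simultaneously can be illustrated with a generic Gauss-Newton trilateration sketch; the receiver geometry and target position below are invented for the example and are not taken from the patent:

```python
import numpy as np

def locate(receivers, distances, guess, iters=25):
    """Gauss-Newton iteration on the range equations |x - r_i| = d_i,
    solving the over-determined system in the least-squares sense."""
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        diff = x - receivers
        ranges = np.linalg.norm(diff, axis=1)
        J = diff / ranges[:, None]            # Jacobian of ranges w.r.t. x
        step, *_ = np.linalg.lstsq(J, ranges - distances, rcond=None)
        x -= step
    return x

receivers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                      [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
target = np.array([2.0, 3.0, 4.0])
distances = np.linalg.norm(receivers - target, axis=1)
print(locate(receivers, distances, guess=[1.0, 1.0, 1.0]))
```

With four or more non-coplanar receivers the system is over-determined, which is what makes three-dimensional resolution possible from phase-derived distances.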

  13. Submerged marine streamer locator

    SciTech Connect

    Roberts, F.A.

    1987-01-06

    An apparatus is described for use in determining relative to known geographic locations on a sea floor the position of a moving submerged marine seismic streamer while being towed through the sea by an exploration vessel, which comprises: spaced apart acoustic receivers and at least one acoustic transducer-receiver carried by the streamer. The transducer-receiver is capable of emitting acoustic command signals when triggered by means controllable from the moving vessel and the receivers are capable of receiving and distinguishing distinctly different acoustic frequencies to transmit distinguishable signals responsive thereto along the streamer to recording means on the vessel; at least three sea floor transponders spatially displaced from each other at known positions relative to the sea floor and each of the transponders being capable of responding to a single acoustic command signal from the transducer-receiver in the moving streamer while being towed by the vessel. Each of the transponders emits signals of a distinctly different frequency; and means for recording the time interval from initiation of a command signal from the streamer transducer to the receipt of each signal relayed along the streamer from each of the receivers in response to the signals from the transponders. In this way, the distance of each of the streamer receivers from each of the known positions of the transponders may be calculated.

  14. AOTV bow shock location

    NASA Technical Reports Server (NTRS)

    Desautel, D.

    1985-01-01

Hypersonic bow-shock location and geometry are of central importance to the aerodynamics and aerothermodynamics of aeroassisted orbital transfer vehicles (AOTVs), but they are difficult to predict for a given vehicle configuration. This paper reports experimental measurements of shock standoff distance for the 70 deg cone AOTV configuration in shock-tunnel-test flows at Mach numbers of 3.8 to 7.9 and for angles of attack from 0 deg to 20 deg. The controlling parameter for hypersonic bow-shock standoff distance (for a given forebody shape) is the mean normal-shock density ratio. Values for this parameter in the tests reported are in the same range as those of the drag-brake AOTV perigee regime. Results for standoff distance are compared with those previously reported in the literature for this AOTV configuration. It is concluded that the AOTV shock standoff distance for the conical configuration, based on frustum (base) radius, is equivalent to that of a sphere with a radius about 35 percent greater than that of the cone; the distance is, therefore, much less than reported in previous studies. Some reasons for the discrepancies between the present and previous results are advanced. The smaller standoff distance determined here implies there will be less radiative heat transfer than was previously expected.

  15. Detecting and characterising ramp events in wind power time series

    NASA Astrophysics Data System (ADS)

    Gallego, Cristóbal; Cuerva, Álvaro; Costa, Alexandre

    2014-12-01

In order to implement accurate models for wind power ramp forecasting, ramps need to be previously characterised. This issue has been typically addressed by performing binary ramp/non-ramp classifications based on ad-hoc assessed thresholds. However, recent works question this approach. This paper presents the ramp function, an innovative wavelet-based tool which detects and characterises ramp events in wind power time series. The underlying idea is to assess a continuous index related to the ramp intensity at each time step, which is obtained by considering large power output gradients evaluated under different time scales (up to typical ramp durations). The ramp function overcomes some of the drawbacks shown by the aforementioned binary classification and permits forecasters to easily reveal specific features of the ramp behaviour observed at a wind farm. As an example, the daily profiles of the ramp-up and ramp-down intensities are obtained for the case of a wind farm located in Spain.
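As a rough stand-in for the paper's wavelet-based ramp function, one can score each time step by its largest signed power change over several look-back windows; this is a simplified illustration of the multi-scale gradient idea, not the authors' actual construction:

```python
import numpy as np

def ramp_index(power, scales):
    """Simplified ramp intensity: the largest-magnitude signed power change
    over a set of look-back windows; positive = ramp-up, negative = ramp-down."""
    p = np.asarray(power, dtype=float)
    idx = np.zeros_like(p)
    for s in scales:
        d = p[s:] - p[:-s]
        # keep whichever change has the larger magnitude at each step
        mask = np.abs(d) > np.abs(idx[s:])
        idx[s:][mask] = d[mask]
    return idx
```

Unlike a binary ramp/non-ramp label, the index stays continuous, so thresholds (if needed at all) can be chosen after the fact.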

  16. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  17. Rare event simulation in radiation transport

    SciTech Connect

    Kollman, C.

    1993-10-01

This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give with probability one, a sequence of estimates converging exponentially fast to the true solution.
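Importance sampling with likelihood-ratio weighting, as described above, can be illustrated on a textbook rare event: the Gaussian tail probability P(Z > a). Sampling from a proposal centred on the rare region and reweighting keeps the estimator unbiased while slashing its variance (the example is generic, not the dissertation's neutron-transport model):

```python
import math
import random

def tail_prob_is(a, n=100_000, seed=1):
    """Estimate P(Z > a) for Z ~ N(0, 1) by sampling from N(a, 1) and
    weighting hits by the likelihood ratio phi(y)/phi(y - a) = exp(a^2/2 - a*y)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)          # proposal centred on the rare region
        if y > a:
            acc += math.exp(a * a / 2.0 - a * y)
    return acc / n

est = tail_prob_is(4.0)
print(est)   # exact value is about 3.17e-5
```

A naive Monte Carlo run of the same size would see only a handful of hits at a = 4, whereas here roughly half the samples land in the rare region and are then unweighted back to the true measure.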

  18. Exploring Large-Scale Cross-Correlation for Teleseismic and Regional Seismic Event Characterization

    NASA Astrophysics Data System (ADS)

    Hauk, T. F.; Dodge, D. A.; Addair, T.; Walter, W. R.; Myers, S. C.; Ford, S. R.; Harris, D. B.; Ruppert, S. D.

    2013-12-01

    The decrease in cost of digital storage space and computation power invites new methods of seismic data processing. At Lawrence Livermore National Laboratory (LLNL), we operate a research database of seismic events and waveforms for nuclear explosion monitoring and other applications. The LLNL database contains several million events associated with more than 330 million waveforms at thousands of stations. We are using this database to explore the power of seismic waveform correlation to quantify signal similarities, to discover new events not in catalogs, and to more accurately locate events and identify source types. The results presented here are preliminary, and apply mostly to a subset of seismicity in Eurasia and North America. Much more remains to be done to understand and make use of these results. We computed the waveform correlation for event pairs in the LLNL database in 15 frequency bands and for 8 phase windows. The correlation coefficient exceeds 0.6 for over 370 million waveform pairs. Overall, about 16% of the events in our waveform database correlate with one or more events on at least one channel. However, at very short distances, this number rises to as high as 55%. At distances > 20 degrees the percent of correlated events ranges from ~1% to 10%. The majority of correlated waveforms are found at relatively small (< 10 degrees) station-event separations in short-period bands. Most correlations at teleseismic distances are in long-period bands. Also, for most events, correlated traces are found at only a few stations. The mean magnitude of correlated events is about 1 unit lower than the mean of the events in our waveform database and the standard deviation of the magnitude difference of correlated events is 0.6. Apparently, large-scale correlation processing is most likely to work well for small events of similar magnitude. 
We have clustered the correlation results in both short- and long-period bands and have identified 439 long-period and 1333
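    The pairwise waveform correlation at the heart of this processing can be sketched as a normalized cross-correlation, maximized over lag, with 0.6 as the detection threshold quoted above. The traces below are synthetic, for illustration only:

```python
import numpy as np

def band_correlation(x, y):
    """Normalized cross-correlation of two equal-length traces, maximized over lag."""
    x = x - x.mean()
    y = y - y.mean()
    cc = np.correlate(x, y, mode="full")
    norm = np.sqrt(np.dot(x, x) * np.dot(y, y))
    return cc.max() / norm if norm > 0 else 0.0

# Two similar synthetic "waveforms": same wavelet, small time shift plus noise
t = np.linspace(0, 1, 400)
wavelet = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
a = wavelet
b = np.roll(wavelet, 10) + 0.05 * np.random.default_rng(0).standard_normal(t.size)

print(band_correlation(a, b) > 0.6)  # passes the 0.6 detection threshold
```

    In a production setting this comparison would be repeated per frequency band and per phase window, as described above, rather than on a single broadband trace.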

  19. Transfer of location-specific control to untrained locations.

    PubMed

    Weidler, Blaire J; Bugg, Julie M

    2016-11-01

    Recent research highlights a seemingly flexible and automatic form of cognitive control that is triggered by potent contextual cues, as exemplified by the location-specific proportion congruence effect--reduced compatibility effects in locations associated with a high as compared to low likelihood of conflict. We investigated just how flexible location-specific control is by examining whether novel locations effectively cue control for congruency-unbiased stimuli. In two experiments, biased (mostly compatible or mostly incompatible) training stimuli appeared in distinct locations. During a final block, unbiased (50% compatible) stimuli appeared in novel untrained locations spatially linked to biased locations. The flanker compatibility effect was reduced for unbiased stimuli in novel locations linked to a mostly incompatible compared to a mostly compatible location, indicating transfer. Transfer was observed when stimuli appeared along a linear function (Experiment 1) or in rings of a bullseye (Experiment 2). The novel transfer effects imply that location-specific control is more flexible than previously reported and further counter the complex stimulus-response learning account of location-specific proportion congruence effects. We propose that the representation and retrieval of control settings in untrained locations may depend on environmental support and the presentation of stimuli in novel locations that fall within the same categories of space as trained locations. PMID:26800157

  20. Does the Nature of the Experience Influence Suggestibility? A Study of Children's Event Memory.

    ERIC Educational Resources Information Center

    Gobbo, Camilla; Mega, Carolina; Pipe, Margaret-Ellen

    2002-01-01

    Two experiments examined effects of event modality on young children's memory and suggestibility. Findings indicated that 5-year-olds were more accurate than 3-year-olds and those participating in the event were more accurate than those either observing or listening to a narrative. Assessment method, level of event learning, delay to testing, and…

  1. Assessing Special Events.

    ERIC Educational Resources Information Center

    Neff, Bonita Dostal

    Special events defined as being "newsworthy events" are becoming a way of American life. They are also a means for making a lot of money. Examples of special events that are cited most frequently are often the most minor of events; e.g., the open house, the new business opening day gala, or a celebration of some event in an organization. Little…

  2. Memory for time and place contributes to enhanced confidence in memories for emotional events

    PubMed Central

    Rimmele, Ulrike; Davachi, Lila; Phelps, Elizabeth A.

    2012-01-01

    Emotion strengthens the subjective sense of remembering. However, these confidently remembered emotional memories have not been found to be more accurate for some types of contextual details. We investigated whether the subjective sense of recollecting negative stimuli is coupled with enhanced memory accuracy for three specific types of central contextual details using the remember/know paradigm and confidence ratings. Our results indicate that the subjective sense of remembering is indeed coupled with better recollection of spatial location and temporal context. In contrast, we found a double-dissociation between the subjective sense of remembering and memory accuracy for colored dots placed in the conceptual center of negative and neutral scenes. These findings show that the enhanced subjective recollective experience for negative stimuli reliably indicates objective recollection for spatial location and temporal context, but not for other types of details, whereas for neutral stimuli, the subjective sense of remembering is coupled with all the types of details assessed. Translating this finding to flashbulb memories, we found that, over time, more participants correctly remembered the location where they learned about the terrorist attacks on 9/11 than any other canonical feature. Likewise, participants’ confidence was higher in their memory for location vs. other canonical features. These findings indicate that the strong recollective experience of a negative event corresponds to an accurate memory for some kinds of contextual details, but not other kinds. This discrepancy provides further evidence that the subjective sense of remembering negative events is driven by a different mechanism than the subjective sense of remembering neutral events. PMID:22642353

  3. Event Segmentation Ability Uniquely Predicts Event Memory

    PubMed Central

    Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.

    2013-01-01

    Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350

  4. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  5. Bayesian analysis of rare events

    NASA Astrophysics Data System (ADS)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
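    The rejection-sampling view of Bayesian updating that underlies BUS can be sketched in a few lines. The example below updates a small failure probability with one noisy observation; it uses plain Monte Carlo rather than FORM, IS or SuS, and every number in it (prior, noise level, threshold) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Prior: system capacity R ~ Normal(5, 1); rare event of interest: R < 2
prior = rng.normal(5.0, 1.0, size=200_000)

def likelihood(r, r_obs=4.5, sigma=0.5):
    """Gaussian measurement likelihood for one noisy observation of R."""
    return np.exp(-0.5 * ((r - r_obs) / sigma) ** 2)

# BUS-style rejection step: accept each prior sample with probability L / c,
# where c bounds the likelihood from above (c = 1 here, since L <= 1)
u = rng.uniform(size=prior.size)
posterior = prior[u < likelihood(prior)]

p_prior = np.mean(prior < 2.0)      # prior probability of the rare event
p_post = np.mean(posterior < 2.0)   # updated probability after the observation
print(p_prior, p_post)
```

    For genuinely rare events, the accepted-sample set would be too sparse for plain Monte Carlo; the point of BUS is that the same accept/reject formulation lets the established rare-event estimators named above take over.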

  6. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  7. 9. SITE MAP HIGHLIGHTING SIGNIFICANT BUILDINGS AND SHOWING LOCATION LOCATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. SITE MAP HIGHLIGHTING SIGNIFICANT BUILDINGS AND SHOWING LOCATION OF OUTPATIENT CLINIC ADDITION - U.S. Veterans Administration Medical Center, 600 South Seventieth Street, Lincoln, Lancaster County, NE

  8. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E. )

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  9. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
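    A first- or second-order temperature correction of the kind described can be sketched as below. The calibration numbers are invented for illustration, not measured LED data, and a real feedback loop would correct chromaticity per channel rather than a single flux value:

```python
import numpy as np

# Hypothetical calibration: relative flux of one LED channel vs junction temperature
Tj = np.array([25.0, 45.0, 65.0, 85.0])        # deg C, calibration points
flux = np.array([1.00, 0.93, 0.85, 0.76])      # relative flux (assumed data)

a, b, c = np.polyfit(Tj, flux, 2)              # second-order empirical fit

def drive_correction(T):
    """Drive-current scale factor restoring nominal flux at junction temperature T."""
    return 1.0 / (a * T * T + b * T + c)

print(round(drive_correction(70.0), 3))
```

    The same polynomial form can be fit to chromaticity coordinates, which is what allows the closed-loop system described above to hold white-point drift within a small Δuv band over temperature.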

  10. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow for calibrating an accurate mask model and enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  11. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  12. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters of an involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
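    The closure technique mentioned above can be illustrated with a toy model: measuring the artifact in all N relative orientations makes the artifact's pitch deviations and the instrument's systematic errors separable, because each rotation pairs every gear position with every machine position. This is a minimal sketch with an invented error model, not the NMIJ/PTB implementations:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12
gear = rng.normal(0, 2.0, N); gear -= gear.mean()           # true artifact deviations
machine = rng.normal(0, 1.0, N); machine -= machine.mean()  # device systematic errors

# Closure measurements: artifact rotated through all N relative positions,
# so measurement (i, j) sees gear position (j + i) mod N at machine position j
M = np.array([[gear[(j + i) % N] + machine[j] for j in range(N)]
              for i in range(N)])

# Averaging over rotations cancels the (zero-mean) gear term, leaving the device error
machine_est = M.mean(axis=0)
# Subtracting the device error and averaging over orientations recovers the gear
gear_est = np.array([np.mean([M[(k - j) % N, j] - machine_est[j]
                              for j in range(N)]) for k in range(N)])
```

    Both error vectors are recovered (up to a common mean) without any externally calibrated reference, which is what makes the closure approach attractive for primary pitch calibration.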

  13. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
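    The kind of performance model described (predicting a step time from a partition of an irregular workload) can be sketched for a 1-D grid. The cost model here, max partition load plus a fixed per-boundary communication charge, is hypothetical:

```python
# Predict the time of one parallel step for a 1-D grid partition: each
# processor's compute time is the sum of its cell weights; the step time is
# the max over processors plus a fixed cost per partition boundary.
def predict_step_time(weights, cuts, comm_cost=0.5):
    bounds = [0] + cuts + [len(weights)]
    loads = [sum(weights[bounds[i]:bounds[i + 1]]) for i in range(len(bounds) - 1)]
    return max(loads) + comm_cost * (len(bounds) - 2)

weights = [1, 1, 1, 4, 4, 4, 1, 1, 1]          # irregular per-cell workload
naive = predict_step_time(weights, [3, 6])     # equal cell counts per processor
balanced = predict_step_time(weights, [4, 5])  # equal work per processor
print(naive, balanced)
```

    A remapping decision of the sort discussed above amounts to comparing such predictions for candidate partitions against the cost of moving the data.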

  14. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  15. Event-Based Science.

    ERIC Educational Resources Information Center

    Wright, Russell G.

    1992-01-01

    Suggests that an event-based science curriculum can provide the framework for deciding what to retain in an overloaded science curriculum. Provides examples of current events and the science concepts explored related to the event. (MDH)

  16. Fault Location Methods for Ungrounded Distribution Systems Using Local Measurements

    NASA Astrophysics Data System (ADS)

    Xiu, Wanjing; Liao, Yuan

    2013-08-01

    This article presents novel fault location algorithms for ungrounded distribution systems. The proposed methods are capable of locating faults by using voltage and current measurements obtained at the local substation. Two types of fault location algorithms, using line-to-neutral and line-to-line measurements, are presented. The network structure and parameters are assumed to be known; the network structure needs to be updated based on information obtained from the utility telemetry system. With the help of the bus impedance matrix, local voltage changes due to the fault can be expressed as a function of the fault currents. Since the bus impedance matrix contains information about fault location, superimposed voltages at the local substation can be expressed as a function of fault location, through which the fault location can be solved. Simulation studies have been carried out based on a sample distribution power system. The evaluation study shows that very accurate fault location estimates are obtained from both types of methods.
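    The core idea, expressing substation quantities as a function of fault position and then inverting for that position, can be sketched on a single radial line. This is a deliberate simplification of the bus-impedance-matrix formulation for ungrounded networks, and all impedances below are invented:

```python
import numpy as np

# One radial line, total length 10 km, series impedance z per km (ohm/km)
z = 0.3 + 0.4j
L = 10.0
E = 11e3 / np.sqrt(3)          # source phase voltage (V)
Zs = 0.1 + 1.0j                # source impedance behind the substation bus

def substation_quantities(m, Rf=1.0):
    """Voltage and current seen at the substation for a fault at distance m (km)."""
    Zf = Zs + m * z + Rf
    If = E / Zf                # fault current drawn through the substation
    V = E - Zs * If            # substation bus voltage during the fault
    return V, If

# Synthetic "measurements" for a fault at 6.4 km
V_meas, I_meas = substation_quantities(6.4)

# Locate: the during-fault quantities are a function of distance, so scan
# candidate distances and keep the one whose prediction matches the measurement
grid = np.linspace(0.0, L, 2001)
mismatch = [abs(substation_quantities(m)[0] - V_meas) +
            abs(substation_quantities(m)[1] - I_meas) for m in grid]
m_est = grid[int(np.argmin(mismatch))]
print(round(m_est, 2))  # ≈ 6.4
```

    In the article's setting the scalar line model is replaced by the network's bus impedance matrix, so the same matching principle applies across laterals and multiple buses.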

  17. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    SciTech Connect

    Mohd, Shukri; Holford, Karen M.; Pullin, Rhys

    2014-02-12

    Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and DeltaT location. The results of the study show that the WTML method produces more accurate location results compared with the TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with the DeltaT location method but requires no initial acoustic calibration of the structure.

  18. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  19. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  20. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  1. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  2. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  3. Skylab short-lived event alert program

    NASA Technical Reports Server (NTRS)

    Citron, R. A.

    1974-01-01

    During the three manned Skylab missions, the Center for Short-Lived Phenomena (CSLP) reported a total of 39 significant events to the Johnson Space Center (JSC) as part of the Skylab Short-Lived Event Alert Program. The telegraphed daily status reports included the names and locations of the events, the track number and revolution number during which the event could be observed, the time (GMT) to within plus or minus 2 sec when Skylab was closest to the event area, and the light condition (daylight or darkness) at that time and place. The messages sent to JSC during the Skylab 4 mission also included information pertaining to ground-truth studies and observations being conducted on the events. Photographic priorities were assigned for each event.

  4. New method for lightning location using optical ground wire

    NASA Astrophysics Data System (ADS)

    Qin, Zhaoyu; Cheng, Zhaogu; Zhang, Zhiping; Zhu, Jianqiang; Li, Feng

    2006-12-01

    A new technology for lightning location is described, based on detecting fluctuations in the state of polarization (SOP) of the laser light in the optical ground wire (OPGW). Compared with the conventional lightning location method, the new method is more accurate, more stable, and cheaper. The theories of Stokes parameters and the Poincare sphere are introduced to analyze the SOP at the lightning strike point. It can be concluded that although the initial points of the SOP on the Poincare sphere are random, the SOP fluctuation generated by a lightning strike can still be accurately identified by detecting the velocity of the polarization motion. A new algorithm to quantify this velocity is also introduced.
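    The detection principle, monitoring the angular velocity of SOP motion on the Poincare sphere via Stokes parameters, can be sketched as follows. The polarization record is synthetic, standing in for OPGW data, and the burst parameters are invented:

```python
import numpy as np

def stokes(jones):
    """Normalized Stokes vector (S1, S2, S3) of a Jones vector [Ex, Ey]."""
    Ex, Ey = jones
    S0 = abs(Ex) ** 2 + abs(Ey) ** 2
    return np.array([abs(Ex) ** 2 - abs(Ey) ** 2,
                     2 * (Ex * np.conj(Ey)).real,
                     -2 * (Ex * np.conj(Ey)).imag]) / S0

def sop_speed(samples, dt):
    """Angular velocity (rad/s) of SOP motion on the Poincare sphere."""
    S = np.array([stokes(j) for j in samples])
    dots = np.clip(np.sum(S[:-1] * S[1:], axis=1), -1.0, 1.0)
    return np.arccos(dots) / dt

dt = 1e-4
t = np.arange(0, 0.1, dt)
# Slow background SOP drift, with a fast rotation burst mid-record (the "strike")
phase = 2 * np.pi * 1.0 * t + np.where((t > 0.04) & (t < 0.05),
                                       2 * np.pi * 400 * (t - 0.04), 0.0)
samples = [np.array([np.cos(p), np.sin(p)]) for p in phase]
speed = sop_speed(samples, dt)
print(speed.max() > 10 * np.median(speed))  # burst stands out from the drift
```

    Note that the detector thresholds the *velocity* of SOP motion, not the SOP itself, which is why the random initial polarization state of the fiber does not matter.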

  5. Seismicity patterns along the Ecuadorian subduction zone: new constraints from earthquake location in a 3-D a priori velocity model

    NASA Astrophysics Data System (ADS)

    Font, Yvonne; Segovia, Monica; Vaca, Sandro; Theunissen, Thomas

    2013-04-01

    To improve earthquake location, we create a 3-D a priori P-wave velocity model (3-DVM) that approximates the large velocity variations of the Ecuadorian subduction system. The 3-DVM is constructed from the integration of geophysical and geological data that depend on the structural geometry and velocity properties of the crust and the upper mantle. In addition, specific station selection is carried out to compensate for the high station density on the Andean Chain. 3-D synthetic experiments are then designed to evaluate the network capacity to recover the event position using only P arrivals and the MAXI technique. Three synthetic earthquake location experiments are proposed: (1) noise-free and (2) noisy arrivals used in the 3-DVM, and (3) noise-free arrivals used in a 1-DVM. Synthetic results indicate that, under the best conditions (exact arrival data set and 3-DVM), the spatiotemporal configuration of the Ecuadorian network can accurately locate 70 per cent of events in the frontal part of the subduction zone (average azimuthal gap is 289° ± 44°). Noisy P arrivals (up to ± 0.3 s) allow accurate location of 50 per cent of the earthquakes. Processing earthquake location within a 1-DVM almost never yields accurate hypocentre positions for offshore earthquakes (15 per cent), which highlights the importance of using a 3-DVM in subduction zones. For the application to real data, the seismicity distribution from the 3-D-MAXI catalogue is also compared to the determinations obtained in a 1-D-layered VM. In addition to good-quality location uncertainties, the clustering and the depth distribution confirm the 3-D-MAXI catalogue reliability. The pattern of the seismicity distribution (a 13 yr record during the inter-seismic period of the seismic cycle) is compared to the pattern of rupture zone and asperity of the Mw = 7.9 1942 and the Mw = 7.7 1958 events (the Mw = 8.8 1906 asperity patch is not defined). We observe that the nucleation of 1942, 1958 and 1906 events coincides with
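    The synthetic location tests above can be caricatured as a grid search for the epicentre that minimizes P arrival-time misfit. The sketch below uses a uniform 2-D velocity as a stand-in for the 3-DVM and a least-squares misfit rather than the MAXI technique; stations and the source are invented:

```python
import numpy as np

v = 6.0  # km/s, uniform P velocity (stand-in for the 3-D velocity model)
stations = np.array([[0, 0], [50, 5], [10, 60], [70, 70]], float)  # km
true_src, t0 = np.array([30.0, 25.0]), 4.0                         # epicentre, origin time

def arrivals(src, origin):
    """Predicted P arrival times at all stations for a given source."""
    return origin + np.linalg.norm(stations - src, axis=1) / v

obs = arrivals(true_src, t0)

# Grid search over candidate epicentres; the unknown origin time is
# eliminated by demeaning the residuals before summing their squares
best, best_misfit = None, np.inf
for x in np.arange(0, 80, 0.5):
    for y in np.arange(0, 80, 0.5):
        pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
        r = obs - pred
        misfit = np.sum((r - r.mean()) ** 2)
        if misfit < best_misfit:
            best, best_misfit = (x, y), misfit
print(best)  # ≈ (30.0, 25.0)
```

    The study's point is that when `pred` comes from a wrong (1-D) velocity model, the misfit minimum moves, biasing offshore hypocentres, which is why the 3-DVM matters.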

  6. LLNL Seismic Locations: Validating Improvement Through Integration of Regionalized Models and Empirical Corrections

    SciTech Connect

    Schultz, C.A.; Flanagan, M.P.; Myers, S.C.; Pasyanos, M.E.; Swenson, J.L.; Hanley, W.; Ryall, F.; Dodge, D.

    2001-07-27

    The monitoring of nuclear explosions on a global basis requires accurate event locations. As an example, a typical size used for an on-site inspection search area is 1,000 square kilometers, or approximately 17 km accuracy, assuming a circular area. This level of accuracy is a significant challenge for small events that are recorded using a sparse regional network. In such cases, the travel time of seismic energy is strongly affected by crustal and upper mantle heterogeneity, and large biases can result. This can lead to large systematic errors in location and, more importantly, to invalid error bounds associated with location estimates. Calibration data and methods are being developed and integrated to correct for these biases. Our research over the last few years has shown that one of the most effective approaches to generating path corrections is the hybrid technique that combines regionalized models with three-dimensional empirical travel-time corrections. We implement a rigorous and comprehensive uncertainty framework for these hybrid approaches. Qualitative and quantitative validations are presented in the form of single-component consistency checks, sensitivity analysis, robustness measures, and outlier testing, along with end-to-end testing of confidence measures. We focus on screening and validating both empirical and model-based calibrations, as well as the hybrid form that combines these two types of calibration. We demonstrate that the hybrid approach very effectively calibrates both travel-time and slowness attributes for seismic location in the Middle East, North Africa, and Western Eurasia (ME/NA/WE). Furthermore, it provides highly reliable uncertainty estimates. Finally, we summarize the NNSA validated data sets that have been provided to contractors in the last year.

  7. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing
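    The sensitivity/specificity/accuracy figures above come from applying a score cut-off to two groups. A minimal sketch of that computation — the scores below are invented for illustration, not the study data:

```python
def threshold_stats(patient_scores, control_scores, cutoff):
    """Sensitivity, specificity, and accuracy when scores >= cutoff are
    called 'disease'. True positives come from the patient group, true
    negatives from the control group."""
    tp = sum(s >= cutoff for s in patient_scores)
    tn = sum(s < cutoff for s in control_scores)
    sens = tp / len(patient_scores)
    spec = tn / len(control_scores)
    acc = (tp + tn) / (len(patient_scores) + len(control_scores))
    return sens, spec, acc

# Hypothetical RAZ-like scores
patients = [55, 61, 48, 72, 39, 66]
controls = [12, 25, 31, 44, 18, 22]
sens, spec, acc = threshold_stats(patients, controls, cutoff=40)
print(sens, spec, acc)
```

    Sweeping the cutoff and plotting sensitivity against (1 - specificity) at each value is exactly how the ROC curve, and hence the reported area under it, is built.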

  8. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring (SHM) systems. Acoustic emission (AE) is a viable technique that can be used for SHM, and one of its most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and an uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps, which are then used to locate subsequent AE sources. However, operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment was conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed a marked reduction in running time as well as improved accuracy in locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique, as the potential sources of error related to manual manipulation are removed.
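    The "Minimum Difference" step can be illustrated as picking the training-grid node whose stored sensor-pair arrival-time differences best match the observed ones. A hypothetical sketch — the grid nodes, sensor pairs, and delta-T values below are invented, and a real training map would hold many more pairs and nodes:

```python
def locate_min_difference(observed_dt, training_map):
    """Return the grid node whose stored delta-T values (arrival-time
    differences per sensor pair) best match the observed differences,
    scored by the sum of absolute differences."""
    best_node, best_misfit = None, float("inf")
    for node, stored_dt in training_map.items():
        misfit = sum(abs(observed_dt[pair] - stored_dt[pair])
                     for pair in observed_dt)
        if misfit < best_misfit:
            best_node, best_misfit = node, misfit
    return best_node

# training_map[node][(i, j)] = arrival-time difference between sensors i and j
training_map = {
    (0, 0):  {(0, 1): 12.0, (0, 2): -3.0},
    (0, 10): {(0, 1): 4.0,  (0, 2): 6.0},
    (10, 0): {(0, 1): -8.0, (0, 2): -1.0},
}
observed = {(0, 1): 4.5, (0, 2): 5.2}
print(locate_min_difference(observed, training_map))  # → (0, 10)
```

    Because the map is built from artificial sources on the actual structure, geometry and material complexity are baked into the stored delta-T values, which is why no constant-wave-speed assumption is needed.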

  9. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  10. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  11. FFTF Asbestos Location Tracking Program

    SciTech Connect

    Reynolds, J.A.

    1994-09-15

    An Asbestos Location Tracking Program was prepared to list, locate, and determine asbestos content, and to provide a "good faith" baseline for yearly condition inspections of the FFTF Plant, buildings, and grounds.

  12. Estimating the Probability of Negative Events

    ERIC Educational Resources Information Center

    Harris, Adam J. L.; Corner, Adam; Hahn, Ulrike

    2009-01-01

    How well we are attuned to the statistics of our environment is a fundamental question in understanding human behaviour. It seems particularly important to be able to provide accurate assessments of the probability with which negative events occur so as to guide rational choice of preventative actions. One question that arises here is whether or…

  13. Tools for Accurate and Efficient Analysis of Complex Evolutionary Mechanisms in Microbial Genomes. Final Report

    SciTech Connect

    Nakhleh, Luay

    2014-03-12

    I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic as well as biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.

  14. The use of waveform cross correlation for creation of an accurate catalogue of mining explosions within the Russian platform using joint capabilities of seismic array Miknevo and IMS arrays

    NASA Astrophysics Data System (ADS)

    Rozhkov, M.; Kitov, I.; Sanina, I.

    2014-12-01

    For seismic monitoring, the task of finding and identifying the sources of various seismic events becomes increasingly difficult as the size (magnitude, yield, energy) of these events decreases. Firstly, the number of seismic events dramatically increases with falling magnitude - approximately by an order of magnitude per unit of seismic magnitude. Secondly, mining explosions become detectable and represent one of the biggest challenges for monitoring at magnitudes below 3.5 to 4.0. In the current study of mining activity within the Russian platform, we use the advantages of location and historical bulletins/catalogues of mining explosions recorded by the small-aperture seismic array Mikhnevo (MHVAR) and extensive data from several IMS arrays at regional and far-regional distances from the studied area. The Institute of Geosphere Dynamics (IDG) of the Russian Academy of Sciences has operated seismic array MHVAR (54.950 N, 37.767 E) since 2004. Approximately 50 areas with different levels of mining activity have been identified by MHVAR and reported in the IDG catalogue as mining events. Signals from selected mining events detected by MHVAR are sought at IMS arrays. Continuous data from MHVAR and IMS arrays (e.g. AKASG) are processed jointly using the waveform cross correlation technique. This technique allows reducing the detection threshold for repeated events by an order of magnitude, as well as accurately locating and identifying mining explosions. To achieve the highest performance of cross correlation, we have selected the best sets of waveform templates recorded from a carefully tested set of master events for each of the studied mines. We also test the possibility of using Principal and Independent Component Analysis to produce sets of synthetic templates which best fit the whole set of master events for a given mine.
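    The detection step underlying this kind of study amounts to sliding a master-event template along continuous data and flagging windows with high normalized cross-correlation. A minimal pure-Python sketch with toy waveforms (real pipelines operate on band-passed array traces and stack over channels):

```python
import math

def normalized_xcorr(template, trace):
    """Sliding normalized cross-correlation of a master-event template
    against continuous data; values near 1 flag repeating events even
    when their amplitudes differ from the master's."""
    n = len(template)
    t_mean = sum(template) / n
    t0 = [x - t_mean for x in template]
    t_norm = math.sqrt(sum(x * x for x in t0))
    out = []
    for k in range(len(trace) - n + 1):
        w = trace[k:k + n]
        w_mean = sum(w) / n
        w0 = [x - w_mean for x in w]
        w_norm = math.sqrt(sum(x * x for x in w0))
        cc = (sum(a * b for a, b in zip(t0, w0)) / (t_norm * w_norm)
              if w_norm > 0 else 0.0)
        out.append(cc)
    return out

template = [0.0, 1.0, -1.0, 0.5]
trace = [0.0, 0.1, 0.0, 2.0, -2.0, 1.0, 0.0, 0.05]  # contains a scaled copy
cc = normalized_xcorr(template, trace)
print(max(cc), cc.index(max(cc)))  # peak of 1.0 at the embedded copy
```

    Because the correlation is normalized, a repeat event ten times smaller than the master still scores near 1, which is why the method lowers the detection threshold by roughly an order of magnitude relative to energy detectors.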

  15. Impact-Locator Sensor Panels

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Byers, Terry; Gibbons, Frank

    2008-01-01

    Electronic sensor systems for detecting and locating impacts of rapidly moving particles on spacecraft have been invented. Systems of this type could also be useful on Earth in settings in which the occurrence of impacts and/or the locations of impacts are not immediately obvious and there are requirements to detect and quickly locate impacts to prevent or minimize damage.

  16. DIORAMA Location Type User's Guide

    SciTech Connect

    Terry, James Russell

    2015-01-29

    The purpose of this report is to present the current design and implementation of the DIORAMA location type object (LocationType) and to provide examples and use cases. The LocationType object is included in the diorama-app package in the diorama::types namespace. Abstractly, the object is intended to capture the full time history of the location of an object or reference point. For example, a location may be specified as a near-Earth orbit in terms of a two-line element set, in which case the location type is capable of propagating the orbit both forward and backward in time to provide a location for any given time. Alternatively, the location may be specified as a fixed set of geodetic coordinates (latitude, longitude, and altitude), in which case the geodetic location of the object is expected to remain constant for all time. From an implementation perspective, the location type is defined as a union of multiple independent objects defined in the DIORAMA tle library. Types presently included in the union are listed and described in subsections below, and all conversions or transformations between these location types are handled by utilities provided by the tle library, with the exception of the "special-values" location type.
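    The union-of-location-types design can be illustrated with two member types sharing a common position-at-time interface. This is a hypothetical Python analogue, not the actual DIORAMA API: the class and function names are invented, and the TLE member is left as a stub rather than a real orbit propagator (which would use something like SGP4).

```python
from dataclasses import dataclass

@dataclass
class GeodeticLocation:
    """Fixed geodetic coordinates: the same position at every epoch."""
    lat_deg: float
    lon_deg: float
    alt_km: float

    def position(self, t):
        return (self.lat_deg, self.lon_deg, self.alt_km)

@dataclass
class TwoLineElementLocation:
    """Orbit specified by a two-line element set; a real implementation
    would propagate the orbit forward or backward to epoch t."""
    line1: str
    line2: str

    def position(self, t):
        raise NotImplementedError("orbit propagation not sketched here")

def position_at(location, t):
    """Union-style dispatch: any member type answers for any time t."""
    return location.position(t)

site = GeodeticLocation(46.3, -119.3, 0.2)
print(position_at(site, t=0.0))
print(position_at(site, t=3600.0))  # fixed location: identical at any time
```

    The design point this mirrors is that callers ask one question ("where is it at time t?") while each union member answers it its own way, whether by propagating an orbit or returning a constant.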

  17. Spring loaded locator pin assembly

    DOEpatents

    Groll, T.A.; White, J.P.

    1998-03-03

    This invention deals with spring loaded locator pins. Locator pins are sometimes referred to as captured pins. This is a mechanism which locks two items together with the pin that is spring loaded so that it drops into a locator hole on the work piece. 5 figs.

  18. Spring loaded locator pin assembly

    DOEpatents

    Groll, Todd A.; White, James P.

    1998-01-01

    This invention deals with spring loaded locator pins. Locator pins are sometimes referred to as captured pins. This is a mechanism which locks two items together with the pin that is spring loaded so that it drops into a locator hole on the work piece.

  19. Challenges in Forecasting SEP Events

    NASA Astrophysics Data System (ADS)

    Luhmann, Janet; Mays, M. Leila; Odstrcil, Dusan; Bain, Hazel; Li, Yan; Leske, Richard; Cohen, Christina

    2015-04-01

    A long-standing desire of space weather prediction providers has been the ability to forecast SEP (Solar Energetic Particle) events as a part of their offerings. SEPs can have deleterious effects on the space environment and space hardware, and also impact human exploration missions. Developments of observationally driven, physics-based models in the last solar cycle have made it possible to use solar magnetograms and coronagraph images to simulate time series of upstream parameters similar in content to those obtained by L1 spacecraft, up to a month in advance for solar wind structure and up to days in advance for interplanetary Coronal Mass Ejection (ICME) driven shocks. However, SEPs have been missing from these predictions. Because SEP event modeling requires different physical considerations, it has typically been approached with cosmic ray transport concepts and treatments. However, many extra complications arise because of the moving, evolving nature of the ICME shock source of the largest events. In general, a realistic SEP event model for these so-called 'gradual' events requires an accurate description of the time-dependent 3D heliosphere as an underlying framework. We describe some applications of an approach to SEP event simulations that uses the widely-applied ENLIL heliospheric model to describe both underlying solar wind and ICME shock characteristics. Experimentation with this set-up illustrates the importance of knowing the shock connectivity to the observer, and of the need to include even non-observer-impacting CMEs in the heliospheric model. It also provides a possible path forward toward the goal of having routine SEP forecasts together with the other heliospheric predictions.

  20. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
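    The role of the median function in writing the monotonicity constraint compactly can be illustrated with a limited piecewise-linear reconstruction. The sketch below uses one common limiter of this family (a monotonized-central/minmod combination expressed entirely via median), not necessarily the paper's exact constraint:

```python
def median(a, b, c):
    """Middle value of three numbers; note minmod(a, b) == median(0, a, b)."""
    return max(min(a, b), min(max(a, b), c))

def limited_slopes(u):
    """MUSCL-type reconstruction sketch: in each interior cell, take the
    central slope limited toward the one-sided slopes, written entirely
    with the median function. Boundary cells get zero slope."""
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        left = u[i] - u[i - 1]        # backward difference
        right = u[i + 1] - u[i]       # forward difference
        central = 0.5 * (left + right)
        # limit the central slope to twice minmod(left, right)
        s[i] = median(0.0, central, 2.0 * median(0.0, left, right))
    return s

print(limited_slopes([0.0, 0.0, 1.0, 1.0, 1.0]))  # step: no new extrema
print(limited_slopes([0.0, 1.0, 2.0, 3.0, 4.0]))  # linear: exact slopes
```

    At the step, the limiter zeroes the slopes so the reconstruction cannot overshoot (monotonicity preservation); on linear data, it returns the exact slope, which is the second-order-accuracy requirement the abstract mentions.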

  1. Locating influential nodes via dynamics-sensitive centrality

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Lin, Jian-Hong; Guo, Qiang; Zhou, Tao

    2016-02-01

    With great theoretical and practical significance, locating influential nodes of complex networks is a promising issue. In this paper, we present a dynamics-sensitive (DS) centrality by integrating topological features and dynamical properties. The DS centrality can be directly applied in locating influential spreaders. According to the empirical results on four real networks for both susceptible-infected-recovered (SIR) and susceptible-infected (SI) spreading models, the DS centrality is more accurate than degree, k-shell index and eigenvector centrality.
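    The idea behind a dynamics-sensitive score — rank a node by the expected spreading it seeds within a few time steps of an SI-like process with transmission rate beta — can be sketched by propagating powers of beta times the adjacency matrix. A toy illustration; the paper's exact DS definition may differ in detail:

```python
def ds_centrality(adj, beta, steps):
    """For each seed node, sum the components of (beta*A)^t applied to the
    seed indicator over t = 1..steps: a proxy for early SI spreading power."""
    n = len(adj)
    score = [0.0] * n
    for seed in range(n):
        v = [1.0 if j == seed else 0.0 for j in range(n)]
        for _ in range(steps):
            # one step of propagation: v <- beta * A v
            v = [beta * sum(adj[j][k] * v[k] for k in range(n)) for j in range(n)]
            score[seed] += sum(v)
    return score

# A 4-node path graph 0-1-2-3: interior nodes should rank higher
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
print(ds_centrality(adj, beta=0.3, steps=3))
```

    Even in this tiny graph the interior nodes (1 and 2) outscore the endpoints, matching the intuition that a spreading-aware centrality couples topology with the dynamics' rate and horizon rather than degree alone.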

  2. Instrument accurately measures weld angle and offset

    NASA Technical Reports Server (NTRS)

    Boyd, W. G.

    1967-01-01

    Weld angle is measured to the nearest arc minute and offset to one thousandth of an inch by an instrument designed to use a reference plane at two locations on a test coupon. A special table for computation has been prepared for use with the instrument.

  3. Identifying structures in clouds of induced microseismic events

    SciTech Connect

    Fehler, M.; House, L.; Phillips, W.S.

    1997-07-01

    A method for finding improved relative locations of microearthquakes accompanying fluid production and injection is presented. The method is based on the assumption that the microearthquake locations are more clustered than found when events are located using conventional techniques. By allowing the rms misfit between measured arrival times and predicted arrival times to increase if events move closer together, the authors find that there is more structure in the pattern of seismic locations. The method is demonstrated using a dataset of microearthquakes induced by hydraulic fracturing. The authors find that structures found using relative arrival times of events having similar waveforms to find improved relative locations of events can also be recovered using the new inversion method but without the laborious repicking procedure. The method provides improved relative locations and hence, an improved image of the structure within the seismic zone that may allow for a better relation between microearthquake locations and zones of increased fluid permeability to be found.
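    The core trade-off in this inversion — letting misfit grow slightly as events move closer together — can be caricatured as nudging each hypocenter toward the cluster centroid while never straying beyond its own location uncertainty. A hypothetical 2-D sketch, not the authors' algorithm:

```python
import math

def collapse_locations(events, sigma, iterations=20):
    """'Collapsing' sketch: nudge each epicenter toward the centroid of the
    cloud, but never farther than sigma from its original location, so the
    implied misfit stays statistically acceptable while clustering sharpens."""
    pts = [list(p) for p in events]
    orig = [list(p) for p in events]
    for _ in range(iterations):
        centroid = [sum(p[d] for p in pts) / len(pts) for d in range(2)]
        for p, o in zip(pts, orig):
            trial = [p[d] + 0.1 * (centroid[d] - p[d]) for d in range(2)]
            # accept the move only while within sigma of the original location
            if math.dist(trial, o) <= sigma:
                p[:] = trial
    return [tuple(p) for p in pts]

events = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
print(collapse_locations(events, sigma=0.5))  # tighter cloud, bounded moves
```

    The scatter shrinks, but every relocated point remains within its error bound of the conventional location, which is the spirit of accepting a small rms increase in exchange for revealed structure.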

  4. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  5. Are Kohn-Sham conductances accurate?

    PubMed

    Mera, H; Niquet, Y M

    2010-11-19

    We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333

  6. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, Delta H 0 (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).

  7. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  8. CT-Analyst: fast and accurate CBR emergency assessment

    NASA Astrophysics Data System (ADS)

    Boris, Jay; Fulton, Jack E., Jr.; Obenschain, Keith; Patnaik, Gopal; Young, Theodore, Jr.

    2004-08-01

    An urban-oriented emergency assessment system for airborne Chemical, Biological, and Radiological (CBR) threats, called CT-Analyst and based on new principles, gives greater accuracy and much greater speed than possible with current alternatives. This paper explains how this has been done. The increased accuracy derives from detailed, three-dimensional CFD computations including solar heating, buoyancy, complete building geometry specification, trees, wind fluctuations, and particle and droplet distributions (as appropriate). This paper shows how a finite number of such computations for a given area can be extended to all wind directions and speeds, and all likely sources and source locations, using a new data structure called Dispersion Nomographs. Finally, we demonstrate a portable, entirely graphical software tool called CT-Analyst that embodies this entirely new, high-resolution technology and runs effectively on small personal computers. Real-time users don't have to wait for results because accurate answers are available with near-zero latency (that is, 10-20 scenarios per second). Entire sequences of cases (e.g. a continuously changing source location or wind direction) can be computed and displayed as continuous-action movies. Since the underlying database has been precomputed, the door is wide open for important new real-time, zero-latency functions such as sensor data fusion, backtracking to an unknown source location, and even evacuation route planning. Extensions of the technology to sensor location optimization, buildings, tunnels, and integration with other advanced technologies, e.g. micrometeorology or detailed wind field measurements, will be discussed briefly here.

  9. Efficient and Accurate Indoor Localization Using Landmark Graphs

    NASA Astrophysics Data System (ADS)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) by a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error less than 2.5 meters, which outperforms the other two DR-based methods.
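    The landmark-correction idea can be sketched as dead reckoning that snaps to a landmark's surveyed position whenever one is detected, bounding the accumulated drift. A toy example with invented step lengths and a single hypothetical door landmark (a real system would also use the graph's edges and headings to disambiguate which landmark was hit):

```python
import math

def dead_reckon_with_landmarks(start, steps, landmark_events, landmarks):
    """Pedestrian dead-reckoning sketch: integrate per-step displacement,
    and reset the estimate to a landmark's known position whenever one
    (e.g. a door or staircase) is detected at that step."""
    x, y = start
    track = []
    for i, (step_len, heading_rad) in enumerate(steps):
        x += step_len * math.cos(heading_rad)
        y += step_len * math.sin(heading_rad)
        if i in landmark_events:                   # landmark detected here
            x, y = landmarks[landmark_events[i]]   # snap to surveyed position
        track.append((x, y))
    return track

landmarks = {"door_A": (5.0, 0.0)}           # surveyed position, meters
steps = [(1.1, 0.0)] * 8                     # step length overestimated by 10%
events = {4: "door_A"}                       # the door is sensed at step 5
track = dead_reckon_with_landmarks((0.0, 0.0), steps, events, landmarks)
print(track[-1])
```

    Without the snap, the 10% step-length bias would accumulate over all eight steps; with it, only the post-landmark steps contribute error, which is why landmark density (the graph's node spacing) governs achievable accuracy.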

  10. The Chelyabinsk event

    NASA Astrophysics Data System (ADS)

    Borovička, Jiri

    2015-08-01

    On February 15, 2013, 3:20 UT, an asteroid about 19 meters in size and with a mass of 12,000 metric tons entered the Earth's atmosphere unexpectedly near the border of Kazakhstan and Russia. It was the largest confirmed Earth impactor since the Tunguska event in 1908. The body moved approximately westwards with a speed of 19 km/s, on a trajectory inclined 18 degrees to the surface, creating a fireball of steadily increasing brightness. Eleven seconds after the first sightings, the fireball reached its maximum brightness. At that point, it was located less than 40 km south of Chelyabinsk, a Russian city with a population of more than one million, at an altitude of 30 km. For people directly underneath, the fireball was 30 times brighter than the Sun. The cosmic body disrupted into fragments; the largest of them was visible for another five seconds before it disappeared at an altitude of 12.5 km, by which time it had decelerated to 3 km/s. Fifty-six seconds later, that ~600 kg fragment landed in Lake Chebarkul and created an 8 m wide hole in the ice. More material remained, however, in the atmosphere, forming a dust trail up to 2 km wide and extending along the fireball trajectory from altitudes of 18 to 70 km. People observing the dust trail from Chelyabinsk and other places were surprised by the arrival of a very strong blast wave 90-150 s after the fireball passage (depending on location). The wave, produced by the supersonic flight of the body, broke ~10% of windows in Chelyabinsk (~40% of buildings were affected). More than 1600 people were injured, mostly by broken glass. Small meteorites landed in an area 60 km long and several km wide and caused no damage. The meteorites were classified as LL ordinary chondrites and were interesting for the presence of two phases, light and dark. The dust left in the atmosphere circled the Earth within a few days and formed a ring around the northern hemisphere. The whole event was well documented by video cameras, seismic and infrasonic

  11. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    SciTech Connect

    Helmberger, D; Tromp, J; Rodgers, A

    2007-07-16

    Comprehensive test ban monitoring in terms of location and discrimination has progressed significantly in recent years. However, the characterization of sources and the estimation of low yields remains a particular challenge. As the recent Korean shot demonstrated, we can probably expect to have a small set of teleseismic, far-regional and high-frequency regional data to analyze in estimating the yield of an event. Since stacking helps to bring signals out of the noise, it becomes useful to conduct comparable analyses on neighboring events, earthquakes in this case. If these auxiliary events have accurate moments and source descriptions, we have a means of directly comparing effective source strengths. Although we will rely on modeling codes, 1D, 2D, and 3D, we will also apply a broadband calibration procedure to use longer-period (P > 5 s) waveform data to calibrate short-period (P between 0.5 and 2 Hz) and high-frequency (P between 2 and 10 Hz) path-specific station corrections from well-known regional sources. We have expanded our basic Cut-and-Paste (CAP) methodology to include not only timing shifts but also amplitude (f) corrections at recording sites. The name of this method was derived from source inversions that allow timing shifts between 'waveform segments' (or cutting the seismogram up and re-assembling) to correct for crustal variation. For convenience, we will refer to these f-dependent refinements as CAP+ for (SP) and CAP++ for still higher frequency. These methods allow the retrieval of source parameters using only P-waveforms where radiation patterns are obvious, as demonstrated in this report, and are well suited for explosion P-wave data. The method is easily extended to all distances because it uses Green's functions, although there may be some changes required in t* to adjust for offsets between local vs. teleseismic distances. In short, we use a mixture of model-dependent and empirical corrections to tackle the path effects. 
Although we rely on the
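The timing-shift element of a CAP-style inversion can be illustrated with a minimal sketch: search for the shift that best aligns a synthetic waveform segment with the observed one by brute-force cross-correlation. The function below is illustrative only, not the authors' CAP code, and it uses a circular shift for simplicity.

```python
import numpy as np

def best_time_shift(obs, syn, dt, max_shift):
    """Brute-force search for the time shift (in seconds) that maximizes
    the cross-correlation between an observed and a synthetic waveform
    segment. np.roll is a circular shift, acceptable for a sketch with
    zero-padded segments; production code would pad explicitly."""
    n = int(round(max_shift / dt))
    shifts = range(-n, n + 1)
    best = max(shifts, key=lambda s: float(np.dot(obs, np.roll(syn, s))))
    return best * dt
```

In a segment-wise inversion, a shift like this would be estimated independently for each waveform segment (e.g. Pnl versus surface waves) to absorb unmodeled crustal variation.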

  12. Location-based Web Search

    NASA Astrophysics Data System (ADS)

    Ahlers, Dirk; Boll, Susanne

    In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.
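The extractor stage of the crawler/extractor/indexer pipeline can be sketched as a gazetteer lookup over page text. The place names and coordinates below are toy stand-ins for the heuristics and place database the chapter describes.

```python
import re

# Toy gazetteer; a real system would use a comprehensive place database.
GAZETTEER = {
    "oldenburg": (53.14, 8.21),
    "bremen": (53.08, 8.80),
}

def extract_locations(text):
    """Return the geo-context of a page as {place: (lat, lon)} by matching
    known place names; an illustrative stand-in for the location extractor."""
    found = {}
    for name, coords in GAZETTEER.items():
        if re.search(r"\b" + re.escape(name) + r"\b", text, re.IGNORECASE):
            found[name] = coords
    return found
```

An indexer would then attach the returned coordinates to the page record so that spatial queries can rank pages by distance to the query location.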

  13. The importance of accurate convergence in addressing stereoscopic visual fatigue

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.

    2015-03-01

    Visual fatigue (asthenopia) continues to be a problem in extended viewing of stereoscopic imagery. Poorly converged imagery may contribute to this problem. In 2013, the Author reported that in a study sample a surprisingly high number of 3D feature films released as stereoscopic Blu-rays contained obvious convergence errors [1]. The placement of stereoscopic image convergence can be an "artistic" call, but upon close examination, the sampled films seemed to have simply missed their intended convergence location. This failure may be because some stereoscopic editing tools do not have the necessary fidelity to enable a 3D editor to obtain a high degree of image alignment or set an exact point of convergence. Compounding this matter further is the fact that a large number of stereoscopic editors may not believe that pixel-accurate alignment and convergence are necessary. The Author asserts that setting a pixel-accurate point of convergence on an object at the start of any given stereoscopic scene will improve the viewer's ability to fuse the left and right images quickly. The premise is that stereoscopic performance (acuity) increases when an accurately converged object is available in the image for the viewer to fuse immediately. Furthermore, this increased viewer stereoscopic performance should reduce the amount of visual fatigue associated with longer-term viewing because less mental effort will be required to perceive the imagery. To test this concept, we developed special stereoscopic imagery to measure viewer visual performance with and without specific objects for convergence. The Company Team conducted a series of visual tests with 24 participants between 25 and 60 years of age. This paper reports the results of these tests.

  14. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing-element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The problem is a simply supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
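A one-dimensional analogue of the smoothing step can be sketched as a penalized least-squares problem: fit nodal values to the discrete strain data while penalizing the first gradient. The actual method uses a C1-continuous smoothing-element discretization in two dimensions; this sketch only conveys the variational idea.

```python
import numpy as np

def smooth_strains(d, lam=1.0):
    """Penalized least-squares smoothing of discrete strain values d:
    minimize ||f - d||^2 + lam * ||D f||^2, where D is the first-difference
    operator. The normal equations give (I + lam * D^T D) f = d."""
    n = len(d)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, np.asarray(d, float))
```

Increasing `lam` trades fidelity to the raw strain data for smaller gradients, the same trade-off the penalty-constraint functional controls in the full formulation.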

  15. Accurate 3D quantification of the bronchial parameters in MDCT

    NASA Astrophysics Data System (ADS)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.

  16. Time forecast of a break-off event from a hanging glacier

    NASA Astrophysics Data System (ADS)

    Faillettaz, Jérome; Funk, Martin; Vagliasindi, Marco

    2016-06-01

    A cold hanging glacier located on the south face of the Grandes Jorasses (Mont Blanc, Italy) broke off on 23 and 29 September 2014 with a total estimated ice volume of 105 000 m3. Thanks to accurate surface displacement measurements taken up to the final break-off, this event was successfully predicted 10 days in advance, enabling local authorities to take the necessary safety measures. The break-off event also confirmed that surface displacements experienced a power-law acceleration along with superimposed log-periodic oscillations prior to the final rupture. This paper describes the methods used to achieve a satisfactory time forecast in real time and demonstrates, through a retrospective analysis, their potential for the development of real-time early-warning systems.
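The forecasting idea rests on the power-law acceleration of surface displacements. A minimal sketch is the classic inverse-velocity extrapolation: if the displacement rate diverges as a power law toward the rupture time, its reciprocal decays roughly linearly to zero. The study's actual procedure also fits log-periodic oscillations, which are omitted here.

```python
import numpy as np

def forecast_failure_time(t, v):
    """Inverse-velocity forecast: for power-law acceleration
    v ~ (t_c - t)^(-1), the inverse rate 1/v decays linearly and hits
    zero at the break-off time t_c. Fit a line to 1/v and extrapolate
    to its zero crossing."""
    slope, intercept = np.polyfit(np.asarray(t, float),
                                  1.0 / np.asarray(v, float), 1)
    return -intercept / slope
```

In an operational setting the fit would be updated as each new displacement measurement arrives, with the forecast stabilizing as the rupture approaches.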

  17. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  18. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  19. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184

  20. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as Keff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  1. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  2. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  3. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  4. Global Seismic Event Detection Using Surface Waves: 15 Possible Antarctic Glacial Sliding Events

    NASA Astrophysics Data System (ADS)

    Chen, X.; Shearer, P. M.; Walker, K. T.; Fricker, H. A.

    2008-12-01

    To identify overlooked or anomalous seismic events not listed in standard catalogs, we have developed an algorithm to detect and locate global seismic events using intermediate-period (35-70 s) surface waves. We apply our method to continuous vertical-component seismograms from the global seismic networks as archived in the IRIS UV FARM database from 1997 to 2007. We first bandpass filter the seismograms, apply automatic gain control, and compute envelope functions. We then examine 1654 target event locations defined at 5 degree intervals and stack the seismogram envelopes along the predicted Rayleigh-wave travel times. The resulting function has spatial and temporal peaks that indicate possible seismic events. We visually check these peaks using a graphical user interface to eliminate artifacts and assign an overall reliability grade (A, B or C) to the new events. We detect 78% of events in the Global Centroid Moment Tensor (CMT) catalog. However, we also find 840 new events not listed in the PDE, ISC and REB catalogs. Many of these new events were previously identified by Ekstrom (2006) using a different Rayleigh-wave detection scheme. Most of these new events are located along oceanic ridges and transform faults. Some new events can be associated with volcanic eruptions such as the 2000 Miyakejima sequence near Japan and others with apparent glacial sliding events in Greenland (Ekstrom et al., 2003). We focus our attention on 15 events detected from near the Antarctic coastline and relocate them using a cross-correlation approach. The events occur in 3 groups which are well-separated from areas of cataloged earthquake activity. We speculate that these are iceberg calving and/or glacial sliding events, and hope to test this by inverting for their source mechanisms and examining remote sensing data from their source regions.
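The detection step can be sketched as envelope stacking: shift each station's envelope back by the predicted Rayleigh-wave travel time to a trial source and average, so a coherent event produces a peak at its origin time. Filtering, gain control, the target grid, and dispersion are all simplified away here; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def stack_envelopes(envelopes, travel_times, dt):
    """Shift each station envelope back by its predicted travel time
    (in seconds) and average; a coherent arrival stacks constructively
    at the event origin time. Circular shifts keep the sketch short."""
    n = min(len(e) for e in envelopes)
    stack = np.zeros(n)
    for env, tt in zip(envelopes, travel_times):
        stack += np.roll(np.asarray(env)[:n], -int(round(tt / dt)))
    return stack / len(envelopes)
```

Repeating this for every target location on the grid yields the space-time detection function whose peaks are candidate events.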

  5. Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks

    NASA Astrophysics Data System (ADS)

    Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.

    Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is important to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters the false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high resolution location of a vehicle by refining low resolution location input. We also investigate results of simulations and evaluate the quality of the intrusion-tolerant location service.
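A simple robust filter conveys the flavor of rejecting injected false positions: discard reports that deviate from the median by more than a few median absolute deviations. This is an illustrative stand-in for the paper's adaptive algorithm, which is developed for noisy mobile settings; the threshold factor is an assumption.

```python
import statistics

def filter_false_locations(reports, k=3.0):
    """Reject location reports (here 1-D coordinates) whose deviation from
    the median exceeds k times the median absolute deviation (MAD).
    A basic robust-statistics sketch of intrusion-tolerant filtering."""
    med = statistics.median(reports)
    mad = statistics.median(abs(r - med) for r in reports) or 1e-9
    return [r for r in reports if abs(r - med) <= k * mad]
```

The surviving reports could then be averaged to refine a low-resolution position estimate, since the median and MAD are insensitive to a minority of injected outliers.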

  6. Verb Aspect and the Activation of Event Knowledge

    ERIC Educational Resources Information Center

    Ferretti, Todd R.; Kutas, Marta; McRae, Ken

    2007-01-01

    The authors show that verb aspect influences the activation of event knowledge with 4 novel results. First, common locations of events (e.g., arena) are primed following verbs with imperfective aspect (e.g., was skating) but not verbs with perfect aspect (e.g., had skated). Second, people generate more locative prepositional phrases as…

  7. Fast and robust microseismic event detection using very fast simulated annealing

    NASA Astrophysics Data System (ADS)

    Velis, Danilo R.; Sabbione, Juan I.; Sacchi, Mauricio D.

    2013-04-01

    The study of microseismic data has become an essential tool in many geoscience fields, including oil reservoir geophysics, mining and CO2 sequestration. In hydraulic fracturing, microseismicity studies permit the characterization and monitoring of the reservoir dynamics in order to optimize the production and the fluid injection process itself. As the number of events is usually large and the signal-to-noise ratio is in general very low, fast, automated, and robust detection algorithms are required for most applications. Also, real-time functionality is commonly needed to control the fluid injection in the field. Generally, events are located by means of grid-search algorithms that rely on some approximate velocity model. These techniques are very effective and accurate, but computationally intensive when dealing with large three- or four-dimensional grids. Here, we present a fast and robust method that automatically detects and picks an event in 3C microseismic data without any input information about the velocity model. The detection is carried out by means of a very fast simulated annealing (VFSA) algorithm. To this end, we define an objective function that measures the energy of a potential microseismic event along the multichannel signal. This objective function is based on the stacked energy of the envelope of the signals calculated within a predefined narrow time window that depends on the source position, receiver geometry and velocity. Once an event has been detected, the source location can be estimated, in a second stage, by inverting the corresponding traveltimes using a standard technique, which would naturally require some knowledge of the velocity model. Since the proposed technique focuses on the detection of the microseismic events only, the velocity model is not required, leading to a fast algorithm that carries out the detection in real time.
Besides, the strategy is applicable to data with very low signal-to-noise ratios, for it relies
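The VFSA optimizer at the heart of such a detector can be sketched in one dimension: an exponentially decaying temperature combined with fat-tailed neighbor steps, so occasional large jumps persist even when the search has cooled. The objective would be something like the stacked envelope energy; the schedule constants below are illustrative assumptions, not the authors' settings.

```python
import math
import random

def vfsa_maximize(f, lo, hi, t0=1.0, iters=3000, seed=1):
    """Very fast simulated annealing (VFSA) in one dimension: exponential
    cooling with Cauchy-like neighbor generation, accepting worse moves
    with Boltzmann probability. Returns the best point and value found."""
    rng = random.Random(seed)
    x = 0.5 * (lo + hi)
    fx = f(x)
    best_x, best_f = x, fx
    for k in range(1, iters + 1):
        T = t0 * math.exp(-5.0 * k / iters)          # cooling schedule
        u = rng.random()
        # Fat-tailed step: mostly small moves, occasional large jumps.
        step = math.copysign(T * ((1.0 + 1.0 / T) ** abs(2 * u - 1) - 1.0),
                             u - 0.5) * (hi - lo)
        y = min(hi, max(lo, x + step))
        fy = f(y)
        if fy > fx or rng.random() < math.exp((fy - fx) / T):
            x, fx = y, fy
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

In the detection setting, `f` would measure stacked envelope energy as a function of trial origin time (and, in higher dimensions, trial source position), so no velocity-model inversion is needed at the detection stage.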

  8. High precision Differential Earthquake Location in 3D models: Evidence for a rheological barrier controlling the microseismicity at the Irpinia fault zone in southern Apennines

    NASA Astrophysics Data System (ADS)

    De Landro, Grazia; Amoroso, Ortensia; Alfredo Stabile, Tony; Matrullo, Emanuela; Lomax, Anthony; Zollo, Aldo

    2015-04-01

    A non-linear, global-search, probabilistic, double-difference earthquake location technique is illustrated. The main advantages of this method are the determination of comprehensive and complete solutions through the probability density function (PDF), the use of differential arrival times as data, and the possibility to use a 3D velocity model for both absolute and relative locations, which is essential to obtain accurate differential locations in structurally complex geological media. The joint use of this methodology and an accurate differential-time data set allowed us to carry out a high-resolution earthquake location analysis, which helped to characterize the active fault geometries in the studied region. We investigated the recent micro-seismicity occurring at the Campanian-Lucanian Apennines, in the crustal volume embedding the fault system which generated the 1980, M 6.9 earthquake in Irpinia. In order to obtain highly accurate seismicity locations we applied the method to the P and S arrival time data set from 1312 events (M < 3) that occurred from August 2005 to April 2011, and used the 3D P- and S-wave velocity models optimized for the area under study. Both catalogue and cross-correlation first-arrival times have been used. The refined seismicity locations show that the events occur in a volume delimited by the faults activated during the 1980 Irpinia M 6.9 earthquake on sub-parallel, predominantly normal faults. We observe an abrupt interruption of the seismicity across a SW-NE oriented structural discontinuity, corresponding to a contact zone between rock formations of different rheology (carbonate platform and basin residuals). This "barrier" appears to be located in the area bounded by the fault segments activated during the first (0 s) and the second (20 s) rupture episodes of the 1980 Irpinia earthquake. 
We hypothesize that this geometrical barrier could have played a key role during the 1980 Irpinia event, and possibly controlled the delayed times of

  9. High-precision differential earthquake location in 3-D models: evidence for a rheological barrier controlling the microseismicity at the Irpinia fault zone in southern Apennines

    NASA Astrophysics Data System (ADS)

    De Landro, Grazia; Amoroso, Ortensia; Stabile, Tony Alfredo; Matrullo, Emanuela; Lomax, Antony; Zollo, Aldo

    2015-12-01

    A non-linear, global-search, probabilistic, double-difference earthquake location technique is illustrated. The main advantages of this method are the determination of comprehensive and complete solutions through the probability density function (PDF), the use of differential arrival times as data and the possibility to use a 3-D velocity model both for absolute and double-difference locations, all of which help to obtain accurate differential locations in structurally complex geological media. The joint use of this methodology and an accurate differential time data set allowed us to carry out a high-resolution, earthquake location analysis, which helps to characterize the active fault geometries in the studied region. We investigated the recent microseismicity occurring at the Campanian-Lucanian Apennines in the crustal volume embedding the fault system that generated the 1980 MS 6.9 earthquake in Irpinia. In order to obtain highly accurate seismicity locations, we applied the method to the P and S arrival time data set from 1312 events (ML < 3.1) that occurred from August 2005 to April 2011 and used the 3-D P- and S-wave velocity models optimized for the area under study. Both manually refined and cross-correlation refined absolute arrival times have been used. The refined seismicity locations show that the events occur in a volume delimited by the faults activated during the 1980 MS 6.9 Irpinia earthquake on subparallel, predominantly normal faults. We find an abrupt interruption of the seismicity across an SW-NE oriented structural discontinuity corresponding to a contact zone between different rheology rock formations (carbonate platform and basin residuals). This `barrier' appears to be located in the area bounded by the fault segments activated during the first (0 s) and the second (18 s) rupture episodes of the 1980 Irpinia earthquake. We hypothesize that this geometrical barrier could have played a key role during the 1980 Irpinia event, and possibly

  10. Grid-Search Location Methods for Ground-Truth Collection from Local and Regional Seismic Networks

    SciTech Connect

    Schultz, C A; Rodi, W; Myers, S C

    2003-07-24

    The objective of this project is to develop improved seismic event location techniques that can be used to generate more and better-quality reference events using data from local and regional seismic networks. Their approach is to extend existing methods of multiple-event location with more general models of the errors affecting seismic arrival time data, including picking errors and errors in model-based travel times (path corrections). Toward this end, they are integrating a grid-search-based algorithm for multiple-event location (GMEL) with a new parameterization of travel-time corrections and a new kriging method for estimating the correction parameters from observed travel-time residuals. Like several other multiple-event location algorithms, GMEL currently assumes event-independent path corrections and is thus restricted to small event clusters. The new parameterization assumes that travel-time corrections are a function of both the event and station locations, and builds in source-receiver reciprocity and correlation between the corrections from proximate paths as constraints. The new kriging method simultaneously interpolates travel-time residuals from multiple stations and events to estimate the correction parameters as functions of position. They are currently developing the algorithmic extensions to GMEL needed to combine the new parameterization and kriging method with the simultaneous location of events. The result will be a multiple-event location method applicable to non-clustered, spatially well-distributed events. They are applying the existing components of the new multiple-event location method to a data set of regional and local arrival times from Nevada Test Site (NTS) explosions with known origin parameters. Preliminary results show the feasibility and potential benefits of combining the location and kriging techniques. They also show some preliminary work on generalizing the error model used in GMEL with the use of mixture
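The grid-search kernel of a GMEL-style locator can be sketched for a single event in 2D with constant velocity: at each trial node, solve for the best origin time in closed form and keep the node with the smallest misfit. This is a drastic simplification of the multiple-event, path-corrected algorithm described above; station geometry, velocity, and the grid are all placeholders.

```python
import numpy as np

def grid_search_locate(stations, obs_times, v, grid):
    """Single-event epicentral grid search. For each trial node, the
    predicted travel times are hypocentral distances over velocity v;
    the optimal origin time for that node is the mean residual, and the
    node minimizing the L2 misfit of arrival times is returned."""
    best, best_mis = None, np.inf
    for x, y in grid:
        tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
        t0 = np.mean(obs_times - tt)          # closed-form origin time
        mis = float(np.sum((obs_times - (t0 + tt)) ** 2))
        if mis < best_mis:
            best, best_mis = (x, y), mis
    return best, best_mis
```

In the full method, the predicted times would include kriged path corrections, and many events would be located simultaneously under shared correction parameters.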

  11. Episodes, events, and models.

    PubMed

    Khemlani, Sangeet S; Harrison, Anthony M; Trafton, J Gregory

    2015-01-01

    We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models. Second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning. PMID:26578934

  12. Episodes, events, and models

    PubMed Central

    Khemlani, Sangeet S.; Harrison, Anthony M.; Trafton, J. Gregory

    2015-01-01

    We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models. Second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning. PMID:26578934

  13. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  14. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  15. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  16. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  17. The thermodynamic cost of accurate sensory adaptation

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2015-03-01

    Living organisms need to obtain and process environmental information accurately in order to make decisions critical for their survival. Much progress has been made in identifying key components responsible for various biological functions; however, major challenges remain in understanding system-level behaviors from molecular-level knowledge of biology and in unraveling possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent work on understanding the chemical sensory system of E. coli by combining theoretical approaches with quantitative experiments. We focus on the questions of how cells process chemical information and adapt to a varying environment, and on the thermodynamic limits of key regulatory functions, such as adaptation.

  18. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Nasir Uddin, Khan; Nadeem Alam, Khan

    2014-12-01

    The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u'' + lambda u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for any arbitrary power of n and m. The method converts the differential equation to sets of algebraic equations, which are solved numerically. Results are presented for three different cases: a higher order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
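    As a generic companion to the abstract above (not the authors' algebraic method), a conservative oscillator can be integrated directly in time and checked against its conserved energy. The sketch below uses the representative Duffing-type equation u'' + u + u^3 = 0 as an assumed stand-in for the more general form.

```python
# Generic reference integration (not the authors' method): classic RK4 on
# the conservative Duffing-type oscillator u'' + u + u^3 = 0, whose energy
# E = v^2/2 + u^2/2 + u^4/4 is conserved and so serves as an accuracy check.

def rhs(u, v):
    return v, -(u + u ** 3)

def rk4_step(u, v, h):
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
    k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
    k4u, k4v = rhs(u + h * k3u, v + h * k3v)
    u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
    v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return u, v

def energy(u, v):
    return 0.5 * v ** 2 + 0.5 * u ** 2 + 0.25 * u ** 4

u, v, h = 1.0, 0.0, 0.005
e0 = energy(u, v)                  # 0.75 for u=1, v=0
for _ in range(10_000):            # integrate 50 time units
    u, v = rk4_step(u, v, h)
drift = abs(energy(u, v) - e0)     # small energy drift indicates accuracy
```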

  19. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical, or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field of view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.
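    The basic static-tilt principle behind accelerometer-based mount positioning can be sketched as follows. The axis conventions, and the reduction to a single elevation angle, are illustrative assumptions rather than the authors' calibration pipeline.

```python
import math

# Hypothetical static-tilt model: with the telescope at rest, a 3-axis
# MEMS accelerometer senses only gravity, so the mount's elevation angle
# follows from the measured components. Axis conventions are assumptions.

def elevation_deg(ax, ay, az):
    """Elevation of the sensor's x-axis above the horizontal plane."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

def arcmin_error(measured_deg, commanded_deg):
    """Pointing error in arcminutes (1 degree = 60 arcminutes)."""
    return abs(measured_deg - commanded_deg) * 60.0

# Sensor tilted 30 degrees: ax = g*sin(30 deg), az = g*cos(30 deg)
g = 9.81
el = elevation_deg(g * math.sin(math.radians(30)), 0.0,
                   g * math.cos(math.radians(30)))
```

    In this toy geometry a 0.01-degree readout error corresponds to 0.6 arcminutes, i.e. already within the subarcminute range the abstract targets.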

  20. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  1. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  2. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  3. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  4. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy, and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper demonstrates the importance of using validated models and locally measured meteorological, atmospheric, and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  5. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  6. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, humidity as a function of height. I use the Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
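    The radiative-transfer step this abstract describes can be caricatured with a single isothermal layer (the operational system integrates over 60 model layers). The atmospheric temperature, receiver temperature, and plane-parallel airmass below are illustrative assumptions.

```python
import math

# Simplified single-layer sketch of the radiative-transfer step: an
# isothermal atmosphere at temperature t_atm with zenith opacity tau
# contributes T_b = t_atm * (1 - exp(-tau * A)) to the system temperature,
# where A ~ 1/sin(elevation) is the plane-parallel airmass. The receiver
# temperature and t_atm values are illustrative assumptions.

def sky_brightness(tau_zenith, elevation_deg, t_atm=260.0):
    """Radio brightness (K) of the atmosphere at a given elevation."""
    airmass = 1.0 / math.sin(math.radians(elevation_deg))
    return t_atm * (1.0 - math.exp(-tau_zenith * airmass))

def tsys(tau_zenith, elevation_deg, t_rx=25.0, t_atm=260.0):
    """System temperature: receiver plus atmospheric contribution."""
    return t_rx + sky_brightness(tau_zenith, elevation_deg, t_atm)
```

    Even a modest zenith opacity of 0.05 Neper adds roughly 18 K at 45° elevation in this toy model, illustrating why forecasted opacities feed directly into Tsys and calibration.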

  7. Cobalt processing - flask positioner location sensing system

    SciTech Connect

    Braun, P.F.

    1986-01-01

    Canada deuterium uranium (CANDU) reactors offer unique opportunities for economical production of 60Co in the adjuster rods used for xenon override and maximization of core output. Cobalt is effectively a by-product in CANDU reactors when the standard stainless steel adjuster rods are replaced with cobalt adjuster rods. The Flask Positioner unit is part of the cobalt adjuster element processing system (CAEPS) equipment, which is used for removing irradiated cobalt adjuster elements from the reactor and safely transporting them to the irradiated fuel bay, where they are dismantled and prepared for shipment. The flask positioner equipment is similar to a crane; it carries the CAEPS flask and locates it in an accurate position concentric with any adjuster site centerline. This enables the required operations for safe transfer of the irradiated adjuster element into the flask. The positioner is located above the reactivity mechanism deck. The CAEPS system has been made operational on several CANDU reactors. The location sensing system has been demonstrated to work very satisfactorily on all installations.

  8. Evaluation of workplace air monitoring locations

    SciTech Connect

    Stoetzel, G.A.; Cicotte, G.R.; Lynch, T.P.; Aldrich, L.K.

    1991-10-01

    Current federal guidance on occupational radiation protection recognizes the importance of conducting air flow studies to assist in the placement of air sampling and monitoring equipment. In support of this, Pacific Northwest Laboratory has provided technical assistance to Westinghouse Hanford Company for the purpose of evaluating the adequacy of air sampling and monitoring locations at selected Hanford facilities. Qualitative air flow studies were performed using smoke aerosols to visually determine air movement. Three examples are provided of how air flow study results, along with information on the purpose of the air sample being collected, were used as a guide in placing the air samplers and monitors. Preparatory steps in conducting an air flow study should include: (1) identifying the type of work performed in the work area, including any actual or potential release points; (2) determining the amounts of radioactive material available for release and its chemical and physical form; (3) obtaining accurate work area descriptions and diagrams; (4) identifying the location of existing air samplers and monitors; (5) documenting physical and ventilation configurations; (6) notifying appropriate staff of the test; and (7) obtaining necessary equipment and supplies. The primary steps in conducting an air flow study are measurement of air velocities in the work area, release of the smoke aerosol at selected locations and observation of the resulting air flow patterns, and finally evaluation and documentation of the results. 2 refs., 3 figs.

  9. Accurate analysis of multicomponent fuel spray evaporation in turbulent flow

    NASA Astrophysics Data System (ADS)

    Rauch, Bastian; Calabria, Raffaela; Chiariello, Fabio; Le Clercq, Patrick; Massoli, Patrizio; Rachner, Michael

    2012-04-01

    The aim of this paper is to perform an accurate analysis of the evaporation of single component and binary mixture fuel sprays in a hot, weakly turbulent pipe flow by means of experimental measurement and numerical simulation. This gives a deeper insight into the relationship between fuel composition and spray evaporation. The turbulence intensity in the test section is equal to 10%, and the integral length scale is three orders of magnitude larger than the droplet size, while the turbulence microscale (Kolmogorov scale) is of the same order as the droplet diameter. The spray, produced by means of a calibrated droplet generator, was injected into an electrically preheated gas flow. N-nonane, isopropanol, and their mixtures were used in the tests. The generalized scattering imaging technique was applied to simultaneously determine the size, velocity, and spatial location of the droplets carried by the turbulent flow in the quartz tube. The spray evaporation was computed using a Lagrangian particle solver coupled to a gas-phase solver. Computations of spray mean diameter and droplet size distributions at different locations along the pipe compare very favorably with the measurement results. This combined research tool enabled further investigation of the parameters influencing the evaporation process, such as the turbulence, droplet internal mixing, and liquid-phase thermophysical properties.

  10. Accurate Focal Depth Determination of Oceanic Earthquakes Using Water-column Reverberation and Some Implications for the Shrinking Plate Hypothesis

    NASA Astrophysics Data System (ADS)

    Niu, F.; Huang, J.; Gordon, R. G.

    2015-12-01

    Investigation of oceanic earthquakes can play an important role in constraining the lateral and depth variations of the stress and strain-rate fields in oceanic lithosphere and of the thickness of the seismogenic layer as a function of lithosphere age, thereby providing us with critical insight into thermal and dynamic processes associated with the cooling and evolution of oceanic lithosphere. With the goal of estimating hypocentral depths more accurately, we observe clear water reverberations after the direct P wave on teleseismic records of oceanic earthquakes and develop a technique to estimate earthquake depths by using these reverberations. The Z-H grid search method allows the simultaneous determination of the sea floor depth (H) and earthquake depth (Z) with an uncertainty less than 1 km, which compares favorably with alternative approaches. We apply this method to two closely located earthquakes beneath the eastern Pacific. These earthquakes occur in ≈25 Ma-old lithosphere and were previously estimated to have very similar depths of ≈10-12 km. We find that the two events actually occurred at dissimilar depths of 2.5 km and 16.8 km beneath the seafloor, respectively within the oceanic crust and lithospheric mantle. The shallow and deep events are determined to be a thrust and normal earthquake, respectively, indicating that the stress field within the oceanic lithosphere changes from horizontal compression to horizontal extension as depth increases, which is consistent with the prediction of the lithospheric cooling model. Furthermore, we show that the P-axis of the newly investigated thrust-faulting earthquake is roughly perpendicular to that of the previously studied thrust event, consistent with the predictions of the shrinking-plate hypothesis.

  11. Accurate focal depth determination of oceanic earthquakes using water-column reverberation and some implications for the shrinking plate hypothesis

    NASA Astrophysics Data System (ADS)

    Huang, Jianping; Niu, Fenglin; Gordon, Richard G.; Cui, Chao

    2015-12-01

    Investigation of oceanic earthquakes is useful for constraining the lateral and depth variations of the stress and strain-rate fields in oceanic lithosphere, and the thickness of the seismogenic layer as a function of lithosphere age, thereby providing us with critical insight into thermal and dynamic processes associated with the cooling and evolution of oceanic lithosphere. With the goal of estimating hypocentral depths more accurately, we observe clear water reverberations after the direct P wave on teleseismic records of oceanic earthquakes and develop a technique to estimate earthquake depths by using these reverberations. The Z-H grid search method allows the simultaneous determination of the sea floor depth (H) and earthquake depth (Z) with an uncertainty less than 1 km, which compares favorably with alternative approaches. We apply this method to two closely located earthquakes beneath the eastern Pacific. These earthquakes occurred in ∼25 Ma-old lithosphere and were previously estimated to have similar depths of ∼10-12 km. We find that the two events actually occurred at dissimilar depths of 2.5 km and 16.8 km beneath the seafloor, respectively, within the oceanic crust and lithospheric mantle. The shallow and deep events are determined to be a thrust and normal earthquake, respectively, indicating that the stress field within the oceanic lithosphere changes from horizontal deviatoric compression to horizontal deviatoric tension as depth increases, which is consistent with the prediction of lithospheric cooling models. Furthermore, we show that the P-axis of the newly investigated thrust-faulting earthquake is perpendicular to that of the previously studied thrust event, consistent with the predictions of the shrinking-plate hypothesis.
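    The Z-H grid search described above can be sketched under a vertical-incidence simplification that is an assumption here (the actual method matches full waveform reverberations with proper ray geometry): the depth phase pP lags P by roughly 2Z/v_p, and the water-column reverberation adds a further delay of roughly 2H/v_w.

```python
# Vertical-incidence sketch of a Z-H grid search. The delay model and the
# velocity values below are simplifying assumptions for illustration; the
# published method fits full water-column reverberation waveforms.

V_P, V_W = 6.5, 1.5  # assumed crustal P and water sound speeds, km/s

def predicted_delays(z_km, h_km):
    t_pp = 2.0 * z_km / V_P            # pP - P delay from source depth Z
    t_pwp = t_pp + 2.0 * h_km / V_W    # water reverberation adds 2H/v_w
    return t_pp, t_pwp

def grid_search(obs_pp, obs_pwp, dz=0.1, dh=0.1):
    """Find (Z, H) minimizing the misfit to the observed delays."""
    best, best_misfit = None, float("inf")
    for iz in range(1, 400):            # Z: 0.1 .. 39.9 km
        for ih in range(1, 80):         # H: 0.1 .. 7.9 km
            z, h = iz * dz, ih * dh
            t_pp, t_pwp = predicted_delays(z, h)
            misfit = (t_pp - obs_pp) ** 2 + (t_pwp - obs_pwp) ** 2
            if misfit < best_misfit:
                best, best_misfit = (z, h), misfit
    return best

# Synthetic test: event at Z = 16.8 km beneath H = 4.0 km of water
obs = predicted_delays(16.8, 4.0)
z_est, h_est = grid_search(*obs)
```

    On this synthetic case the search recovers Z and H simultaneously, illustrating why the joint Z-H formulation can achieve kilometer-scale depth accuracy.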

  12. The global event system

    SciTech Connect

    Winans, J.

    1994-03-02

    The support for the global event system has been designed to allow an application developer to control the APS event generator and receiver boards. This is done by the use of four new record types. These records are customized and are only supported by the device support modules for the APS event generator and receiver boards. The use of the global event system and its associated records should not be confused with the vanilla EPICS events and the associated event records. They are very different.

  13. 78 FR 74048 - Eleventh Coast Guard District Annual Fireworks Events

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-10

    ...The Coast Guard proposes to amend several permanent safety zones located in the Eleventh Coast Guard District that are established to protect public safety during annual firework displays. These amendments will standardize the safety zone language, update listed events, delete events that are no longer occurring, add new annual fireworks events, and establish a standardized format using a......

  14. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially when the information is delayed. With accurate information, travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and deviation from system equilibrium.
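    The boundedly rational routing rule can be sketched in a few lines; the road model and all parameters below are illustrative assumptions, not the paper's simulation setup.

```python
import random

# Minimal sketch of the bounded-rationality routing rule: each traveler
# compares the two reported travel times; if they differ by less than the
# threshold BR, the routes are chosen with equal probability, otherwise
# the faster route is taken. The linear congestion model is an assumption.

def choose_route(t1, t2, br):
    if abs(t1 - t2) < br:
        return random.randint(0, 1)   # indifferent within the threshold
    return 0 if t1 < t2 else 1        # otherwise take the faster route

def simulate(n_travelers=1000, br=5.0, seed=1):
    random.seed(seed)
    flows = [0, 0]
    for _ in range(n_travelers):
        # Reported travel time grows with the flow already on the route.
        t1, t2 = 10 + 0.1 * flows[0], 10 + 0.1 * flows[1]
        flows[choose_route(t1, t2, br)] += 1
    return flows
```

    With BR = 5 and this congestion model, the flow difference between the routes stays bounded by BR / 0.1 = 50 travelers: inside that band choices are random, and outside it the deterministic rule pushes the system back toward balance.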

  15. Modal Acoustic Emission Used at Elevated Temperatures to Detect Damage and Failure Location in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.

    1999-01-01

    Ceramic matrix composites are being developed for elevated-temperature engine applications. A leading material system in this class of materials is silicon carbide (SiC) fiber-reinforced SiC matrix composites. Unfortunately, the nonoxide fibers, matrix, and interphase (boron nitride in this system) can react with oxygen or water vapor in the atmosphere, leading to strength degradation of the composite at elevated temperatures. For this study, constant-load stress-rupture tests were performed in air at temperatures ranging from 815 to 960 C until failure. From these data, predictions can be made for the useful life of such composites under similar stressed-oxidation conditions. During these experiments, the sounds of failure events (matrix cracking and fiber breaking) were monitored with a modal acoustic emission (AE) analyzer through transducers that were attached at the ends of the tensile bars. Such failure events, which are caused by applied stress and oxidation reactions, cause these composites to fail prematurely. Because of the nature of acoustic waveform propagation in thin tensile bars, the location of individual source events and the eventual failure event could be detected accurately.

  16. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2008-09-16

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
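    The counting and interrupt logic described above can be modeled behaviorally. This is a Python model of the described behavior for one counter, not the hardware design.

```python
# Behavioral model of one hybrid counter: low-order bits live in a fast
# counter; an overflow carries into a "memory" word of high-order bits;
# when the high word reaches the interrupt threshold the "interrupt arm"
# bit is set, and the interrupt fires on the *next* roll-over of the low
# bits, as described in the patent abstract.

class HybridCounter:
    def __init__(self, low_bits=4, threshold=2):
        self.low_max = 1 << low_bits   # low counter rolls over here
        self.low = 0                   # fast counter portion (lower bits)
        self.high = 0                  # memory-array portion (higher bits)
        self.threshold = threshold     # interrupt threshold register
        self.armed = False             # "interrupt arm" bit
        self.fired = False             # fast interrupt indication

    def count_event(self):
        self.low += 1
        if self.low == self.low_max:   # roll-over of the lower bits
            self.low = 0
            if self.armed:
                self.fired = True      # interrupt on subsequent roll-over
            self.high += 1             # carry into the high-order word
            if self.high == self.threshold:
                self.armed = True      # arm when high word hits threshold
```

    With 4 low-order bits and a threshold of 2, the arm bit is set at the 32nd event (second roll-over) and the interrupt fires at the 48th (the subsequent roll-over), matching the two-stage scheme in the abstract.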

  17. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-08-24

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.

  18. Decision support system for managing oil spill events.

    PubMed

    Keramitsoglou, Iphigenia; Cartalis, Constantinos; Kassomenos, Pavlos

    2003-08-01

The Mediterranean environment is exposed to various hazards, including oil spills, forest fires, and floods, making the development of a decision support system (DSS) for emergency management an objective of utmost importance. The present work presents a complete DSS for managing marine pollution events caused by oil spills. The system provides all the necessary tools for early detection of oil spills from satellite images, monitoring of their evolution, estimation of the accident consequences, and support to the responsible public authorities during clean-up operations. The heart of the system is an image-processing geographic information system, complemented by individual software tools that simulate oil spill evolution and perform the other numerical, cartographic, and reporting tasks related to managing a specific oil spill event. The cartographic information is derived from general maps and represents detailed regional environmental and land-cover characteristics as well as the economic activities of the application area. Early notification of the authorities with up-to-date, accurate information on the position and evolution of the oil spill, combined with detailed coastal maps, is of paramount importance for emergency assessment and for effective clean-up operations that prevent environmental damage. An application was developed for the Region of Crete, an area particularly vulnerable to oil spills due to its location, ecological characteristics, and local economic activities. PMID:14753653

  19. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. A von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  20. Improvements in mining induced microseismic source locations at the Lucky Friday mine using an automated whole-waveform analysis system

    NASA Astrophysics Data System (ADS)

    Dodge, Douglas A.; Sprenke, Kenneth F.

    1992-09-01

For years, severe rockburst problems at the Lucky Friday mine in northern Idaho have been a persistent safety hazard and an impediment to production. An MP250-based microseismic monitoring system, which uses simple voltage-threshold picking of first arrivals, has been used in this mine since 1973 to provide source locations and energy estimates of seismic events. Recently, interest has been expressed in developing a whole-waveform microseismic monitoring system for the mine to provide more accurate source locations and information about source characteristics. For this study, we developed a prototype whole-waveform microseismic monitoring system based on an 80386 computer equipped with a 50 kHz analog-to-digital converter board. The software developed includes a data collection program, a data analysis program, and an event detection program. Whole-waveform data collected and analyzed using this system during a three-day test have been employed to investigate sources of error in the hypocenter location process and to develop an automatic phase picker appropriate for microseismic events. Comparison of hypocenter estimates produced by the MP250 system with those produced by the whole-waveform system shows that significant timing errors are common in the MP250 system and that these errors caused a large part of the scatter evident in the daily activity plots produced at the mine. Simulations and analysis of blast data show that analytical control over the solutions is strongly influenced by the array geometry. Within the geophone array, large errors in the velocity model or moderate timing errors may result in small changes in the solution, but outside the array, the solution is very sensitive to small changes in the data. Our whole-waveform detection program picks event onset times and determines event durations by analysis of a segmented envelope function (SEF) derived from the microseismic signal.
The detection program has been tested by comparing its arrival time
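The abstract does not define the segmented envelope function in detail; the following is a generic sketch of envelope-based onset picking in that spirit, with the segment length and threshold factor chosen arbitrarily.

```python
import numpy as np

def envelope_pick(signal, fs, seg=0.01, k=5.0):
    """Pick an event onset time from a segmented envelope of the signal.
    The signal is rectified and averaged over non-overlapping segments;
    the onset is the first segment exceeding k times the noise level."""
    n = max(1, int(seg * fs))                 # samples per segment
    m = len(signal) // n
    env = np.abs(signal[: m * n]).reshape(m, n).mean(axis=1)
    noise = env[: m // 4].mean()              # assumes the first quarter is pre-event
    above = np.nonzero(env > k * noise)[0]
    return None if above.size == 0 else above[0] * n / fs

# Synthetic test: background noise with a 500 Hz arrival injected at t = 0.6 s.
fs = 10_000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = 0.01 * np.random.default_rng(0).standard_normal(t.size)
sig[int(0.6 * fs):] += np.sin(2 * np.pi * 500.0 * t[int(0.6 * fs):])
onset = envelope_pick(sig, fs)
print(onset)  # -> 0.6
```

A real picker would also estimate event duration from where the envelope falls back below threshold; that step is omitted here.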

  1. No effect of diffraction on Pluto-Charon mutual events

    NASA Technical Reports Server (NTRS)

    Tholen, D. J.; Hubbard, W. B.

    1988-01-01

    Mulholland and Gustafson (1987) made the interesting suggestion that observations of Pluto-Charon mutual events might show significant dependence on both wavelength and telescope aperture because of diffraction effects. In this letter, observations are presented that show the predicted effects to be absent and demonstrate that the parameters of the system are such that the events can be accurately analyzed with geometrical optics.

  2. EPA FACILITY POINT LOCATION FILES

    EPA Science Inventory

    Data includes locations of facilities from which pollutants are discharged. The epapoints.tar.gz file is a gzipped tar file of 14 Arc/Info export files and text documents. The .txt files define the attributes located in the INFO point coverage files. Projections are defined in...

  3. Experiences with Information Locator Services.

    ERIC Educational Resources Information Center

    Christian, Eliot

    1999-01-01

    Relates experiences in developing and promoting services interoperable with the Global Information Locator Service (GILS) standard. Describes sample implementations and touches on the strategic choices made in public policy, standards, and technology. Offers 10 recommendations for successful implementation of an Information Locator Service. (AEF)

  4. Cold War Geopolitics: Embassy Locations.

    ERIC Educational Resources Information Center

    Vogeler, Ingolf

    1995-01-01

    Asserts that the geopolitics of the Cold War can be illustrated by the diplomatic ties among countries, particularly the superpowers and their respective allies. Describes a classroom project in which global patterns of embassy locations are examined and compared. Includes five maps and a chart indicating types of embassy locations. (CFR)

  5. Precision zero-home locator

    DOEpatents

    Stone, W.J.

    1983-10-31

    A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.

  6. Mobile Alternative Fueling Station Locator

    SciTech Connect

    Not Available

    2009-04-01

The Department of Energy's Alternative Fueling Station Locator is available on the go via cell phones, BlackBerrys, or other personal handheld devices. The mobile locator allows users to find the five closest biodiesel, electricity, E85, hydrogen, natural gas, and propane fueling sites using Google technology.

  7. Precision zero-home locator

    DOEpatents

    Stone, William J.

    1986-01-01

    A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.

  8. Locating Information within Extended Hypermedia

    ERIC Educational Resources Information Center

    Cromley, Jennifer G.; Azevedo, Roger

    2009-01-01

    New literacies researchers have identified a core set of strategies for locating information, one of which is "reading a Web page to locate information that might be present there" (Leu et al. in: Rush, Eakle, Berger (eds) "Secondary school reading and writing: What research reveals for classroom practices," 2007, p. 46). Do middle-school, high…

  9. Vaccine Adverse Events

    MedlinePlus


  10. Location-assured, multifactor authentication on smartphones via LTE communication

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

With the added security provided by LTE, geographical location has become an important factor for enhancing the security of remote client authentication in mCommerce applications on Smartphones. A tight combination of geographical location with classic authentication factors such as PINs or biometrics, in a real-time remote verification scheme over the LTE connection, assures the authenticator of the client's identity (via PIN/biometric) as well as the client's current location, thus establishing the "who", "when", and "where" of the authentication attempt without exposure to eavesdropping or man-in-the-middle attacks. To securely integrate location as an authentication factor into the remote authentication scheme, the client's location must be verified independently, i.e. the authenticator should not rely solely on the location determined on and reported by the client's Smartphone. The latest wireless data communication technology for mobile phones (4G LTE, Long-Term Evolution), currently being rolled out in various networks, can be employed to meet this requirement of independent location verification. LTE's Control Plane LBS provisions, when integrated with user-based authentication factors and an independent source of localisation, ensure secure, efficient, continuous location tracking of the Smartphone. This tracking can be performed during normal operation of the LTE-based communication between client and network operator, enabling the authenticator to verify the client's claimed location more securely and accurately. Trials and experiments show that such an implementation is viable for present-day Smartphone-based banking via LTE communication.

  11. Automated fault location and diagnosis on electric power distribution feeders

    SciTech Connect

    Zhu, J.; Lubkeman, D.L.; Girgis, A.A.

    1997-04-01

    This paper presents new techniques for locating and diagnosing faults on electric power distribution feeders. The proposed fault location and diagnosis scheme is capable of accurately identifying the location of a fault upon its occurrence, based on the integration of information available from disturbance recording devices with knowledge contained in a distribution feeder database. The developed fault location and diagnosis system can also be applied to the investigation of temporary faults that may not result in a blown fuse. The proposed fault location algorithm is based on the steady-state analysis of the faulted distribution network. To deal with the uncertainties inherent in the system modeling and the phasor estimation, the fault location algorithm has been adapted to estimate fault regions based on probabilistic modeling and analysis. Since the distribution feeder is a radial network, multiple possibilities of fault locations could be computed with measurements available only at the substation. To identify the actual fault location, a fault diagnosis algorithm has been developed to prune down and rank the possible fault locations by integrating the available pieces of evidence. Testing of the developed fault location and diagnosis system using field data has demonstrated its potential for practical use.

  12. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design: for studies of exotic nuclei far from stability a high neutron flux (10¹² neutrons/s) at energies of 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and for studying the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton-induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron-induced fission of various actinides.

  13. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
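For reference, the direct O(S)-per-pixel computation that the fast algorithm approximates can be written as follows; this is a brute-force sketch, not the paper's O(1) method, and the parameter values are arbitrary.

```python
import numpy as np

def bilateral_brute_force(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct O(S)-per-pixel bilateral filter with Gaussian spatial
    and Gaussian range kernels (here S = (2*radius + 1)**2)."""
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng               # joint spatial-range weight
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

# A step edge: each side is smoothed while the discontinuity is preserved.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
out = bilateral_brute_force(img)
print(out[8, 0], out[8, 15])  # -> 0.0 1.0 (flat regions are unchanged)
```

Per the abstract, the fast algorithm replaces the inner loops with N+1 spatial filterings of pointwise-transformed images, which is what makes the cost independent of the window radius.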

  14. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, and desolvation forces) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown, and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes to within a 0.4 Å error margin. PMID:26335807

  15. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
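The underlying relation between the fitted Doppler width and the Boltzmann constant can be illustrated numerically; the line, temperature, and width below are illustrative assumptions, not the paper's measured values.

```python
# Doppler-broadening thermometry: a Gaussian Doppler profile has a 1/e
# half-width dnu = nu0 * sqrt(2 * kB * T / (m * c**2)), so a lineshape fit
# that yields dnu at a known temperature T determines kB. The numbers below
# are illustrative (Cs D1 line near 894.6 nm), not the paper's data.
c = 299_792_458.0                            # speed of light, m/s
m_cs = 132.905451961 * 1.66053906660e-27     # Cs atomic mass, kg
nu0 = c / 894.6e-9                           # optical carrier frequency, Hz
T = 300.0                                    # vapour temperature, K

dnu = 2.166e8                                # assumed fitted Doppler half-width, Hz
kB = m_cs * c ** 2 * (dnu / nu0) ** 2 / (2 * T)
print(f"kB = {kB:.4e} J/K")                  # close to 1.380649e-23 J/K
```

In the actual experiment the precision comes from fitting the full lineshape beyond the Voigt profile at the shot-noise limit, not from a single width estimate as above.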

  16. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy. PMID:27093722

  17. How Accurate are SuperCOSMOS Positions?

    NASA Astrophysics Data System (ADS)

    Schaefer, Adam; Hunstead, Richard; Johnston, Helen

    2014-02-01

Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of 0.16 arcsec in right ascension and declination. While overall systematic offsets were < 0.1 arcsec in each hemisphere, both the systematics and scatter were greater in the north.

  18. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  19. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  20. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback on the mount position is provided by electronic, optical or magneto-mechanical systems, or via a real-time astrometric solution based on the acquired images. MEMS-based systems are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Out of the box, these sensors yield raw output accurate only to a few degrees. We show how calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the aforementioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although the attainable precision is below both the resolution of the telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of the attitude determination can significantly increase the reliability of autonomous or remotely operated astronomical observations.
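A minimal sketch of how a static 3-axis accelerometer reading yields attitude angles for a mount; the calibration step exploiting inter-channel constraints, on which the paper's sub-arcminute accuracy depends, is not reproduced here.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (in degrees) from a static 3-axis accelerometer
    reading of the gravity vector. Raw MEMS output is only good to a few
    degrees; the calibration that exploits the spherical constraint
    ax**2 + ay**2 + az**2 == g**2 across channels is omitted here."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Sensor pitched 30 degrees about its y-axis in a 1 g field:
g = 9.81
ax = -g * math.sin(math.radians(30.0))
ay = 0.0
az = g * math.cos(math.radians(30.0))
pitch, roll = tilt_from_accel(ax, ay, az)
print(round(pitch, 1), round(roll, 1))  # -> 30.0 0.0
```

Note that a gravity-only measurement is stateless, as the abstract emphasises: no encoder history or homing sequence is needed to recover the tilt.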

  1. Accurate lineshape spectroscopy and the Boltzmann constant.

    PubMed

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-01-01

Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  2. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz-enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep-lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  3. Calculation and accuracy of ERBE scanner measurement locations

    NASA Technical Reports Server (NTRS)

    Hoffman, Lawrence H.; Weaver, William L.; Kibler, James F.

    1987-01-01

    The Earth Radiation Budget Experiment (ERBE) uses scanning radiometers to measure shortwave and longwave components of the Earth's radiation field at about 40 km resolution. It is essential that these measurements be accurately located at the top of the Earth's atmosphere so they can be properly interpreted by users of the data. Before the launch of the ERBE instrument sets, a substantial emphasis was placed on understanding all factors which influence the determination of measurement locations and properly modeling those factors in the data processing system. After the launch of ERBE instruments on the Earth Radiation Budget Satellite and NOAA 9 spacecraft in 1984, a coastline projection method was developed to assess the accuracy of the algorithms and data used in the location calculations. Using inflight scanner data and the coastline detection technique, the measurement location errors are found to be smaller than the resolution of the scanner instruments. This accuracy is well within the required location knowledge for useful science analysis.

  4. Forecaster's dilemma: Extreme events and forecast evaluation

    NASA Astrophysics Data System (ADS)

    Lerch, Sebastian; Thorarinsdottir, Thordis; Ravazzolo, Francesco; Gneiting, Tilmann

    2015-04-01

    In discussions of the quality of forecasts in the media and public, attention often focuses on the predictive performance in the case of extreme events. Intuitively, accurate predictions on the subset of extreme events seem to suggest better predictive ability. However, it can be demonstrated that restricting conventional forecast verification methods to subsets of observations might have unexpected and undesired effects and may discredit even the most skillful forecasters. Hand-picking extreme events is incompatible with the theoretical assumptions of established forecast verification methods, thus confronting forecasters with what we refer to as the forecaster's dilemma. For probabilistic forecasts, weighted proper scoring rules provide suitable alternatives for forecast evaluation with an emphasis on extreme events. Using theoretical arguments, simulation experiments and a case study on probabilistic forecasts of wind speed over Germany, we illustrate the forecaster's dilemma and the use of weighted proper scoring rules.
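One standard instance of a weighted proper scoring rule is the threshold-weighted CRPS; the sketch below (ensemble size, grid, and weight functions are arbitrary choices, not taken from the study) shows how a weight function shifts emphasis to extremes without hand-picking extreme cases.

```python
import numpy as np

def tw_crps(ensemble, obs, w, zgrid):
    """Threshold-weighted CRPS on a grid: integral of
    w(z) * (F(z) - 1{obs <= z})**2 dz, with F the empirical ensemble CDF.
    With w == 1 this reduces to the ordinary CRPS; putting weight on high
    thresholds emphasises extremes while remaining a proper score."""
    F = (np.asarray(ensemble)[:, None] <= zgrid).mean(axis=0)
    indicator = (obs <= zgrid).astype(float)
    dz = zgrid[1] - zgrid[0]                  # assumes a uniform grid
    return float(np.sum(w(zgrid) * (F - indicator) ** 2) * dz)

rng = np.random.default_rng(1)
ens = rng.normal(10.0, 2.0, size=50)          # ensemble forecast of wind speed
z = np.linspace(0.0, 30.0, 601)
s_all = tw_crps(ens, 11.0, lambda z: np.ones_like(z), z)          # w(z) = 1
s_tail = tw_crps(ens, 11.0, lambda z: (z >= 15.0).astype(float), z)
print(s_all > s_tail)  # -> True: the tail weight discounts the bulk of the CDF
```

Unlike restricting verification to extreme observations, the weight acts on thresholds, so the score is still computed over all cases and the forecaster's dilemma does not arise.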

  5. Asynchronous event-based corner detection and matching.

    PubMed

    Clady, Xavier; Ieng, Sio-Hoi; Benosman, Ryad

    2015-06-01

    This paper introduces an event-based luminance-free method to detect and match corner events from the output of asynchronous event-based neuromorphic retinas. The method relies on the use of space-time properties of moving edges. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating "spiking" events that encode relative changes in pixels' illumination at high temporal resolutions. Corner events are defined as the spatiotemporal locations where the aperture problem can be solved using the intersection of several geometric constraints in events' spatiotemporal spaces. A regularization process provides the required constraints, i.e. the motion attributes of the edges with respect to their spatiotemporal locations using local geometric properties of visual events. Experimental results are presented on several real scenes showing the stability and robustness of the detection and matching. PMID:25828960

  6. Pluto-Charon mutual event predictions for 1986

    NASA Technical Reports Server (NTRS)

    Tholen, D. J.

    1985-01-01

    Circumstances are tabulated for 81-Pluto-Charon mutual events occurring during the 1986 opposition. The deepest and longest events will occur in February and reach a depth of about 0.15 mag. Observations of these events will lead to an accurate determination of the satellite's orbit, the diameters of the two bodies, the mean density of the system, and crude albedo maps of one hemisphere on each object.

  7. Accurate Memory for Object Location by Individuals with Intellectual Disability: Absolute Spatial Tagging Instead of Configural Processing?

    ERIC Educational Resources Information Center

    Giuliani, Fabienne; Favrod, Jerome; Grasset, Francois; Schenk, Francoise

    2011-01-01

Using head-mounted eye-tracker equipment, we assessed spatial recognition abilities (e.g., reaction to object permutation, removal or replacement with a new object) in participants with intellectual disabilities. The "Intellectual Disabilities (ID)" group (n = 40) achieved a 93.7% success rate, whereas the "Normal Control" group…

  8. Experiences with information locator services

    USGS Publications Warehouse

    Christian, E.

    1999-01-01

Over the last few years, governments and other organizations have been using new technologies to create networked Information Locator Services that help people find information resources. These services not only enhance access to information, but also are designed to support fundamental information policy principles. This article relates experiences in developing and promoting services interoperable with the Global Information Locator Service standard that has now been adopted and promoted in many forums worldwide. The article describes sample implementations and touches on the strategic choices made in public policy, standards, and technology. Ten recommendations are offered for successful implementation of an Information Locator Service.

  9. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both for the dynamical stability of the formulation and for the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
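
    The step-size formulae in point 2) depend on the formulation and the osculating orbit, but the underlying scaling is generic: for a fixed-step multistep method of order k, the local truncation error per step grows like h**(k+1), so a tolerance fixes the admissible step. A minimal sketch of that generic scaling only (the error constant and derivative bound here are illustrative placeholders, not the paper's formulae):

```python
def multistep_stepsize(order, tol, deriv_bound, err_const=1.0):
    """Largest fixed step h such that the local truncation error estimate
    err_const * deriv_bound * h**(order + 1) stays below tol."""
    return (tol / (err_const * deriv_bound)) ** (1.0 / (order + 1))

# Tighter tolerances force smaller steps; higher order permits larger ones.
h8 = multistep_stepsize(order=8, tol=1e-12, deriv_bound=1.0)
h12 = multistep_stepsize(order=12, tol=1e-12, deriv_bound=1.0)
```

    In practice the derivative bound would itself be estimated from the regularized equations of motion along the orbit.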

  10. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film. PMID:26689962
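
    Since the required correction varies with both lateral position and dose (and differs per color channel), a practical implementation is a measured lookup table per channel. A hedged sketch with invented calibration numbers (the real factors must be measured for each scanner and channel, as the abstract stresses):

```python
import numpy as np

# Hypothetical calibration for ONE color channel: LSE correction factors
# on a grid of lateral positions (cm from scanner center) and doses (Gy).
lat_grid = np.array([-10.0, 0.0, 10.0])
dose_grid = np.array([0.2, 4.0, 9.0])
corr_table = np.array([          # rows: dose, cols: lateral position
    [1.01, 1.00, 1.01],
    [1.05, 1.00, 1.05],
    [1.14, 1.00, 1.14],         # up to 14% at maximum lateral position
])

def lse_correction(lateral_cm, dose_gy):
    """Bilinear lookup of the LSE correction factor for this channel."""
    per_dose = np.array([np.interp(lateral_cm, lat_grid, row)
                         for row in corr_table])
    return np.interp(dose_gy, dose_grid, per_dose)

# A 9 Gy pixel 8 cm off-center needs a larger correction than one at center.
factor = lse_correction(8.0, 9.0)
```

    The corrected reading is then the raw channel value multiplied by the factor; repeating the calibration for the red, green and blue channels gives three such tables.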

  11. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed (the pseudo-Thellier protocol) which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
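
    The "coherent results from two or more methods" criterion can be made concrete with a simple acceptance rule. The sketch below is purely illustrative: the 15% agreement tolerance and the method names are assumptions, not the author's actual selection criteria.

```python
import numpy as np

def combine_paleointensities(estimates, rel_tol=0.15):
    """Accept a cooling unit only if at least two methods agree to within
    rel_tol of their pairwise mean; return the mean of all coherent
    estimates, else None. `estimates` maps method name -> paleointensity
    (microtesla), or None if the flow failed that method's criteria."""
    vals = np.array([v for v in estimates.values() if v is not None])
    for i in range(len(vals)):
        for j in range(i + 1, len(vals)):
            pair_mean = (vals[i] + vals[j]) / 2.0
            if abs(vals[i] - vals[j]) <= rel_tol * pair_mean:
                coherent = vals[np.abs(vals - pair_mean) <= rel_tol * pair_mean]
                return coherent.mean()
    return None

# Thellier and Multispecimen agree; pseudo-Thellier is an outlier here.
flow = {"thellier": 36.0, "multispecimen": 38.5, "pseudo-thellier": 52.0}
accepted = combine_paleointensities(flow)
```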

  12. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis from which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  13. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  14. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.

  15. The Challenges from Extreme Climate Events for Sustainable Development in Amazonia: the Acre State Experience

    NASA Astrophysics Data System (ADS)

    Araújo, M. D. N. M.

    2015-12-01

    In the past ten years Acre State, located in Brazil's southwestern Amazonia, has confronted sequential and severe extreme events in the form of droughts and floods. In particular, the droughts and forest fires of 2005 and 2010, the 2012 flood within Acre, the 2014 flood of the Madeira River which isolated Acre for two months from southern Brazil, and the most severe flooding throughout the state in 2015 shook the resilience of Acrean society. The accumulated costs of these events since 2005 have exceeded 300 million dollars. For the last 17 years, successive state administrations have been implementing a socio-environmental model of development that strives to link sustainable economic production with environmental conservation, particularly for small communities. In this context, extreme climate events have interfered significantly with this model, increasing the risks of failure. The impacts caused by these events on development in the state have been exacerbated by: a) limitations in monitoring; b) extreme events outside of Acre territory (Madeira River flood) affecting transportation systems; c) absence of reliable information for decision-making; and d) bureaucratic and judicial impediments. Our experience in these events has led to the following needs for scientific input to reduce the risk of disasters: 1) better monitoring and forecasting of deforestation, fires, and hydro-meteorological variables; 2) ways to increase risk perception in communities; 3) approaches to involve more effectively local and regional populations in the response to disasters; 4) more accurate measurements of the economic and social damages caused by these disasters. We must improve adaptation to and mitigation of current and future extreme climate events and implement a robust civil defense, adequate to these new challenges.

  16. Point Source Location Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Cox, J. Allen

    1986-11-01

    This paper presents the results of an analysis of point source location accuracy and sensitivity as a function of focal plane geometry, optical blur spot, and location algorithm. Five specific blur spots are treated: gaussian, diffraction-limited circular aperture with and without central obscuration (obscured and clear bessinc, respectively), diffraction-limited rectangular aperture, and a pill box distribution. For each blur spot, location accuracies are calculated for square, rectangular, and hexagonal detector shapes of equal area. The rectangular detectors are arranged on a hexagonal lattice. The two location algorithms consist of standard and generalized centroid techniques. Hexagonal detector arrays are shown to give the best performance under a wide range of conditions.
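
    The standard centroid technique named above estimates source position as the intensity-weighted mean of the per-detector signals. A minimal sketch (the Gaussian blur spot and 5x5 square-detector grid below are illustrative choices, not the paper's configurations):

```python
import numpy as np

def centroid_location(signal, x_centers, y_centers):
    """Standard centroid estimate of point-source position: the
    intensity-weighted mean of detector-center coordinates."""
    total = signal.sum()
    return (signal * x_centers).sum() / total, (signal * y_centers).sum() / total

# Toy example: Gaussian blur spot sampled on a 5x5 grid of unit detectors.
xc, yc = np.meshgrid(np.arange(5) - 2.0, np.arange(5) - 2.0)
true_x, true_y = 0.3, -0.2
sig = np.exp(-((xc - true_x)**2 + (yc - true_y)**2) / 2.0)
est = centroid_location(sig, xc, yc)
```

    Because the blur spot is truncated by the finite array, the plain centroid is slightly biased toward the array center; the generalized centroid techniques mentioned in the abstract are designed to reduce such systematic errors.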

  17. Mental Health Treatment Program Locator

    MedlinePlus

    ... a Facility in Your State To locate the mental health treatment programs nearest you, find your State on ...

  18. High-precision source location of the 1978 November 19 gamma-ray burst

    NASA Technical Reports Server (NTRS)

    Cline, T. L.; Desai, U. D.; Teegarden, B. J.; Pizzichini, G.; Evans, W. D.; Klebesadel, R. W.; Laros, J. G.; Barat, C.; Hurley, K.; Niel, M.

    1981-01-01

    The celestial source location of the November 19, 1978, intense gamma-ray burst has been determined from data obtained with the interplanetary gamma-ray sensor network by means of long-baseline wavefront timing. Each of the instruments was designed for studying events with observable spectra above approximately 100 keV, and each provides accurate event profile timing in the several-millisecond range. Data analysis shows that the triangulated region is centered at (α, δ)1950 = (1h16m32s, -28 deg 53 arcmin), at -84 deg galactic latitude, where the star density is very low and the obscuration negligible. The gamma-ray burst source region, consistent with that of a highly polarized radio source described by Hjellming and Ewald (1981), may assist in the source modeling and may facilitate the understanding of the source process. A marginally identifiable X-ray source was also found by an Einstein Observatory investigation. It is concluded that the burst contains redshifted positron annihilation and nuclear first-excited iron lines, which is consistent with a neutron star origin.
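
    The geometry behind long-baseline wavefront timing is simple: for a distant source, the arrival-time difference Δt between two spacecraft separated by baseline vector b constrains the source direction to a cone about the baseline with cos θ = cΔt/|b|; intersecting the annuli from several baselines yields the small triangulated region. A hedged sketch of that single relation (not the network's actual pipeline; the baseline and geometry are invented):

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def source_cone_angle(baseline_km, dt_s):
    """Angle (deg) between the baseline and the source direction implied
    by the burst arrival-time difference dt_s = t2 - t1."""
    cos_theta = C * dt_s / np.linalg.norm(baseline_km)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Toy check: a plane wave from 60 deg off a ~1 AU baseline, then recover it.
b = np.array([1.5e8, 0.0, 0.0])                               # km
s_hat = np.array([np.cos(np.radians(60)), np.sin(np.radians(60)), 0.0])
dt = np.dot(b, s_hat) / C            # arrival-time difference, ~250 s
angle = source_cone_angle(b, dt)
```

    With millisecond timing over interplanetary baselines, the fractional timing error is of order 1e-5, which is what makes arcminute-scale error boxes possible.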

  19. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. Blank mask defect impact analysis directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge associated with them makes the manual defect review process arduous and sensitive to human error. In cases where defect information reported by the inspection machine is not sufficient, mask shops rely on other tools, such as CDSEM. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  20. Progress in fast, accurate multi-scale climate simulations

    DOE PAGESBeta

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  1. Progress in fast, accurate multi-scale climate simulations

    SciTech Connect

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  2. Progress in Fast, Accurate Multi-scale Climate Simulations

    SciTech Connect

    Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  3. Extracting semantically enriched events from biomedical literature

    PubMed Central

    2012-01-01

    can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266

  4. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  5. Detecting and Locating Partial Discharges in Transformers

    SciTech Connect

    Shourbaji, A.; Richards, R.; Kisner, R. A.; Hardy, J.

    2005-02-04

    A collaborative research effort between the Oak Ridge National Laboratory (ORNL), the American Electric Power (AEP), the Tennessee Valley Authority (TVA), and the State of Ohio Energy Office (OEO) was formed to conduct a feasibility study to detect and locate partial discharges (PDs) inside large transformers. Early detection of PDs is necessary to avoid the costly catastrophic failures that can occur if the PD process is ignored. The detection method under this research is based on an innovative technology developed by ORNL researchers using optical methods to sense the acoustical energy produced by the PDs. ORNL researchers conducted experimental studies to detect PD using an optical fiber as an acoustic sensor capable of detecting acoustical disturbances at any point along its length. This technical approach also has the potential to locate the point at which the PD was sensed within the transformer. Several optical approaches were experimentally investigated, including interferometric detection of acoustical disturbances along the sensing fiber, light detection and ranging (LIDAR) techniques using frequency modulation continuous wave (FMCW), frequency modulated (FM) laser with a multimode fiber, FM laser with a single mode fiber, and amplitude modulated (AM) laser with a multimode fiber. The implementation of the optical fiber-based acoustic measurement technique would include installing a fiber inside a transformer, allowing real-time detection of PDs and determination of their locations. The fibers are nonconductive and very small (core-plus-cladding diameters of 125 μm for single-mode fibers and 230 μm for multimode fibers). The research identified the capabilities and limitations of using optical technology to detect and locate sources of acoustical disturbances such as PDs in large transformers. Amplitude modulation techniques showed the most promising results and deserve further research to better quantify the technique’s sensitivity.

  6. The Structure and Evolution of LOCBURST: The BATSE Burst Location Algorithm

    NASA Technical Reports Server (NTRS)

    Pendleton, Geoffrey N.; Briggs, Michael S.; Kippen, R. March; Paciesas, William S.; Stollberg, Mark; Woods, Pete; Meegan, C. A.; Fishman, G. J.; McCollough, M. L.; Connaughton, V.

    1998-01-01

    The gamma-ray burst (GRB) location algorithm used to produce the BATSE GRB locations is described. The general flow of control of the current location algorithm is presented and the significant properties of the various physical inputs required are identified. The development of the burst location algorithm during the releases of the BATSE 1B, 2B, and 3B gamma-ray burst catalogs is presented so that the reasons for the differences in the positions and error estimates between the catalogs can be understood. In particular, differences between the 2B and 3B locations are discussed for events that have moved significantly and the reasons for the changes explained. The locations of bursts located independently by the interplanetary network are used to illustrate the effect on burst location accuracy of various components of the algorithm. IPN data as well as locations from other gamma-ray instruments are used to calculate estimates of the systematic errors on BATSE burst locations.

  7. Method of fan sound mode structure determination computer program user's manual: Microphone location program

    NASA Technical Reports Server (NTRS)

    Pickett, G. F.; Wells, R. A.; Love, R. A.

    1977-01-01

    A computer user's manual describing the operation and the essential features of the Microphone Location Program is presented. The Microphone Location Program determines microphone locations that ensure accurate and stable results from the equation system used to calculate modal structures. As part of the computational procedure, the stability of the equation system is indicated, to first order, by a matrix 'conditioning' number.
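
    The 'conditioning' number is, in modern terms, the condition number of the matrix relating the unknown mode amplitudes to the microphone pressures: a large value means small measurement errors are amplified in the recovered modal structure. A hedged sketch comparing two candidate microphone layouts for circumferential modes (the mode model below is a simplification assumed for illustration):

```python
import numpy as np

def mode_matrix(mic_angles, mode_orders):
    """Equation-system matrix mapping circumferential mode amplitudes to
    microphone pressures: A[j, k] = exp(i * m_k * theta_j)."""
    theta = np.asarray(mic_angles, dtype=float)[:, None]
    m = np.asarray(mode_orders, dtype=float)[None, :]
    return np.exp(1j * m * theta)

modes = [0, 1, 2, 3]
# Evenly spaced microphones give an orthogonal, perfectly conditioned system...
good = np.linalg.cond(
    mode_matrix(np.linspace(0, 2 * np.pi, 4, endpoint=False), modes))
# ...while clustered microphones give a nearly singular one.
bad = np.linalg.cond(mode_matrix([0.0, 0.1, 0.2, 0.3], modes))
```

    A placement program of this kind can simply search candidate layouts and reject any whose condition number exceeds a threshold.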

  8. Locating the LCROSS Impact Craters

    NASA Technical Reports Server (NTRS)

    Marshall, William; Shirley, Mark; Moratto, Zachary; Colaprete, Anthony; Neumann, Gregory A.; Smith, David E.; Hensley, Scott; Wilson, Barbara; Slade, Martin; Kennedy, Brian; Gurrola, Eric; Harcke, Leif

    2012-01-01

    The Lunar CRater Observations and Sensing Satellite (LCROSS) mission impacted a spent Centaur rocket stage into a permanently shadowed region near the lunar south pole. The Shepherding Spacecraft (SSC) separated approx. 9 hours before impact and performed a small braking maneuver in order to observe the Centaur impact plume, looking for evidence of water and other volatiles, before impacting itself. This paper describes the registration of imagery of the LCROSS impact region from the mid- and near-infrared cameras onboard the SSC, as well as from the Goldstone radar. We compare the Centaur impact features, positively identified in the first two and consistent with a feature in the third, which are interpreted as a 20 m diameter crater surrounded by a 160 m diameter ejecta region. The images are registered to Lunar Reconnaissance Orbiter (LRO) topographical data, which allows determination of the impact location. This location is compared with the impact location derived from ground-based tracking and propagation of the spacecraft's trajectory and with locations derived from two hybrid imagery/trajectory methods. The four methods give a weighted average Centaur impact location of -84.6796 deg, -48.7093 deg, with a 1σ uncertainty of 115 m along latitude and 44 m along longitude, just 146 m from the target impact site. Meanwhile, the trajectory-derived SSC impact location is -84.719 deg, -49.61 deg, with a 1σ uncertainty of 3 m along the Earth vector and 75 m orthogonal to that, 766 m from the target location and 2.803 km south-west of the Centaur impact. We also detail the Centaur impact angle and SSC instrument pointing errors. Six high-level LCROSS mission requirements are shown to be met by wide margins. We hope that these results facilitate further analyses of the LCROSS experiment data and follow-up observations of the impact region.

  9. Multi-event universal kriging (MEUK)

    NASA Astrophysics Data System (ADS)

    Tonkin, Matthew J.; Kennel, Jonathan; Huber, William; Lambie, John M.

    2016-01-01

    Multi-event universal kriging (MEUK) is a method of interpolation that creates a series of maps, each corresponding to a specific sampling "event", which exhibit spatial relationships that persist over time. MEUK is computed using minimum-variance unbiased linear prediction from data obtained via a sequence of events. MEUK assumes multi-event data can be described by a sum of (a) spatial trends that vary over time, (b) spatial trends that are invariant over time, and (c) spatially and temporally stationary correlation among the residuals from the combination of these trends. The fundamental advance made by MEUK versus traditional universal kriging (UK) lies with the generalized least squares (GLS) model and the multi-event capability it facilitates, rather than in the geostatistics, although it is shown how use of MEUK can greatly reduce predictive variances versus UK. For expediency, MEUK assumes a spatial covariance that does not change over time (although it need not), which is an advantage over space-time methods that employ a full space-time covariance function. MEUK can be implemented with large multi-event datasets, as demonstrated by application to a large water level dataset. Often, MEUK enables the stable solution of multiple events for computational effort similar to that of a single event. MEUK provides an efficient basis for developing "wheel-and-axle" monitoring strategies [32] that combine frequently sampled locations, used to monitor changes over time, with many more locations sampled periodically to provide synoptic depictions. MEUK can aid in the identification of the core monitoring locations, allowing for reduced sampling frequency elsewhere. Although MEUK can incorporate longitudinal variograms as in other space-time methods, doing so reduces the computational advantages of MEUK.
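
    The GLS trend estimation underlying universal kriging's drift model can be sketched as follows. This is a hypothetical minimal example (the function and toy data are not from the MEUK implementation); MEUK's advance is to stack multiple events into one such system that shares the time-invariant trend terms.

    ```python
    import numpy as np

    # Generalized least squares (GLS) drift fit: the minimum-variance
    # unbiased estimator at the core of universal kriging's trend model.
    # X: drift basis functions at data locations; C: residual covariance.
    def gls_fit(X, y, C):
        """Return the GLS drift coefficients (X^T C^-1 X)^-1 X^T C^-1 y."""
        Ci = np.linalg.inv(C)
        return np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

    # Toy example: linear trend y = 2 + 3x with iid residuals (C = I),
    # where GLS reduces to ordinary least squares.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    X = np.column_stack([np.ones_like(x), x])
    y = 2.0 + 3.0 * x
    beta = gls_fit(X, y, np.eye(4))
    print(np.round(beta, 6))  # → [2. 3.]
    ```

    With a non-diagonal C describing correlated residuals, the same call gives the weighting that makes the multi-event solution minimum-variance.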

  10. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments and their interior points are derived using the contour equation. The locations of contour boundaries and the interior points on contour surfaces are as accurate as the interpolation results obtained by hexahedral elements and thus there are no discrepancies between the analysis and visualization results.
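
    The consistency property described above can be illustrated with a sketch (hypothetical code, not the authors' algorithm): a contour point found by inverting the element's own trilinear interpolant lies exactly on the interpolated finite element field.

    ```python
    import numpy as np

    # Trilinear interpolation inside a unit hexahedral element, the same
    # interpolant the contouring uses, so contour points match the FE field.
    def trilinear(values, xi, eta, zeta):
        """values: field at the 8 corners, ordered (0,0,0),(1,0,0),(0,1,0),
        (1,1,0),(0,0,1),(1,0,1),(0,1,1),(1,1,1); (xi,eta,zeta) in [0,1]^3."""
        v = values
        return ((1-xi)*(1-eta)*(1-zeta)*v[0] + xi*(1-eta)*(1-zeta)*v[1] +
                (1-xi)*eta*(1-zeta)*v[2] + xi*eta*(1-zeta)*v[3] +
                (1-xi)*(1-eta)*zeta*v[4] + xi*(1-eta)*zeta*v[5] +
                (1-xi)*eta*zeta*v[6] + xi*eta*zeta*v[7])

    # Where does contour level 0.5 cross the edge from corner (0,0,0)
    # (value 0) to corner (1,0,0) (value 1)? Linear inversion along the edge:
    vals = [0.0, 1.0, 0.2, 0.8, 0.1, 0.9, 0.3, 0.7]
    xi_cross = (0.5 - vals[0]) / (vals[1] - vals[0])
    print(xi_cross, trilinear(vals, xi_cross, 0.0, 0.0))  # → 0.5 0.5
    ```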

  12. Dialogue on private events

    PubMed Central

    Palmer, David C.; Eshleman, John; Brandon, Paul; Layng, T. V. Joe; McDonough, Christopher; Michael, Jack; Schoneberger, Ted; Stemmer, Nathan; Weitzman, Ray; Normand, Matthew

    2004-01-01

    In the fall of 2003, the authors corresponded on the topic of private events on the listserv of the Verbal Behavior Special Interest Group. Extracts from that correspondence raised questions about the role of response amplitude in determining units of analysis, whether private events can be investigated directly, and whether covert behavior differs from other behavior except in amplitude. Most participants took a cautious stance, noting not only conceptual pitfalls and empirical difficulties in the study of private events, but doubting the value of interpretive exercises about them. Others argued that despite such obstacles, in domains where experimental analyses cannot be done, interpretation of private events in the light of laboratory principles is the best that science can offer. One participant suggested that the notion that private events can be behavioral in nature be abandoned entirely; as an alternative, the phenomena should be reinterpreted only as physiological events. PMID:22477293

  13. The effect of post-identification feedback, delay, and suspicion on accurate eyewitnesses.

    PubMed

    Quinlivan, Deah S; Neuschatz, Jeffrey S; Douglass, Amy Bradfield; Wells, Gary L; Wetmore, Stacy A

    2012-06-01

    We examined whether post-identification feedback and suspicion affect accurate eyewitnesses. Participants viewed a video event and then made a lineup decision from a target-present photo lineup. Regardless of accuracy, the experimenter either informed participants that they had made a correct lineup decision or gave no information regarding their lineup decision. Immediately following the lineup decision or after a 1-week delay, a second experimenter gave some of the participants who received confirming feedback reason to be suspicious of the confirming feedback. Immediately after the confirming feedback, accurate witnesses did not demonstrate certainty inflation. However, after a delay, accurate witnesses did demonstrate the certainty inflation typically associated with confirming feedback. The suspicion manipulation only affected participants' certainty when the confirming feedback created certainty inflation. The results lend support to the accessibility interpretation of the post-identification feedback effect and the erasure interpretation of the suspicion effect. PMID:22667810

  14. Lg-Wave Cross Correlation and Epicentral Double-Difference Relative Locations in China

    NASA Astrophysics Data System (ADS)

    Schaff, D. P.; Richards, P. G.; Slinkard, M.; Heck, S.; Young, C. J.

    2014-12-01

    In prior work we presented high-resolution locations of 28 events in the 1999 Xiuyan sequence in China using only cross-correlation measurements of Lg waves and a double-difference technique solving for the epicenter. Only five regional stations 500 to 1000 km away were used. The resulting locations revealed a 4 km stretch of fault with 95% location errors of ~150 m and 7 ms residuals from internal consistency. Based on this success, we now attempt to extend this work on a broader scale to all of China to see if similar results can be obtained for a significant fraction of the seismicity. We first examine in detail three other clusters. The first has 9 events and relocates in a 1 km box with location errors on the order of tens of meters. The second has 10 events and relocates in a 10 km box with location errors on the order of hundreds of meters. The third cluster relocates in a 25 km box and has ~1 km location errors. From this we see that the location errors increase with increasing event separation. Using a repeating-event catalog, we have determined that the Annual Bulletin of Chinese Earthquakes has average errors of 16 km. Therefore we are able to demonstrate one to two orders of magnitude improvement in the location errors as compared to the bulletin. We then apply a pair-wise location procedure to the repeating-event catalog we identified for China in earlier work. 13% of the events are classified as repeating events (2,379 out of 17,898). Of these there are 1,123 events (1,710 pairs and 6% of the catalog) that have two or more stations from which we can estimate the locations by the same procedure. 81% of these events are demonstrated to have locations separated by less than 1 km from another event. There are 677 events in the repeating-event catalog that have observations at four or more stations, which enables the location errors for these high-quality events to be estimated at about 200 m.
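
    The basic measurement behind such double-difference work is a differential arrival time obtained by cross-correlating similar waveforms. A minimal synthetic sketch (illustrative code, not the authors' processing):

    ```python
    import numpy as np

    # Measure the delay of waveform b relative to waveform a by finding the
    # peak of their cross-correlation; this delay (to a small fraction of the
    # dominant period) is the input to double-difference relative location.
    def cc_lag(a, b, dt):
        """Lag (seconds) by which b is delayed relative to a."""
        cc = np.correlate(b, a, mode="full")
        return (np.argmax(cc) - (len(a) - 1)) * dt

    dt = 0.01                      # 100 Hz sampling
    t = np.arange(0, 4, dt)
    wavelet = np.exp(-((t - 2.0) ** 2) / 0.01) * np.sin(40 * t)
    delayed = np.roll(wavelet, 7)  # shift by 7 samples = 0.07 s
    print(round(cc_lag(wavelet, delayed, dt), 4))  # → 0.07
    ```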

  15. Record-breaking events during the compressive failure of porous materials.

    PubMed

    Pál, Gergő; Raischel, Frank; Lennartz-Sassinek, Sabine; Kun, Ferenc; Main, Ian G

    2016-03-01

    An accurate understanding of the interplay between random and deterministic processes in generating extreme events is of critical importance in many fields, from forecasting extreme meteorological events to the catastrophic failure of materials and in the Earth. Here we investigate the statistics of record-breaking events in the time series of crackling noise generated by local rupture events during the compressive failure of porous materials. The events are generated by computer simulations of the uniaxial compression of cylindrical samples in a discrete element model of sedimentary rocks that closely resemble those of real experiments. The number of records grows initially as a decelerating power law of the number of events, followed by an acceleration immediately prior to failure. The distributions of the size and lifetime of records are power laws with relatively low exponents. We demonstrate the existence of a characteristic record rank k*, which separates the two regimes of the time evolution. Up to this rank, deceleration occurs due to the effect of random disorder. Record breaking then accelerates towards macroscopic failure, when physical interactions leading to spatial and temporal correlations dominate the location and timing of local ruptures. The size distribution of records of different ranks has a universal form independent of the record rank. Subsequences of events that occur between consecutive records are characterized by a power-law size distribution, with an exponent which decreases as failure is approached. High-rank records are preceded by smaller events of increasing size and waiting time between consecutive events, and they are followed by a relaxation process. As a reference, surrogate time series are generated by reshuffling the event times. The record statistics of the uncorrelated surrogates agree very well with the corresponding predictions for independent identically distributed random variables, which confirms that temporal and spatial
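
    The record statistic used here, and the i.i.d. reference behavior the reshuffled surrogates should follow, can be sketched in a few lines (illustrative code, not the paper's simulation):

    ```python
    import random

    # An event is a "record" if its size exceeds every earlier event's size.
    def count_records(sizes):
        records, best = 0, float("-inf")
        for s in sizes:
            if s > best:
                records += 1
                best = s
        return records

    print(count_records([3, 1, 4, 1, 5, 9, 2, 6]))  # → 4  (records 3, 4, 5, 9)

    # For i.i.d. (uncorrelated) events the expected record count grows only
    # logarithmically: E[records] = 1 + 1/2 + ... + 1/n (the harmonic number).
    n = 1000
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    trials = [count_records([random.random() for _ in range(n)])
              for _ in range(200)]
    print(round(harmonic, 2), round(sum(trials) / len(trials), 2))
    ```

    Correlated rupture series deviate from this harmonic-number growth, which is what the comparison to surrogates detects.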

  16. URLs: Uniform Resource Locators or Unreliable Resource Locators.

    ERIC Educational Resources Information Center

    Germain, Carol Anne

    2000-01-01

    This research studies the accessibility of 64 URLs (Uniform Resource Locators) cited in 31 academic journal articles. Discusses the role of citations as scholarly links and examines results of this longitudinal study that found an increasing decline in the availability of URL citations to World Wide Web sites. (Contains 22 references.) (Author/LRW)

  17. Codes for sound-source location in nontonotopic auditory cortex.

    PubMed

    Middlebrooks, J C; Xu, L; Eddins, A C; Green, D M

    1998-08-01

    We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360 degrees of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of alpha-chloralose-anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20 degrees intervals of azimuth throughout 360 degrees of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180 degrees of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources. Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. 
Information carried in the responses of single units permitted reasonable estimates of sound

  18. Concurrent and Accurate Short Read Mapping on Multicore Processors.

    PubMed

    Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S

    2015-01-01

    We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA on RNA reads of 100-400 nucleotides, which excels in execution time and sensitivity compared with state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR. PMID:26451814
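
    The suffix-array fast path can be sketched as follows. This is a toy illustration of the technique, not HPG Aligner's implementation: ambiguous or mismatching reads would fall through to Smith-Waterman in the real tool, and production suffix-array construction is linear-time rather than the quadratic sort used here.

    ```python
    # Exact-match read mapping with a suffix array: binary search for the
    # block of sorted suffixes that begin with the read.
    def build_suffix_array(ref):
        # O(n^2 log n) toy construction; real aligners use linear-time builds.
        return sorted(range(len(ref)), key=lambda i: ref[i:])

    def map_read(ref, sa, read):
        """Return all reference positions where `read` occurs exactly."""
        lo, hi = 0, len(sa)
        while lo < hi:                       # lower bound of matching block
            mid = (lo + hi) // 2
            if ref[sa[mid]:sa[mid] + len(read)] < read:
                lo = mid + 1
            else:
                hi = mid
        start, hi = lo, len(sa)
        while lo < hi:                       # upper bound of matching block
            mid = (lo + hi) // 2
            if ref[sa[mid]:sa[mid] + len(read)] <= read:
                lo = mid + 1
            else:
                hi = mid
        return sorted(sa[i] for i in range(start, lo))

    ref = "ACGTACGTGA"
    sa = build_suffix_array(ref)
    print(map_read(ref, sa, "ACGT"))  # → [0, 4]
    ```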

  19. Accurate Reading with Sequential Presentation of Single Letters

    PubMed Central

    Price, Nicholas S. C.; Edwards, Gemma L.

    2012-01-01

    Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated if reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 wpm and accuracies of over 90% with the single letter reading (SLR) method and naive participants achieved average reading rates over 30 wpm with greater than 90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses. PMID:23115548

  20. Effects of heterogeneity on earthquake location at ISC

    NASA Astrophysics Data System (ADS)

    Adams, R. D.

    1992-12-01

    Earthquake location at the International Seismological Centre is carried out by routine least-squares analysis using Jeffreys-Bullen travel times. It is impossible to examine every earthquake in detail, but when obvious discrepancies in location become apparent, adjustments can be made by analysts, usually in phase identification or the restraint of depth. Such discrepancies often result from inappropriateness of the Jeffreys-Bullen model. The effect is most apparent in subduction zones, where it is often difficult to reconcile local and teleseismic observations, and differences from the standard model can result in substantial mislocations. Large events, located by steeply descending teleseismic phases, may be only slightly misplaced, with large residuals at close stations giving a true indication of velocity anomalies. Small events, however, are often significantly misplaced, although giving small residuals at a few close stations. These apparently well located events give compensating misinformation about velocities and location. In other areas, especially mid-oceanic ridges, difficulties in depth determination are likely to be related to deviations from a laterally homogeneous velocity model.
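
    The routine least-squares (Geiger) location step referred to above can be sketched as follows. This is an illustrative example under simplifying assumptions: a uniform-velocity 2-D epicenter problem stands in for the Jeffreys-Bullen travel-time tables, and the heterogeneity discussed in the abstract would enter as unmodeled residuals.

    ```python
    import numpy as np

    V = 6.0  # km/s, assumed uniform velocity

    # Geiger's method: linearize travel times around a trial epicenter and
    # iterate a least-squares update of (x, y, origin time).
    def locate(stations, t_obs, guess=(0.0, 0.0, 0.0), iters=10):
        x, y, t0 = guess
        for _ in range(iters):
            d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            res = t_obs - (t0 + d / V)
            # partial derivatives of predicted arrival time w.r.t. x, y, t0
            G = np.column_stack([-(stations[:, 0] - x) / (d * V),
                                 -(stations[:, 1] - y) / (d * V),
                                 np.ones(len(d))])
            dx, dy, dt0 = np.linalg.lstsq(G, res, rcond=None)[0]
            x, y, t0 = x + dx, y + dy, t0 + dt0
        return x, y, t0

    # Synthetic noise-free test: event at (10, 20) km, origin time 5 s.
    sta = np.array([[50.0, 0.0], [0.0, 60.0], [-40.0, -30.0], [30.0, 40.0]])
    t_obs = 5.0 + np.hypot(sta[:, 0] - 10.0, sta[:, 1] - 20.0) / V
    x, y, t0 = locate(sta, t_obs)
    print(round(x, 3), round(y, 3), round(t0, 3))  # → 10.0 20.0 5.0
    ```

    With a wrong velocity model, the same inversion still converges to small residuals at a biased location, which is exactly the trap the abstract describes for small events recorded at few stations.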

  1. Improved Epicentral Locations for Earthquakes Near Explorer Ridge

    NASA Astrophysics Data System (ADS)

    Clemens-Sewall, D.; Trehu, A. M.

    2014-12-01

    The tectonics and structure of the Explorer region, which is the northern boundary of the subducting Juan de Fuca plate, help to inform our assessments of the seismic hazard in the Pacific Northwest. Our understanding of this tectonically complex area is largely based on morphology of the seafloor from swath bathymetric data, potential field anomalies, and the calculated locations of contemporary earthquakes in the region. However, the Navy Sound Surveillance System hydrophone network, the Canadian National Seismic Network, the U.S. Advanced National Seismic System, and the Harvard Centroid Moment Tensor Catalog report significantly different epicentral locations for swarms of earthquakes near Explorer Ridge in August and October 2008. We relocated the larger (M>5) earthquakes in the August 2008 swarm using data from both U.S. and Canadian networks to improve azimuthal coverage. Absolute locations were determined for the largest events in the swarm, and the smaller events were relocated relative to the largest using a double difference method. To better understand why the locations from land-based seismic networks differ from those computed from the hydrophone arrays, we also examine T-phases from regional events recorded on Ocean Bottom Seismometers from the COLZA and Cascadia Initiative experiments and evaluate the potential for using T-phases to improve the epicentral locations of submarine earthquakes in the Pacific Northwest region.
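
    The double-difference relocation step can be sketched with a toy pair of events. This is a hypothetical illustration, not the study's code: straight rays in a uniform medium are assumed, event 1 is held fixed, and the origin-time difference is taken as zero for brevity.

    ```python
    import numpy as np

    V = 4.0  # km/s, assumed uniform velocity

    def tt(ev, sta):
        """Straight-ray travel times from event `ev` to stations `sta`."""
        return np.hypot(sta[:, 0] - ev[0], sta[:, 1] - ev[1]) / V

    # Shift event 2 until predicted differential times t2 - t1 match the
    # observed ones (event 1 fixed; a Gauss-Newton iteration).
    def relocate_pair(sta, dt_obs, ev1, guess, iters=8):
        ev2 = np.array(guess, dtype=float)
        for _ in range(iters):
            dx = sta[:, 0] - ev2[0]
            dy = sta[:, 1] - ev2[1]
            d2 = np.hypot(dx, dy)
            res = dt_obs - (d2 / V - tt(ev1, sta))
            G = np.column_stack([-dx / (d2 * V), -dy / (d2 * V)])
            ev2 += np.linalg.lstsq(G, res, rcond=None)[0]
        return ev2

    sta = np.array([[100.0, 0.0], [0.0, 120.0], [-80.0, -90.0]])
    ev1 = np.array([0.0, 0.0])
    dt_obs = tt(np.array([3.0, -2.0]), sta) - tt(ev1, sta)  # synthetic data
    print(np.round(relocate_pair(sta, dt_obs, ev1, ev1.copy()), 3))
    ```

    Because only differential times enter, common path effects outside the source region cancel, which is why relative locations can be far more precise than absolute ones.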

  2. Wireless Damage Location Sensing System

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E. (Inventor); Taylor, Bryant Douglas (Inventor)

    2012-01-01

    A wireless damage location sensing system uses a geometric-patterned wireless sensor that resonates in the presence of a time-varying magnetic field to generate a harmonic response that will experience a change when the sensor experiences a change in its geometric pattern. The sensing system also includes a magnetic field response recorder for wirelessly transmitting the time-varying magnetic field and for wirelessly detecting the harmonic response. The sensing system compares the actual harmonic response to a plurality of predetermined harmonic responses. Each predetermined harmonic response is associated with a severing of the sensor at a corresponding known location thereof so that a match between the actual harmonic response and one of the predetermined harmonic responses defines the known location of the severing that is associated therewith.

  3. Improved Earthquake Location in the area of N. Euboean Gulf

    NASA Astrophysics Data System (ADS)

    Mouzakiotis, A. S.; Karastathis, V. K.

    2012-12-01

    Considerably improved hypocentral locations of the seismic events recorded during the period from 2009 to 2010 by the Hellenic Unified Seismographic Network (HUSN), have been obtained for the area of North Euboean Gulf, after implementation of a 3D non-linear location algorithm and a local 3D velocity model for both P and S-waves. The velocity model has been produced in previous studies using local earthquake tomography techniques (1D minimum velocity model and simultaneous 3D inversion techniques). In total, 280 events have been recorded in the area covered by the 3D velocity model, by at least 5 local stations. The 223 out of these were well located by the local stations having the azimuthal gap lower than 180o. Within the area covered by the 3D velocity model, there are 7 HUSN stations and two more from other networks. To optimize the hypocentral parameters estimation of the selected events, we used probabilistic non-linear earthquake location method, utilizing the 3D velocity model of the area. The program used produces a misfit function, "optimal" hypocenters and an estimate of the posterior probability density function (PDF) for the spatial hypocenter location. The calculated travel-times are obtained using a 3D version of the Eikonal finite difference scheme and the complete location PDF is calculated by the EDT (equal differential time) function. The results were compared with the ones obtained by the implementation of other 1D velocity models such as a) the 1D velocity model used for the daily earthquake data analysis by NOA and b) the 1D minimum velocity model. In spite of the fact that the local 3D velocity model was based on a completely different dataset than the present, it produced considerably improved event locations with significantly smaller location errors than both the 1D models. This shows the validity of the 3D velocity model. 
Although the 1D minimum model produced better locations than the NOA model, it was not as effective as the 3D model

  4. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET

    PubMed Central

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-01-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system. PMID:26543243
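
    The TDC linearity calibration mentioned above is commonly done with a code-density test; a minimal sketch of that idea follows (illustrative code, not the authors' firmware). The assumption is that input times are uniformly distributed over the TDC range, so each bin's hit count is proportional to its true width.

    ```python
    import numpy as np

    # Code-density method: estimate differential (DNL) and integral (INL)
    # nonlinearity of a TDC from a histogram of uniformly distributed hits.
    def tdc_nonlinearity(hist):
        hist = np.asarray(hist, dtype=float)
        widths = hist / hist.mean()   # true bin width in ideal-LSB units
        dnl = widths - 1.0            # per-bin width error
        inl = np.cumsum(dnl)          # accumulated bin-edge error
        return dnl, inl

    # A 4-bin toy TDC: bins 0 and 2 are 20% wide, bins 1 and 3 are 20% narrow.
    dnl, inl = tdc_nonlinearity([1200, 800, 1200, 800])
    print(dnl)  # → [ 0.2 -0.2  0.2 -0.2]
    print(inl)
    ```

    The resulting INL curve is what gets subtracted from raw TDC codes before the per-detector time biases are estimated.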

  5. Hydrodynamic Model of Inundation Event at Confluence of Ohio and Mississippi Rivers

    NASA Astrophysics Data System (ADS)

    Kaplan, B. A.; Luke, A.; Shlaes, M.; Lant, J.; Alsdorf, D. E.

    2012-12-01

    The goal of this project is to produce an accurate 2-D hydrodynamic model of an inundation event that occurred at the confluence of the Ohio and Mississippi Rivers. The inundation occurred in April and May 2011, with the city of interest being Cairo, Illinois. In order to relieve flooding within Cairo, a levee was detonated by the Army Corps of Engineers. Cairo is a small city of 2,800 people and is prone to flooding due to its proximity to the confluence of the Ohio and Mississippi Rivers. Cairo is also the only city in the U.S. completely surrounded by levees. The advantage of a 2-D modeling approach compared to a 1-D approach is that the floodplain geomorphological processes are more accurately represented. Understanding non-channelized flow that occurs during inundation events is a subject of growing interest and is being addressed in other projects such as the NASA SWOT mission scheduled for launch in 2019. The 2-D model utilized in this study is LISFLOOD-FP, a 2-D finite-difference flood inundation model that has been proven to accurately simulate flood inundation for urban, coastal, and fluvial environments. LISFLOOD-FP operates using known hydraulic principles along with continuity and momentum equations to describe the flow of water through channels and floodplains. The digital elevation model used to represent the area's topography was obtained from the USGS National Elevation Dataset, and our model uses input data from USGS stream gauges located upstream of the confluence of the Ohio and Mississippi Rivers. The gauging station located in Cairo will be used for model validation. Currently, the steady-state conditions of the Ohio and the Mississippi Rivers are being modeled. In situ cross-sectional data are being used to represent the channel. We have found that using averages of the cross-sectional data does not accurately represent the river channels, so future model runs will incorporate interpolation between measurements. 
Once

  6. An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft

    NASA Technical Reports Server (NTRS)

    Bauer, Jeffrey E.; Teets, Edward

    1997-01-01

    An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
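
    The idea described above, drifting a falling vehicle through a layered wind field, can be sketched in a few lines. This is a hypothetical illustration of the technique, not NASA Dryden's code; the layer structure and a constant descent rate are simplifying assumptions.

    ```python
    # Predict a ground impact point from altitude, descent rate, and a
    # wind profile by accumulating the wind drift in each layer of the fall.
    def predict_impact(pos, altitude, descent_rate, wind_layers):
        """wind_layers: (layer_thickness_m, wind_east, wind_north) tuples in
        m/s, ordered from the vehicle's altitude down to the ground."""
        x, y = pos
        remaining = altitude
        for thickness, we, wn in wind_layers:
            dz = min(thickness, remaining)
            t = dz / descent_rate          # time spent falling through layer
            x, y = x + we * t, y + wn * t  # drift with the layer's wind
            remaining -= dz
            if remaining <= 0:
                break
        return x, y

    # 3000 m altitude at 10 m/s descent: 200 s in the upper layer's 5 m/s
    # easterly wind, then 100 s in a 2 m/s southerly wind near the ground.
    print(predict_impact((0.0, 0.0), 3000.0, 10.0,
                         [(2000.0, 5.0, 0.0), (1000.0, 0.0, -2.0)]))
    # → (1000.0, -200.0)
    ```

    Rerunning this on every telemetry update, with the current position and winds, yields the continuously refreshed footprint shown on the range safety display.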

  7. AESOP: Adaptive Event detection SOftware using Programming by example

    NASA Astrophysics Data System (ADS)

    Thangali, Ashwin; Prasad, Harsha; Kethamakka, Sai; Demirdjian, David; Checka, Neal

    2015-05-01

    This paper presents AESOP, a software tool for automatic event detection in video. AESOP employs a supervised learning approach for constructing event models, given training examples from different event classes. A trajectory-based formulation is used for modeling events, with an aim towards incorporating invariance to changes in the camera location and orientation parameters. The proposed formulation is designed to accommodate events that involve interactions between two or more entities over an extended period of time. AESOP's event models are formulated as HMMs to improve the event detection algorithm's robustness to noise in input data and to achieve computationally efficient algorithms for event model training and event detection. AESOP's performance is demonstrated on a wide range of different scenarios, including stationary camera surveillance and aerial video footage captured in land and maritime environments.
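
    The HMM scoring such event models rely on can be sketched with the textbook scaled forward algorithm (a toy illustration, not AESOP's code): each event class gets its own HMM, and a trajectory is assigned to the class whose model yields the highest log-likelihood.

    ```python
    import numpy as np

    # Scaled forward algorithm: log P(obs | model) for a discrete-output HMM.
    def forward_loglik(obs, start, trans, emit):
        alpha = start * emit[:, obs[0]]
        ll = np.log(alpha.sum())
        alpha = alpha / alpha.sum()        # rescale to avoid underflow
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
            ll += np.log(alpha.sum())
            alpha = alpha / alpha.sum()
        return ll

    # Two hypothetical event classes over a binary observation alphabet:
    # a "loitering" model favoring symbol 0, a "transit" model favoring 1.
    start = np.array([0.5, 0.5])
    trans = np.array([[0.9, 0.1], [0.1, 0.9]])
    loiter = np.array([[0.9, 0.1], [0.5, 0.5]])
    transit = np.array([[0.1, 0.9], [0.5, 0.5]])
    track = [0, 0, 0, 1, 0, 0]
    print(forward_loglik(track, start, trans, loiter) >
          forward_loglik(track, start, trans, transit))  # → True
    ```

    In AESOP the observations would be quantized trajectory features rather than raw symbols, but the classification step is the same likelihood comparison.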

  8. Backwater controls of avulsion location on deltas

    NASA Astrophysics Data System (ADS)

    Chatanantavet, Phairot; Lamb, Michael P.; Nittrouer, Jeffrey A.

    2012-01-01

    River delta complexes are built in part through repeated river-channel avulsions, which often occur about a persistent spatial node creating delta lobes that form a fan-like morphology. The controls on avulsion location are poorly understood, but predicting it is essential for wetland restoration, hazard mitigation, reservoir characterization, and delta morphodynamics. Following previous work, we show that the upstream distance from the river mouth where avulsions occur is coincident with the backwater length, i.e., the upstream extent of river flow that is affected by hydrodynamic processes in the receiving basin. To explain this observation we formulate a fluvial morphodynamic model that is coupled to an offshore spreading river plume and subject it to a range of river discharges. Results show that avulsion is less likely in the downstream portion of the backwater zone because, during high-flow events, the water surface is drawn down near the river mouth to match that of the offshore plume, resulting in river-bed scour and a reduced likelihood of overbank flow. Furthermore, during low-discharge events, flow deceleration near the upstream extent of backwater causes enhanced deposition locally and a reduced channel-fill timescale there. Both mechanisms favor preferential avulsion in the upstream part of the backwater zone. These dynamics are fundamentally due to variable river discharges and a coupled offshore river plume, with implications for predicting delta response to climate and sea level change, and fluvio-deltaic stratigraphy.

  9. An event database for rotational seismology

    NASA Astrophysics Data System (ADS)

    Salvermoser, Johannes; Hadziioannou, Celine; Hable, Sarah; Chow, Bryant; Krischer, Lion; Wassermann, Joachim; Igel, Heiner

    2016-04-01

    The ring laser sensor (G-ring) located at Wettzell, Germany, has routinely observed earthquake-induced rotational ground motions around a vertical axis since its installation in 2003. Here we present results from a recently installed event database, which is the first to provide ring laser event data in an open-access format. Based on the GCMT event catalogue and some search criteria, seismograms from the ring laser and the collocated broadband seismometer are extracted and processed. The ObsPy-based processing scheme generates plots showing waveform fits between rotation rate and transverse acceleration and extracts characteristic wavefield parameters such as peak ground motions, noise levels, Love wave phase velocities, and waveform coherence. For each event, these parameters are stored in a text file (JSON dictionary) which is easily readable and accessible on the website. The database contains >10,000 events (Mw>4.5) starting in 2007. It is updated daily and therefore provides recent events at a time lag of at most 24 hours. The user interface allows users to filter events by epoch, magnitude, and source area, whereupon the events are displayed on a zoomable world map. We investigate how well the rotational motions are compatible with the expectations from the surface wave magnitude scale. In addition, the website offers some Python source code examples for downloading and processing the openly accessible waveforms.
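
    The Love wave phase velocity estimate mentioned above follows from the standard plane-wave relation: transverse acceleration equals twice the vertical rotation rate times the horizontal phase velocity. A minimal synthetic sketch (hypothetical code, not the database's processing, which works on windowed real waveforms):

    ```python
    import numpy as np

    # Amplitude-ratio estimate of Love wave phase velocity:
    # c = a_transverse / (2 * rotation_rate) for a plane Love wave.
    def love_phase_velocity(acc_t, rot_rate):
        return np.max(np.abs(acc_t)) / (2.0 * np.max(np.abs(rot_rate)))

    # Synthetic harmonic Love wave with c = 4000 m/s at 0.05 Hz.
    t = np.linspace(0.0, 100.0, 5001)
    c = 4000.0
    rot = 1e-8 * np.sin(2 * np.pi * 0.05 * t)            # rad/s
    acc = 2.0 * c * 1e-8 * np.sin(2 * np.pi * 0.05 * t)  # m/s^2
    print(round(love_phase_velocity(acc, rot), 6))  # → 4000.0
    ```

    The waveform-fit plots the database generates are essentially a visual check of this proportionality between the two recorded channels.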

  10. Features, Events, and Processes: Disruptive Events

    SciTech Connect

    J. King

    2004-03-31

    The primary purpose of this analysis is to evaluate seismic- and igneous-related features, events, and processes (FEPs). These FEPs represent areas of natural system processes that have the potential to produce disruptive events (DE) that could impact repository performance and are related to the geologic processes of tectonism, structural deformation, seismicity, and igneous activity. Collectively, they are referred to as the DE FEPs. This evaluation determines which of the DE FEPs are excluded from modeling used to support the total system performance assessment for license application (TSPA-LA). The evaluation is based on the data and results presented in supporting analysis reports, model reports, technical information, or corroborative documents that are cited in the individual FEP discussions in Section 6.2 of this analysis report.

  11. Event-by-Event Fission with FREYA

    SciTech Connect

    Randrup, J; Vogt, R

    2010-11-09

    The recently developed code FREYA (Fission Reaction Event Yield Algorithm) generates large samples of complete fission events, consisting of two receding product nuclei as well as a number of neutrons and photons, all with complete kinematic information. Thus it is possible to calculate arbitrary correlation observables whose behavior may provide unique insight into the fission process. The presentation first discusses the present status of FREYA, which has now been extended up to energies where pre-equilibrium emission becomes significant and one or more neutrons may be emitted prior to fission. Concentrating on {sup 239}Pu(n,f), we discuss the neutron multiplicity correlations, the dependence of the neutron energy spectrum on the neutron multiplicity, and the relationship between the fragment kinetic energy and the number of neutrons and their energies. We also briefly suggest novel fission observables that could be measured with modern detectors.

  12. An automatic procedure for high-resolution earthquake locations: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Latorre, Diana; Piccinini, Davide

    2014-05-01

    The characterization of the geometry, kinematics and rheology of fault zones from seismological data depends on our capability to accurately locate the largest possible number of low-magnitude seismic events. To this aim, we have been working for the past three years to develop an advanced modular earthquake location procedure able to automatically retrieve high-resolution earthquake catalogues directly from continuous waveform data. We use seismograms recorded at about 60 seismic stations located both at the surface and at depth. The network covers an area of about 80x60 km with a mean inter-station distance of 6 km. These stations are part of a Near Fault Observatory (TABOO; http://taboo.rm.ingv.it/), consisting of multi-sensor stations (seismic, geodetic, geochemical and electromagnetic). This permanent scientific infrastructure, managed by the INGV, is devoted to studying the earthquake preparatory phase and the fast/slow (i.e., seismic/aseismic) deformation processes active along the Alto Tiberina fault (ATF) in the northern Apennines (Italy). The ATF is potentially one of the rare worldwide examples of an active low-angle (< 15°) normal fault accommodating crustal extension and characterized by a regular occurrence of micro-earthquakes. The modular procedure combines: i) a sensitive detection algorithm optimized to declare low-magnitude events; ii) an accurate picking procedure that provides consistently weighted P- and S-wave arrival times, P-wave first-motion polarities and the maximum waveform amplitude for local magnitude calculation; iii) both linearized iterative and non-linear global-search earthquake location algorithms to compute accurate absolute locations of single events in a 3D geological model (see Latorre et al., same session); iv) cross-correlation and double-difference location methods to compute high-resolution relative event locations. This procedure is now running off-line with a delay of one week relative to real time. We are now implementing this
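    Detection step (i) in procedures of this kind is commonly an energy-ratio (STA/LTA) trigger. The abstract does not specify TABOO's detector, so the following is only a generic sketch on a synthetic trace, with illustrative window lengths and threshold:

    ```python
    def sta_lta(trace, nsta, nlta):
        """Short-term-average / long-term-average ratio at each sample
        (zero where the long-term window is incomplete)."""
        ratios = [0.0] * len(trace)
        for i in range(nlta, len(trace)):
            sta = sum(abs(x) for x in trace[i - nsta:i]) / nsta
            lta = sum(abs(x) for x in trace[i - nlta:i]) / nlta
            ratios[i] = sta / lta if lta > 0 else 0.0
        return ratios

    # Synthetic trace: low-amplitude noise with an "event" at sample 300
    trace = [0.1 if i % 2 else -0.1 for i in range(500)]
    for i in range(300, 340):
        trace[i] = 2.0 if i % 2 else -2.0

    ratios = sta_lta(trace, nsta=10, nlta=100)
    # Declare a detection where the ratio first exceeds the threshold
    trigger = next(i for i, r in enumerate(ratios) if r > 3.0)
    ```

    On real data the windows and threshold must be tuned to the station noise level, which is why the procedure above pairs the detector with a separate, more careful picking stage.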

  13. Features, Events, and Processes: Disruptive Events

    SciTech Connect

    P. Sanchez

    2004-11-08

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of the disruptive events features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment for license application (TSPA-LA). A screening decision, either ''Included'' or ''Excluded,'' is given for each FEP, along with the technical basis for screening decisions. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.114 (d), (e), and (f) [DIRS 156605]. The FEPs addressed in this report deal with both seismic and igneous disruptive events, such as fault displacements through the repository and an igneous intrusion into the repository. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded). Previous versions of this report were developed to support the total system performance assessments (TSPA) for various prior repository designs. This revision addresses the repository design for the license application (LA).

  14. Location and identification of radioactive waste in Massachusetts Bay

    SciTech Connect

    Colton, D.P.; Louft, H.L.

    1993-12-31

    The accurate location and identification of hazardous waste materials dumped in the world's oceans are becoming an increasing concern. For years, the oceans have been viewed as a convenient and economical place to dispose of all types of waste. In all but a few cases, major dump sites have been closed leaving behind years of accumulated debris. The extent of past environmental damage, the possibility of continued environmental damage, and the possibility of hazardous substances reaching the human food chain need to be carefully investigated. This paper reports an attempt to accurately locate and identify the radioactive component of the waste material. The Department of Energy's Remote Sensing Laboratory (RSL), in support of the US Environmental Protection Agency (EPA), provided the precision navigation system and prototype underwater radiological monitoring equipment that were used during this project. The paper also describes the equipment used, presents the data obtained, and discusses future equipment development.

  15. Two-dimensional location and direction estimating method.

    PubMed

    Haga, Teruhiro; Tsukamoto, Sosuke; Hoshino, Hiroshi

    2008-01-01

    In this paper, a method for estimating both the position and the rotation angle of an object on a measurement stage is proposed. The system utilizes radio communication technology and the directivity of an antenna. As a prototype system, a measurement stage (a circle 240 mm in diameter) with 36 antennas placed at 10-degree intervals was developed. Two transmitter antennas are set at a right angle on the stage as the target object, and the position and rotation angle are estimated by measuring the radio communication efficiency of each of the 36 antennas. The experimental results revealed that even when the estimated location is not very accurate (about a 30 mm error), the rotation angle is accurately estimated (about a 2.33-degree error on average). The result suggests that the proposed method will be useful for estimating the location and the direction of an object. PMID:19162938

  16. Activating Event Knowledge

    ERIC Educational Resources Information Center

    Hare, Mary; Jones, Michael; Thomson, Caroline; Kelly, Sarah; McRae, Ken

    2009-01-01

    An increasing number of results in sentence and discourse processing demonstrate that comprehension relies on rich pragmatic knowledge about real-world events, and that incoming words incrementally activate such knowledge. If so, then even outside of any larger context, nouns should activate knowledge of the generalized events that they denote or…

  17. Events and Constructs

    ERIC Educational Resources Information Center

    Smith, Noel W.

    2007-01-01

    Psychology has largely ignored the distinction between constructs and events and what comprises a scientific construct, yet this distinction is basic to some of the major divisions of thought within the discipline. Several kinds of constructs are identified and compared with events, and improper use of constructs is noted of which the mind…

  18. Committed Sport Event Volunteers

    ERIC Educational Resources Information Center

    Han, Keunsu; Quarterman, Jerome; Strigas, Ethan; Ha, Jaehyun; Lee, Seungbum

    2013-01-01

    The purpose of this study was to investigate the relationships among selected demographic characteristics (income, education and age), motivation and commitment of volunteers at a sporting event. Three-hundred and five questionnaires were collected from volunteers in a marathon event and analyzed using structural equation modeling (SEM). Based on…

  19. Event generator overview

    SciTech Connect

    Pang, Y.

    1997-12-01

    Due to their ability to provide detailed and quantitative predictions, the event generators have become an important part of studying relativistic heavy ion physics and of designing future experiments. In this talk, the author will briefly summarize recent progress in developing event generators for the relativistic heavy ion collisions.

  20. Location estimation using a broadband electromagnetic induction array

    NASA Astrophysics Data System (ADS)

    Gurbuz, Ali C.; Scott, Waymond R., Jr.; McClellan, James H.

    2009-05-01

    A broadband quadrupole electromagnetic induction (EMI) array with one transmitter and three receiver coils was built for detecting buried metallic targets. In this paper, it is shown that the locations of multiple metallic targets, including their depth and cross-range position, can be estimated accurately with the EMI array using an orthogonal matching pursuit (OMP) approach. Conventional OMP approaches use measurement dictionaries generated for each possible target-space point, which results in huge dictionaries for the 3D location problem. This paper exploits the inherent shifting properties of the scanning system to reduce the size of the dictionary used in OMP and to lower the computational cost for a possibly real-time EMI location estimation system. The method was tested on both simulated and experimental data collected over metal spheres at different depths, and accurate location estimates were obtained. This method allows EMI to be used as a pre-screener and yields valuable location estimates that could be used by a multi-modal GPR or other sensor for enhanced operation.
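    The shift-exploiting idea can be illustrated with the greedy selection step of matching pursuit: rather than storing a dictionary atom for every candidate target position, a single template is correlated against the measurement at every shift. A simplified 1-D sketch under that assumption (the actual array processes multi-coil data in 3D):

    ```python
    def best_shift(signal, template):
        """Greedy first step of matching pursuit over a shifted dictionary:
        return the shift of `template` that best correlates with `signal`."""
        best, best_corr = 0, float("-inf")
        for shift in range(len(signal) - len(template) + 1):
            corr = sum(signal[shift + j] * template[j]
                       for j in range(len(template)))
            if corr > best_corr:
                best, best_corr = shift, corr
        return best

    template = [1.0, 2.0, 1.0]        # hypothetical target response
    signal = [0.0] * 20
    for j, t in enumerate(template):  # bury the response at position 7
        signal[7 + j] += t

    found = best_shift(signal, template)
    ```

    Repeating the step on the residual (signal minus the selected shifted template) recovers multiple targets, which is how OMP handles several buried objects.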

  1. Locating the kidneys on CT to guide nephrotomography.

    PubMed

    Chen, M Y; Gelfand, D W; Spangler, K; Dyer, R B; Zagoria, R J; Ott, D J

    1997-01-01

    Tomography of the kidneys is a routine procedure performed during intravenous urography. Precisely locating the kidneys, however, can be difficult. This article describes a study performed to determine a simple and accurate measurement for kidney location as a guide to obtaining initial nephrotomographic sections. The authors measured the distance from the midplane of the kidney to the posterior skin line on abdominal CT images in 26 patients. This distance averaged one-third the thickness of the abdominal region. The best depth for the nephrotomographic cut was found to be one-third the thickness of the abdomen plus the thickness of any table pad. PMID:9085416
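    The one-third rule reported above reduces to a single formula; a trivial sketch (the function name is mine):

    ```python
    def nephrotomography_depth(abdomen_thickness, pad_thickness=0.0):
        """Depth of the initial nephrotomographic section per the study's
        rule: one-third of the abdominal thickness plus the thickness of
        any table pad (same units throughout)."""
        return abdomen_thickness / 3.0 + pad_thickness

    # e.g. a 24 cm-thick abdomen on a 2 cm table pad puts the first cut at 10 cm
    depth = nephrotomography_depth(24, 2)
    ```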

  2. Locating influential nodes via dynamics-sensitive centrality.

    PubMed

    Liu, Jian-Guo; Lin, Jian-Hong; Guo, Qiang; Zhou, Tao

    2016-01-01

    With great theoretical and practical significance, locating influential nodes of complex networks is a promising issue. In this paper, we present a dynamics-sensitive (DS) centrality by integrating topological features and dynamical properties. The DS centrality can be directly applied in locating influential spreaders. According to the empirical results on four real networks for both susceptible-infected-recovered (SIR) and susceptible-infected (SI) spreading models, the DS centrality is more accurate than degree, k-shell index and eigenvector centrality. PMID:26905891
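    The spreading influence that such centralities are benchmarked against can be estimated directly by Monte Carlo SIR simulation. A hedged sketch on a toy graph (this is the benchmark quantity, not the DS centrality formula itself; parameters are illustrative):

    ```python
    import random

    def sir_influence(adj, seed_node, beta=0.5, trials=300, rng_seed=1):
        """Average final outbreak size of an SIR epidemic seeded at
        `seed_node` (nodes recover after one step), by Monte Carlo."""
        rng = random.Random(rng_seed)
        total = 0
        for _ in range(trials):
            infected, recovered = {seed_node}, set()
            while infected:
                new = set()
                for u in infected:
                    for v in adj[u]:
                        if (v not in infected and v not in recovered
                                and rng.random() < beta):
                            new.add(v)
                recovered |= infected
                infected = new - recovered
            total += len(recovered)
        return total / trials

    # Toy network: node 0 is a hub, node 5 is peripheral
    adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}
    ```

    Ranking nodes by this simulated influence gives the ground truth against which degree, k-shell and eigenvector centrality are compared in the paper.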

  4. Predictors of Rural Practice Location

    ERIC Educational Resources Information Center

    Kegel-Flom, Penelope

    1977-01-01

    Attitudes toward the urban environment and place of origin were found to be the best predictors of an optometrist's practice location. Findings of this study imply that optometry students most likely to enter rural practice can be objectively identified early in their training and that the predictive equation presented may be useful in the…

  5. Negative Geography: Locating Things Elsewhere.

    ERIC Educational Resources Information Center

    Stoddard, Robert H.

    The phenomenon of negative geography--the assertion that any location is better than the one selected--is discussed and ways in which this approach differs from traditional geography methodology are analyzed. Case studies of two citizens' groups which protested the relocation of a city mission and halfway house in their neighborhoods illustrate…

  6. How close can we approach the event horizon of the Kerr black hole from the detection of gravitational quasinormal modes?

    NASA Astrophysics Data System (ADS)

    Nakamura, Takashi; Nakano, Hiroyuki

    2016-04-01

    Using the Wentzel-Kramers-Brillouin method, we show that the peak location (r_peak) of the potential, which determines the quasinormal mode frequency of the Kerr black hole, obeys an accurate empirical relation as a function of the specific angular momentum a and the gravitational mass M. If the quasinormal mode with a/M ˜ 1 is observed by gravitational wave detectors, we can confirm the black-hole space-time around the event horizon, r_peak = r_+ + O(√(1-q)) with q = a/M, where r_+ is the event horizon radius. However, if the quasinormal mode is different from that of general relativity, we are forced to seek the true theory of gravity and/or face the existence of the naked singularity.

  8. Automatic location of disruption times in JET.

    PubMed

    Moreno, R; Vega, J; Murari, A

    2014-11-01

    The loss of stability and confinement in tokamak plasmas can induce critical events known as disruptions. Disruptions produce strong electromagnetic forces and thermal loads which can damage fundamental components of the devices. Determining the disruption time is extremely important for various disruption studies: theoretical models, physics-driven models, or disruption predictors. In JET, during the experimental campaigns with the JET-C (Carbon Fiber Composite) wall, a common criterion to determine the disruption time consisted of locating the time of the thermal quench. However, with the metallic ITER-like wall (JET-ILW), this criterion is usually not valid: several thermal quenches may occur prior to the current quench, after which the temperature recovers. Therefore, a new criterion has to be defined. A possibility is to use the start of the current quench as the disruption time. This work describes the implementation of an automatic data processing method to estimate the disruption time according to this new definition. This automatic determination both reduces the human effort needed to locate disruption times and standardizes the estimates (with the benefit of being less vulnerable to human error). PMID:25430239
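    With the disruption time redefined as the start of the current quench, a simple automatic estimate is the first sample after the current peak at which the plasma current has dropped appreciably. The paper's actual method is not specified here, so the following is only a crude sketch with an illustrative threshold:

    ```python
    def current_quench_start(times, current, drop_fraction=0.05):
        """Estimate the disruption time as the start of the current quench:
        the first time after the current peak at which the plasma current
        has fallen by `drop_fraction` of its peak value (illustrative)."""
        i_peak = current.index(max(current))
        threshold = current[i_peak] * (1.0 - drop_fraction)
        for k in range(i_peak + 1, len(current)):
            if current[k] < threshold:
                return times[k]
        return None

    # Synthetic flat-top plasma current followed by a quench
    times = list(range(10))
    current = [2.0, 2.0, 2.0, 2.0, 2.0, 1.5, 1.0, 0.5, 0.1, 0.0]
    t_disr = current_quench_start(times, current)
    ```

    A production method would also have to reject transient dips and noise spikes, which is part of what makes standardizing the estimate worthwhile.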

  9. Source Identification and Location Techniques

    NASA Technical Reports Server (NTRS)

    Weir, Donald; Bridges, James; Agboola, Femi; Dougherty, Robert

    2001-01-01

    Mr. Weir presented source location results obtained from an engine test as part of the Engine Validation of Noise Reduction Concepts program. Two types of microphone arrays were used in this program to determine the jet noise source distribution for the exhaust from a 4.3 bypass ratio turbofan engine. One was a linear array of 16 microphones located on a 25 ft. sideline and the other was a 103 microphone 3-D "cage" array in the near field of the jet. Data were obtained from a baseline nozzle and from numerous nozzle configuration using chevrons and/or tabs to reduce the jet noise. Mr. Weir presented data from two configurations: the baseline nozzle and a nozzle configuration with chevrons on both the core and bypass nozzles. This chevron configuration had achieved a jet noise reduction of 4 EPNdB in small scale tests conducted at the Glenn Research Center. IR imaging showed that the chevrons produced significant improvements in mixing and greatly reduced the length of the jet potential core. Comparison of source location data from the 1-D phased array showed a shift of the noise sources towards the nozzle and clear reductions of the sources due to the noise reduction devices. Data from the 3-D array showed a single source at a frequency of 125 Hz. located several diameters downstream from the nozzle exit. At 250 and 400 Hz., multiple sources, periodically spaced, appeared to exist downstream of the nozzle. The trend of source location moving toward the nozzle exit with increasing frequency was also observed. The 3-D array data also showed a reduction in source strength with the addition of chevrons. The overall trend of source location with frequency was compared for the two arrays and with classical experience. Similar trends were observed. Although overall trends with frequency and addition of suppression devices were consistent between the data from the 1-D and the 3-D arrays, a comparison of the details of the inferred source locations did show differences. 

  10. Contrasting Large Solar Events

    NASA Astrophysics Data System (ADS)

    Lanzerotti, Louis J.

    2010-10-01

    After an unusually long solar minimum, solar cycle 24 is slowly beginning. A large coronal mass ejection (CME) from sunspot 1092 occurred on 1 August 2010, with effects reaching Earth on 3 August and 4 August, nearly 38 years to the day after the huge solar event of 4 August 1972. The prior event, which those of us engaged in space research at the time remember well, recorded some of the highest intensities of solar particles and rapid changes of the geomagnetic field measured to date. What can we learn from the comparisons of these two events, other than their essentially coincident dates? One lesson I took away from reading press coverage and Web reports of the August 2010 event is that the scientific community and the press are much more aware than they were nearly 4 decades ago that solar events can wreak havoc on space-based technologies.

  11. Characterization of sub-daily rainfall properties in three raingauges located in northeast Brazil

    NASA Astrophysics Data System (ADS)

    Coutinho, J. V.; Almeida, C. D. N.; Leal, A. M. F.; Barbosa, L. R.

    2014-09-01

    This paper aims to evaluate the characteristics of rainfall events in three experimental basins located in northeast Brazil. One study area is located in Ceará State and the other two in Paraíba State. The definition of rainfall events was based on two characteristics: minimum inter-event time and minimum event depth. The events were then classified according to the shape of the hyetograph: rectangular, triangular with the peak to the left, triangular with the peak to the right, bimodal, and unshaped. The percentages of each type of hyetograph and the main characteristics of the rainfall events (peak, duration and intensity) were evaluated. The results show that the two experimental basins located in the semi-arid region have similar characteristics, and that unshaped events account for significant accumulated rainfall.

  12. Bronchoscopic location of bronchopleural fistula with xenon-133

    SciTech Connect

    Lillington, G.A.; Stevens, R.P.; DeNardo, G.L.

    1982-04-01

    Successful application of the technique of transbronchoscopic endobronchial occlusion of a persistent bronchopleural fistula requires an accurate determination of the segmental location of the air leak. This was achieved by injections of small boluses of Xe-133 into a number of segmental bronchi through a fiber-optic bronchoscope. Following the instillation of Xe-133 into the segmental bronchus leading to the fistula, there was a marked increase in radioactivity in the intercostal drainage tube.

  13. Cluster Analysis for CTBT Seismic Event Monitoring

    SciTech Connect

    Carr, Dorthe B.; Young, Chris J.; Aster, Richard C.; Zhang, Xioabing

    1999-08-03

    Mines at regional distances are expected to be continuing sources of small, ambiguous events which must be correctly identified as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) monitoring process. Many of these events are small enough that they are only seen by one or two stations, so locating them by traditional methods may be impossible, or at best leads to poorly resolved parameters. To further complicate matters, these events have parametric characteristics (explosive sources, shallow depths) which make them difficult to identify as definite non-nuclear events using traditional discrimination methods. Fortunately, explosions from the same mine tend to have similar waveforms, making it possible to identify an unknown event by comparison with characteristic archived events that have been associated with specific mines. In this study we examine the use of hierarchical cluster methods to identify groups of similar events. These methods produce dendrograms, tree-like structures showing the relationships between entities. Hierarchical methods are well suited to event clustering because they are well documented, easy to implement, computationally cheap enough to run multiple times for a given data set, and produce results which can be readily interpreted. To aid in determining the proper threshold value for defining event families in a given dendrogram, we use cophenetic correlation (which compares a model of the similarity behavior to the actual behavior), variance, and a new metric developed for this study. Clustering methods are compared using archived regional- and local-distance mining blasts recorded at two sites in the western U.S. with different tectonic and instrumentation characteristics: the three-component broadband DSVS station in Pinedale, Wyoming and the short-period New Mexico Tech (NMT) network in central New Mexico. Ground truth for the events comes from the mining industry and local network locations
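    Hierarchical agglomeration on a waveform-similarity matrix can be sketched without a dendrogram library. In the hedged toy example below, "distance" stands for one minus the waveform cross-correlation between two events, and single linkage is used purely for illustration (the study compares several linkage variants):

    ```python
    def single_linkage(dist, threshold):
        """Agglomerate items into clusters, repeatedly merging the closest
        pair until the smallest single-linkage distance exceeds `threshold`."""
        clusters = [{i} for i in range(len(dist))]
        while len(clusters) > 1:
            best = None
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                    if best is None or d < best[0]:
                        best = (d, a, b)
            d, a, b = best
            if d > threshold:
                break
            clusters[a] |= clusters[b]
            del clusters[b]
        return clusters

    # Toy "1 - cross-correlation" distances: events 0 and 1 share a source,
    # event 2 is from a different mine
    dist = [[0.0, 0.1, 0.9],
            [0.1, 0.0, 0.8],
            [0.9, 0.8, 0.0]]
    families = single_linkage(dist, threshold=0.5)
    ```

    Choosing the cutoff threshold is exactly where cophenetic correlation and the study's new metric come in.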

  14. Testing the ability of different seismic detections approaches to monitor aftershocks following a moderate magnitude event.

    NASA Astrophysics Data System (ADS)

    Romero, Paula; Díaz, Jordi; Ruiz, Mario; Cantavella, Juan Vicente; Gomez-García, Clara

    2016-04-01

    The detection and picking of seismic events is a permanent concern in seismic monitoring, in particular when dealing with aftershocks of moderate-magnitude events. Many efforts have been made to find the balance between computational efficiency and the robustness of the detection methods. In this work, data recorded by a high-density seismic network deployed following a magnitude 5.2 event located close to Albacete, SE Spain, are used to test the ability of classical and recently proposed detection methodologies. Two days after the main shock, which occurred on 23 February, a network of 11 stations from ICTJA-CSIC and 2 stations from IGN was deployed over the region, with inter-station distances ranging between 5 and 10 km. The network remained in operation until April 6th, 2015 and allowed up to 552 events to be manually identified, with magnitudes from 0.2 to 3.5, located in an area of just 25 km2 inside the network limits. The detection methods studied here are the classical STA/LTA, a power spectral method, a detector based on Benford's law, and a waveform similarity method. The STA/LTA method, based on the comparison of background noise and seismic signal amplitudes, is taken as a reference to evaluate the results arising from the other approaches. The power spectral density method is based on the inspection of the characteristic frequency pattern associated with seismic events. The Benford's law detector analyses the distribution of the first digit of the displacement counts in the histogram of a seismic waveform, considering that only windows containing seismic wave arrivals will match the logarithmic law. Finally, the waveform similarity method is based on the analysis of normalized waveform amplitudes, detecting those events with waveforms similar to a previously defined master event. The aim of this contribution is to inspect the ability of the different approaches to accurately detect the aftershock events in this kind of seismic crisis and to
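    The Benford's-law detector can be sketched by comparing the first-digit histogram of a data window against the logarithmic Benford distribution; windows with a small distance are candidate arrivals. A hedged illustration on synthetic integer counts (the distance metric and data are mine, not the paper's):

    ```python
    import math

    # Benford probabilities for first digits 1..9: log10(1 + 1/d)
    BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

    def first_digit(n):
        n = abs(int(n))
        while n >= 10:
            n //= 10
        return n

    def benford_distance(window):
        """L1 distance between the first-digit histogram of `window`
        (nonzero integer counts) and the Benford distribution; small
        values suggest a window containing a seismic arrival."""
        digits = [first_digit(x) for x in window if int(x) != 0]
        hist = [digits.count(d) / len(digits) for d in range(1, 10)]
        return sum(abs(h - b) for h, b in zip(hist, BENFORD))

    # A geometric, scale-spanning series follows Benford's law closely;
    # a narrow-range series (like flat background noise counts) does not
    arrival_like = [int(1.3 ** k) for k in range(5, 60)]
    noise_like = list(range(40, 50)) * 5
    ```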

  15. Single event phenomena: Testing and prediction

    NASA Technical Reports Server (NTRS)

    Kinnison, James D.

    1992-01-01

    Highly integrated microelectronic devices are often used to increase the performance of satellite systems while reducing the system power dissipation, size, and weight. However, these devices are usually more susceptible to radiation than less integrated devices. In particular, the problem of sensitivity to single event upset and latchup is greatly increased as the integration level is increased. Therefore, a method for accurately evaluating the susceptibility of new devices to single event phenomena is critical to qualifying new components for use in space systems. This evaluation includes testing devices for upset or latchup and extrapolating the results of these tests to the orbital environment. Current methods for testing devices for single event effects are reviewed, and methods for upset rate prediction, including a new technique based on Monte Carlo simulation, are presented.

  16. Giant African pouched rats (Cricetomys gambianus) that work on tilled soil accurately detect land mines.

    PubMed

    Edwards, Timothy L; Cox, Christophe; Weetjens, Bart; Tewelde, Tesfazghi; Poling, Alan

    2015-09-01

    Pouched rats were employed as mine-detection animals in a quality-control application where they searched for mines in areas previously processed by a mechanical tiller. The rats located 58 mines and fragments in this 28,050 m² area with a false indication rate of 0.4 responses per 100 m². Humans with metal detectors found no mines that were not located by the rats. These findings indicate that pouched rats can accurately detect land mines in disturbed soil and suggest that they can play multiple roles in humanitarian demining. PMID:25962550

  17. The Locations of Gamma-Ray Bursts Measured by Comptel

    NASA Technical Reports Server (NTRS)

    Kippen, R. Marc; Ryan, James M.; Connors, Alanna; Hartmann, Dieter H.; Winkler, Christoph; Kuiper, Lucien; Varendorff, Martin; McConnell, Mark L.; Hurley, Kevin; Hermsen, Wim; Schoenfelder, Volker

    1998-01-01

    The COMPTEL instrument on the Compton Gamma Ray Observatory is used to measure the locations of gamma-ray bursts through direct imaging of MeV photons. In a comprehensive search, we have detected and localized 29 bursts observed between 1991 April 19 and 1995 May 31. The average location accuracy of these events is 1.25 deg (1 sigma), including a systematic error of approx. 0.5 deg, which is verified through comparison with Interplanetary Network (IPN) timing annuli. The combination of COMPTEL and IPN measurements results in locations for 26 of the bursts with an average "error box" area of only approx. 0.3 square degrees (1 sigma). We find that the angular distribution of COMPTEL burst locations is consistent with large-scale isotropy and that there is no statistically significant evidence of small-angle autocorrelations. We conclude that there is no compelling evidence for burst repetition since no more than two of the events (or approx. 7% of the 29 bursts) could possibly have come from the same source. We also find that there is no significant correlation between the burst locations and either Abell clusters of galaxies or radio-quiet quasars. Agreement between individual COMPTEL locations and IPN annuli places a lower limit of approx. 100 AU (95% confidence) on the distance to the stronger bursts.

  18. Asynchronous event-based binocular stereo matching.

    PubMed

    Rogister, Paul; Benosman, Ryad; Ieng, Sio-Hoi; Lichtsteiner, Patrick; Delbruck, Tobi

    2012-02-01

    We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation in addressing the stereo-matching problem on moving objects. Using the high temporal resolution of the acquired data stream for the dynamic vision sensor, we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-D objects when combined with geometric constraints using the distance to the epipolar lines. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events. PMID:24808513
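    The core idea, matching events across the two retinas primarily by their timestamps, can be sketched as follows. This is a hedged simplification: the actual algorithm additionally scores candidates by their distance to the epipolar line, which is omitted here:

    ```python
    def match_events(left, right, max_dt):
        """Match each left-retina event (t, x, y) to the right-retina event
        closest in time, rejecting pairs whose timestamps differ by more
        than `max_dt`. A real matcher would also apply the epipolar
        constraint to filter out remaining ambiguities."""
        matches = []
        for t, x, y in left:
            best = min(right, key=lambda e: abs(e[0] - t))
            if abs(best[0] - t) <= max_dt:
                matches.append(((t, x, y), best))
        return matches

    # Timestamps in microseconds; (t, x, y) tuples from each sensor
    left = [(100, 10, 5), (250, 11, 5)]
    right = [(102, 40, 5), (400, 41, 6)]
    matches = match_events(left, right, max_dt=10)
    ```

    The high temporal resolution of the sensors is what makes a tight `max_dt` discriminative enough to disambiguate most matches before geometry is even considered.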

  19. Adverse event recording post hip fracture surgery.

    PubMed

    Doody, K; Mohamed, K M S; Butler, A; Street, J; Lenehan, B

    2013-01-01

    Accurate recording of adverse events post hip fracture surgery is vital for planning and allocating resources. The purpose of this study was to compare adverse events recorded prospectively at the point of care with adverse events recorded by the Hospital In-Patient Enquiry (HIPE) system. The study examined a two-month period from August to September 2011 at University Hospital Limerick. Out of a sample size of 39, there were 7 males (17.9%) and 32 females (82.1%) with an age range of between 53 and 98 years. The mean age was 80.5 years. 55 adverse events were recorded, in contrast to the HIPE record of 13 (23.6%) adverse events. The most common complications included constipation 10 (18.2%), anaemia 8 (14.5%), urinary retention 8 (14.5%), pneumonia 5 (9.1%) and delirium 5 (9.1%). Of the female cohort, 24 (68.8%) suffered an adverse event, while only 4 (57%) males suffered an adverse event. PMID:24579408

  20. Event Coverage Detection and Event Source Determination in Underwater Wireless Sensor Networks.

    PubMed

    Zhou, Zhangbing; Xing, Riliang; Duan, Yucong; Zhu, Yueqin; Xiang, Jianming

    2015-01-01

    With the advent of the Internet of Underwater Things, smart things are deployed in the ocean space and establish underwater wireless sensor networks for the monitoring of vast and dynamic underwater environments. When events are found to have possibly occurred, accurate event coverage should be detected, and potential event sources should be determined for the enactment of prompt and proper responses. To address this challenge, a technique that detects event coverage and determines event sources is developed in this article. Specifically, the occurrence of possible events corresponds to a set of neighboring sensor nodes whose sensory data may deviate from a normal sensing range in a collective fashion. An appropriate sensor node is selected as the relay node for gathering and routing sensory data to the sink node(s). When sensory data are collected at the sink node(s), the event coverage is detected and represented as a weighted graph, where the vertices correspond to sensor nodes and the weights on the edges reflect the extent to which sensory data deviate from a normal sensing range. Event sources are then determined as the barycenters of this graph. The results of the experiments show that our technique is more energy efficient, especially when the network topology is relatively steady. PMID:26694394
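    One natural reading of "barycenter" here is the node minimizing the total weighted shortest-path distance to all other nodes in the deviation graph. A hedged sketch under that assumption, using Dijkstra on a toy graph (node names and weights are invented):

    ```python
    import heapq

    def dijkstra(graph, src):
        """Shortest-path distances from `src` in a weighted adjacency dict
        mapping node -> list of (neighbor, weight) pairs."""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def barycenter(graph):
        """Node minimizing the total shortest-path distance to all nodes,
        one plausible reading of the event-source barycenter."""
        return min(graph, key=lambda u: sum(dijkstra(graph, u).values()))

    # Toy deviation graph: edge weights reflect sensory-data deviation
    graph = {
        "A": [("B", 1.0)],
        "B": [("A", 1.0), ("C", 1.0), ("D", 2.0)],
        "C": [("B", 1.0)],
        "D": [("B", 2.0)],
    }
    source = barycenter(graph)
    ```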