Science.gov

Sample records for accurate event locations

  1. Location of long-period events below Kilauea Volcano using seismic amplitudes and accurate relative relocation

    USGS Publications Warehouse

    Battaglia, J.; Got, J.-L.; Okubo, P.

    2003-01-01

    We present methods for improving the location of long-period (LP) events, deep and shallow, recorded below Kilauea Volcano by the permanent seismic network. LP events are of particular interest for understanding eruptive processes, as their source mechanism is assumed to directly involve fluid transport. However, it is usually difficult or impossible to locate their source using traditional arrival time methods because of emergent wave arrivals. At Kilauea, similar LP waveform signatures suggest the existence of LP multiplets. The waveform similarity suggests spatially close sources, while catalog solutions using arrival time estimates are widely scattered beneath Kilauea's summit caldera. To improve estimates of absolute LP location, we use the distribution of seismic amplitudes corrected for station site effects. The decay of amplitude as a function of hypocentral distance is used to infer LP location. In a second stage, we use the similarity of the events to calculate their relative positions. The analysis of the entire LP seismicity recorded between January 1997 and December 1999 suggests that a very large part of the LP event population, both deep and shallow, is generated by a small number of compact sources. Deep events are systematically composed of a weak high-frequency onset followed by a low-frequency wave train. Aligning the low-frequency wave trains does not align the onsets, indicating that the two parts of the signal are dissociated. This observation favors an interpretation in terms of triggering and resonance of a magmatic conduit. Instead of defining fault planes, the precise relocation of similar LP events, based on the alignment of the high-energy low-frequency wave trains, defines volumes of limited size. Copyright 2003 by the American Geophysical Union.

  2. Bayesian Multiple-Event Location

    2010-03-30

    Bayesloc is a statistical model of the multiple-event seismic location problem, comprising event hypocenters, corrections to model-based travel-time predictions, precision assessments for measured phase arrival times, and phase labels that indicate the phase ray path.

  3. Accurate source location from waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimate. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noise in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
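
    The grid-search-over-waveform-misfit machinery described here can be sketched in miniature: synthetics for each candidate source are compared to the observed traces by least squares, and the best-fitting node wins. The Gaussian wavelet stands in for the Green's-function synthetics, and the homogeneous velocity, station layout, and grid are illustrative assumptions only.

```python
import numpy as np

def pulse(t, t0, width=0.1):
    """Unit Gaussian wavelet at t0, standing in for a Green's-function synthetic."""
    return np.exp(-0.5 * ((t - t0) / width) ** 2)

def waveform_misfit(src, stations, obs, t, v=5.0):
    """Least-squares misfit between observed traces and synthetics predicted
    for a candidate source (homogeneous velocity v for simplicity)."""
    m = 0.0
    for sta, trace in zip(stations, obs):
        m += np.sum((trace - pulse(t, np.linalg.norm(sta - src) / v)) ** 2)
    return m

# 4 surface stations (km) around a shallow synthetic event
stations = np.array([[0., 0., 0.], [12., 0., 0.], [0., 12., 0.], [12., 12., 0.]])
true_src = np.array([4., 7., -2.])
t = np.linspace(0., 4., 801)
obs = [pulse(t, np.linalg.norm(s - true_src) / 5.0) for s in stations]

# Grid search over candidate hypocentres
nodes = np.array([[x, y, z]
                  for x in np.arange(0., 12.5, 1.0)
                  for y in np.arange(0., 12.5, 1.0)
                  for z in np.arange(-5., 0., 1.0)])
best = nodes[int(np.argmin([waveform_misfit(n, stations, obs, t) for n in nodes]))]
```

    In the real method the synthetics also carry topographically scattered P coda, which is what breaks the depth trade-off for shallow events; the toy version recovers the source only because the data are noise-free.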

  4. Automated microseismic event location using Master-Event Waveform Stacking

    PubMed Central

    Grigoli, Francesco; Cesca, Simone; Krieger, Lars; Kriegerowski, Marius; Gammaldi, Sergio; Horalek, Josef; Priolo, Enrico; Dahm, Torsten

    2016-01-01

    Accurate and automated locations of microseismic events are desirable for many seismological and industrial applications. The analysis of microseismicity is particularly challenging because of weak seismic signals with low signal-to-noise ratio. Traditional location approaches rely on automated picking based on individual seismograms and make no use of the coherency information between signals at different stations. This strong limitation has been overcome by full-waveform location methods, which exploit the coherency of waveforms at different stations and improve location robustness even in the presence of noise. However, the performance of these methods strongly depends on the accuracy of the adopted velocity model, which is often quite rough; inaccurate models result in large location errors. We present an improved waveform stacking location method based on source-specific station corrections. Our method inherits the advantages of full-waveform location methods while strongly mitigating the dependency on the accuracy of the velocity model. With this approach, the influence of an inaccurate velocity model on the results is restricted to the estimation of travel times solely within the seismogenic volume, not along the entire source-receiver path. Finally, we successfully applied our new method to a realistic synthetic dataset as well as real data. PMID:27185465
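
    A minimal 1-D sketch of master-event waveform stacking, with made-up numbers: a master event at a known position yields per-station corrections (observed minus model-predicted travel time), and each candidate source is scored by shifting station characteristic functions back by the corrected travel times and stacking. The Gaussian "characteristic function", geometry, and velocities are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def envelope(t, arrival, width=0.05):
    """Toy characteristic function: a Gaussian bump at the arrival time."""
    return np.exp(-0.5 * ((t - arrival) / width) ** 2)

def stack_locate(cands, stations, cfs, t, v_model, corrections):
    """Shift each station's characteristic function back by the predicted
    travel time plus its master-event correction, stack, and return the
    candidate with the largest stack maximum."""
    dt = t[1] - t[0]
    best, best_val = None, -np.inf
    for c in cands:
        stack = np.zeros_like(t)
        for i, sta in enumerate(stations):
            tt = abs(sta - c) / v_model + corrections[i]
            s = int(round(tt / dt))
            if s >= 0:
                stack[:len(t) - s] += cfs[i][s:]
            else:
                stack[-s:] += cfs[i][:s]
        if stack.max() > best_val:
            best, best_val = c, stack.max()
    return best

# 1-D toy: true velocity 5 km/s, but the model assumes 4.5 km/s
stations = np.array([0., 10., 20., 30.])
v_true, v_model = 5.0, 4.5
t = np.linspace(0., 12., 2401)            # dt = 5 ms
origin = 1.0                              # common origin time (s)

# A master event at a known position calibrates per-station corrections
master = 12.0
corrections = np.abs(stations - master) / v_true - np.abs(stations - master) / v_model

# Target event near the master; its CFs peak at the true arrival times
target = 13.0
cfs = [envelope(t, origin + abs(s - target) / v_true) for s in stations]
best = stack_locate(np.arange(8., 18.5, 0.5), stations, cfs, t, v_model, corrections)
```

    Because the corrections absorb the model error along the common path, only the small travel-time differences within the source region are mispredicted, and the stack still peaks at the true position despite the 10% velocity error.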

  7. Accurate source location from P waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.

    2015-12-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with a 3-D strain Green's tensor database to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated with a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as an a posteriori error estimate. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with a range of random noise in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.

  8. How to accurately detect autobiographical events.

    PubMed

    Sartori, Giuseppe; Agosta, Sara; Zogmaister, Cristina; Ferrara, Santo Davide; Castiello, Umberto

    2008-08-01

    We describe a new method, based on indirect measures of implicit autobiographical memory, that allows evaluation of which of two contrasting autobiographical events (e.g., crimes) is true for a given individual. Participants were requested to classify sentences describing possible autobiographical events by pressing one of two response keys. Responses were faster when sentences related to truly autobiographical events shared the same response key with other sentences reporting true events and slower when sentences related to truly autobiographical events shared the same response key with sentences reporting false events. This method has possible application in forensic settings and as a lie-detection technique.

  9. Accurate eye center location through invariant isocentric patterns.

    PubMed

    Valenti, Roberto; Gevers, Theo

    2012-09-01

    Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance are proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery. PMID:22813958

  10. Hierarchical Bayesian Approach to Locating Seismic Events

    SciTech Connect

    Johannesson, G; Myers, S C; Hanley, W G

    2005-11-09

    We propose a hierarchical Bayesian model for conducting inference on the location of multiple seismic events (earthquakes) given data on the arrival of various seismic phases to sensor locations. The model explicitly accounts for the uncertainty associated with a theoretical seismic-wave travel-time model used along with the uncertainty of the arrival data. Posterior inference is carried out using Markov chain Monte Carlo (MCMC).
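
    The Bayesian location idea can be sketched at its simplest (a single event, flat model over a homogeneous half-space, no hierarchy): treat the pick errors as Gaussian and sample the posterior over epicentre and origin time with a Metropolis random walk. The network geometry, velocity, and noise level below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-station network (km), homogeneous velocity, noisy P picks
stations = np.array([[0., 0.], [30., 0.], [0., 30.], [30., 30.], [15., 45.]])
v, sigma = 6.0, 0.05                       # km/s; pick uncertainty (s)
true_xy, true_t0 = np.array([12., 18.]), 2.0
arrivals = true_t0 + np.linalg.norm(stations - true_xy, axis=1) / v
arrivals = arrivals + rng.normal(0., sigma, len(stations))

def log_post(x, y, t0):
    """Log posterior: Gaussian pick errors, flat prior over the region."""
    pred = t0 + np.linalg.norm(stations - np.array([x, y]), axis=1) / v
    return -0.5 * np.sum(((arrivals - pred) / sigma) ** 2)

# Metropolis sampling over (x, y, t0)
cur = np.array([15., 15., 0.])
cur_lp = log_post(*cur)
samples = []
for _ in range(20000):
    prop = cur + rng.normal(0., [0.5, 0.5, 0.1])   # random-walk proposal
    lp = log_post(*prop)
    if np.log(rng.random()) < lp - cur_lp:         # accept/reject
        cur, cur_lp = prop, lp
    samples.append(cur)
post = np.array(samples)[5000:]                    # discard burn-in
mean = post.mean(axis=0)
lo95, hi95 = np.percentile(post, [2.5, 97.5], axis=0)
```

    The full hierarchical model extends this by sampling travel-time model corrections and per-phase noise levels jointly with many event hypocentres, so that uncertainty in the travel-time model propagates into the location posteriors.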

  11. Accurate Relative Location Estimates for the North Korean Nuclear Tests Using Empirical Slowness Corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2016-10-01

    The modified velocity gradients reduce the residuals, the relative location uncertainties, and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparently lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the four later events.

  12. Accurate Tremor Locations in Japan from Coherent S-Waves

    NASA Astrophysics Data System (ADS)

    Armbruster, J. G.

    2014-12-01

    The tremor detectors developed for accurately locating tectonic tremor in Cascadia [Armbruster et al., JGR 2014] have been applied to data from the HINET seismic network in Japan. The best results were obtained in the Tokai region with stations ASU, ASH, and TYE, which have relatively close spacing (11-18 km). From the daily epicentral distributions of tremor on the HINET web site, 330 days with active tremor near these stations were identified between 2004 and 2014. The detector sees numbers of detections per day comparable to minor tremor episodes in Cascadia. Major tremor episodes in Cascadia are associated with geodetic signals stronger than those seen in Japan. If the tremor is located by constraining it to the plate interface, a pattern of persistent sources is seen, with some intense sources; this is similar to what was seen in Cascadia. In southwest Shikoku, 139 days with tremor were identified. Stations UWA, OOZ, and IKT see tremor with persistent patterns and strong sources, but with approximately one fifth as many detections per day on active days compared to ASU-ASH-TYE. The web site tremor distributions show activity here as strong as in Tokai. We believe the smaller number of detections in Shikoku is primarily the result of wider station spacing, 19-30 km, than in Tokai, although there may be other factors. Yabe and Ide [EPS 2013] detected and located tremor in Kyushu on July 17-18, 2005 and December 4-6, 2008. A detector with stations NRA, SUK, and KTM, with station spacing of 21-22 km, sees tremor that resembles minor episodes in Cascadia. The relative arrival times are consistent with their locations. We conclude that the methods developed in Cascadia will work in Japan, but the typical spacing of HINET stations, ~20 km, is greater than the optimum distance of 8 to 15 km found in the analysis of data from Cascadia.
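
    The core measurement behind such detectors is the relative arrival time of coherent S-wave energy between nearby stations, obtained by cross-correlation. A toy version with a synthetic band-limited burst (sampling rate, moveout, and trace lengths all invented for the example):

```python
import numpy as np

def xcorr_delay(a, b, dt):
    """Time by which trace a lags trace b, from the full cross-correlation."""
    c = np.correlate(a, b, mode="full")
    return (int(np.argmax(c)) - (len(b) - 1)) * dt

# Synthetic 'tremor': a common band-limited noise burst recorded at two
# stations with a 0.12 s moveout (values are illustrative only)
rng = np.random.default_rng(1)
dt = 0.01
burst = np.convolve(rng.normal(size=400), np.ones(5) / 5, mode="same")
sta1 = np.concatenate([np.zeros(100), burst, np.zeros(100)])
sta2 = np.concatenate([np.zeros(112), burst, np.zeros(88)])   # 12 samples later
lag = xcorr_delay(sta2, sta1, dt)
```

    Given such lags among three closely spaced stations, the tremor source can be located (e.g., by constraining it to the plate interface); the closer the stations, the more coherent the tremor wavefield and the more reliable the lag estimates, which is why the ~20 km HINET spacing is marginal.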

  13. It's All about Location, Location, Location: Children's Memory for the "Where'' of Personally Experienced Events

    ERIC Educational Resources Information Center

    Bauer, Patricia J.; Doydum, Ayzit O.; Pathman, Thanujeni; Larkina, Marina; Guler, O. Evren; Burch, Melissa

    2012-01-01

    Episodic memory is defined as the ability to recall specific past events located in a particular time and place. Over the preschool and into the school years, there are clear developmental changes in memory for when events took place. In contrast, little is known about developmental changes in memory for where events were experienced. In the…

  14. Multiple event location analysis of aftershock sequences in the Pannonian basin

    NASA Astrophysics Data System (ADS)

    Bekesi, Eszter; Sule, Balint; Bondar, Istvan

    2016-04-01

    Accurate seismic event location is crucial to understanding tectonic processes such as crustal faulting, which is most commonly investigated by studying seismic activity. Location errors can be significantly reduced using multiple event location methods. We applied the double-difference method to relocate the earthquake that occurred near Oroszlány and its 200 aftershocks in order to identify the geometry of the related fault. We used the extended ISC location algorithm, iLoc, to determine absolute single-event locations for the Oroszlány aftershock sequence, and applied the double-difference algorithm to the new hypocenters. To improve location precision, we added differential times from waveform cross-correlation to the multiple event location process to increase the accuracy of the arrival time readings. We also tested the effect of various local 1-D velocity models on the results. We compared hypoDD results for bulletin and iLoc hypocenters to investigate the effect of initial hypocenter parameters on the relocation process. We show that hypoDD collapses the initial, rather diffuse locations into a smaller cluster, and the vertical cross-sections show sharp images of seismicity. Unsurprisingly, the combined use of catalog and cross-correlation data sets provides the most accurate locations. Some of the relocated events in the cluster are of ground truth quality, with a location accuracy of 5 km or better. Having achieved accurate locations for the event cluster, we are able to resolve the fault plane ambiguity in the moment tensor solutions and determine the accurate strike of the fault.
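
    The reason double differences sharpen clusters is that a common path or model error cancels when differential times between a nearby event pair are used. A minimal 2-D sketch with invented geometry (origin times taken as zero for simplicity; hypoDD solves a linearised version of this over many pairs):

```python
import numpy as np

# Hypothetical 2-D geometry (km); origin times taken as zero for simplicity
stations = np.array([[0., 0.], [40., 5.], [10., 35.], [35., 30.]])
v = 6.0
ev_a = np.array([20., 20.])               # well-located reference event
ev_b_true = np.array([21.5, 19.0])        # neighbour to be relocated

# An unknown per-station path/model error contaminates the absolute times...
bias = np.array([0.8, -0.5, 0.3, -0.2])
t_a = np.linalg.norm(stations - ev_a, axis=1) / v + bias
t_b = np.linalg.norm(stations - ev_b_true, axis=1) / v + bias

def dd_misfit(xy):
    """...but cancels in the double difference: observed minus predicted
    differential times for the event pair."""
    dt_obs = t_b - t_a
    dt_pred = (np.linalg.norm(stations - xy, axis=1)
               - np.linalg.norm(stations - ev_a, axis=1)) / v
    return np.sum((dt_obs - dt_pred) ** 2)

# Grid search for event B around the reference event
grid = np.array([[x, y]
                 for x in np.arange(18., 23.01, 0.25)
                 for y in np.arange(17., 22.01, 0.25)])
best = grid[int(np.argmin([dd_misfit(g) for g in grid]))]
```

    Cross-correlation differential times play the same role as dt_obs here, but with far better precision than catalog picks, which is why adding them tightens the relocated cluster.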

  15. Accurate Event-Driven Motion Compensation in High-Resolution PET Incorporating Scattered and Random Events

    PubMed Central

    Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna

    2010-01-01

    With continuing improvements in spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction which at the same time is very feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion, and additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results for a dynamic patient study are also presented. PMID:18672420

  16. Bolus location associated with videofluoroscopic and respirodeglutometric events.

    PubMed

    Perlman, Adrienne L; He, Xuming; Barkmeier, Joseph; Van Leer, Eva

    2005-02-01

    The purpose of the present investigation was to determine the relation between specific events observed with simultaneous videofluoroscopy and respirodeglutometry. The order of occurrence was determined for each of 31 events (18 videofluoroscopic, 13 respirodeglutometric). Using 1 video frame (33.3 ms) as the maximum distance allowed between the average times of 2 events in the same cluster, 8 potential clusters were identified, 3 of which were statistically confirmed based on 90% confidence intervals on the mean time distances between events. Confirmed clusters included the time of (a) complete velar descent and the onset of the small noninspiratory flow (SNIF), (b) full separation of the base of the tongue from the pharyngeal wall and SNIF nadir, complete upper esophageal sphincter closure, and SNIF nadir, and (c) onset of epiglottic return and apnea offset. The onset of respiratory flow occurred within 13 ms after the onset of epiglottic return. Additionally, the percentage of swallows during which the bolus head or tail was located at each of 6 locations was determined for 20 of these events (10 videofluoroscopic, 10 respirodeglutometric). The 6 locations of interest included the oral cavity, base of tongue, valleculae, pyriform sinuses, upper esophageal sphincter, and the esophagus. Lastly, of the 72 swallows performed by these healthy, young adults, 65 (90.3%) were preceded by expiration, and all (100%) were followed by expiration.

  17. Development of an accurate transmission line fault locator using the global positioning system satellites

    NASA Technical Reports Server (NTRS)

    Lee, Harry

    1994-01-01

    A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. This fault locator system consists of traveling wave detectors located at key substations which detect and time tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element to the success of this system. This fault locator system derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
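
    The underlying arithmetic is simple: if the fault is at distance d from terminal A of a line of length L, the surge reaches A at t_a = t0 + d/v and B at t_b = t0 + (L - d)/v, so d = (L + v*(t_a - t_b))/2 and the unknown inception time t0 cancels. A sketch with illustrative numbers (line length, wave speed, and times are not from the paper):

```python
# Traveling-wave fault location from GPS-synchronised time tags at the
# two line terminals; t0 cancels in the difference t_a - t_b.
def fault_distance(line_km, v_km_s, t_a, t_b):
    """Fault distance from terminal A on a line of length line_km."""
    return (line_km + v_km_s * (t_a - t_b)) / 2.0

L_km, v = 240.0, 2.9e5          # line length (km); wave speed near c (km/s)
d_true, t0 = 75.0, 100.0        # fault position (km); inception time (GPS s)
t_a = t0 + d_true / v
t_b = t0 + (L_km - d_true) / v
d_est = fault_distance(L_km, v, t_a, t_b)
```

    The formula also shows why GPS timing matters: a 1 microsecond tagging error maps to v*1e-6/2, roughly 150 m at this wave speed, on the same order as the stated +/-300 m design objective.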

  18. Hydrogen atoms can be located accurately and precisely by x-ray crystallography

    PubMed Central

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M.; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-01-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A–H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A–H bond lengths with those from neutron measurements for A–H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  19. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  20. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.

  1. Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T; Gibbons, S J; Ringdal, F; Harris, D B

    2007-02-09

    The principal objective of this two-year study is to develop and test a new advanced, automatic approach to seismic detection/location using array processing. We address a strategy to obtain significantly improved precision in the location of low-magnitude events compared with current fully-automatic approaches, combined with a low false alarm rate. We have developed and evaluated a prototype automatic system which uses as a basis regional array processing with fixed, carefully calibrated, site-specific parameters in conjunction with improved automatic phase onset time estimation. We have in parallel developed tools for Matched Field Processing for optimized detection and source-region identification of seismic signals. This narrow-band procedure aims to mitigate some of the causes of difficulty encountered using the standard array processing system, specifically complicated source-time histories of seismic events and shortcomings in the plane-wave approximation for seismic phase arrivals at regional arrays.

  2. Temporal and Location Based RFID Event Data Management and Processing

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Liu, Peiya

    Advances in sensor and RFID technology provide significant new power for humans to sense, understand, and manage the world. RFID provides fast data collection with precise identification of objects with unique IDs without line of sight; thus it can be used for identifying, locating, tracking, and monitoring physical objects. Despite these benefits, RFID poses many challenges for data processing and management. RFID data are temporal and history oriented, multi-dimensional, and carry implicit semantics. Moreover, RFID applications are heterogeneous. RFID data management or data warehouse systems need to support generic and expressive data modeling for tracking and monitoring physical objects, and provide automated data interpretation and processing. We develop a powerful temporal and location oriented data model for modeling and querying RFID data, and a declarative event and rule based framework for automated complex RFID event processing. The approach is general and can be easily adapted for different RFID-enabled applications, thus significantly reducing the cost of RFID data integration.

  3. Leisure and Pleasure: Science events in unusual locations

    NASA Astrophysics Data System (ADS)

    Bultitude, Karen; Margarida Sardo, Ana

    2012-12-01

    Building on concepts relating to informal science education, this work compares science-related activities which successfully engaged public audiences at three different 'generic' locations: a garden festival, a public park, and a music festival. The purpose was to identify what factors contribute to the perceived success of science communication activities occurring within leisure spaces. This article reports the results of 71 short (2-3 min) structured interviews with public participants at the events, and 18 structured observation sessions, demonstrating that the events were considered both novel and interesting by the participants. Audience members were found to perceive both educational and affective purposes from the events. Three key elements were identified as contributing to the success of the activities across the three 'generic venues': the informality of the surroundings, the involvement of 'real' scientists, and the opportunity to re-engage participants with scientific concepts outside formal education.

  4. Absolute GPS Time Event Generation and Capture for Remote Locations

    NASA Astrophysics Data System (ADS)

    HIRES Collaboration

    The HiRes experiment operates fixed-location and portable lasers at remote desert locations to generate calibration events. One physics goal of HiRes is to search for unusual showers; these may appear similar to the upward or horizontally pointing laser tracks used for atmospheric calibration. It is therefore necessary to remove all of these calibration events from the HiRes detector data stream in a physics-blind manner. A robust and convenient "tagging" method is to generate the calibration events at precisely known times. To facilitate this tagging method we have developed the GPSY (Global Positioning System YAG) module. It uses a GPS receiver, an embedded processor and additional timing logic to generate laser triggers at arbitrary programmed times and frequencies with better than 100 ns accuracy. The GPSY module has two trigger outputs (one microsecond resolution) to trigger the laser flash-lamp and Q-switch and one event capture input (25 ns resolution). The GPSY module can be programmed either by a front panel menu based interface or by a host computer via an RS232 serial interface. The latter also allows for computer logging of generated and captured event times. Details of the design and the implementation of these devices will be presented. Motivation: Air showers represent a small fraction, much less than a percent, of the total High Resolution Fly's Eye data sample. The bulk of the sample is calibration data. Most of this calibration data is generated by two types of systems that use lasers. One type sends light directly to the detectors via optical fibers to monitor detector gains (Girard 2001). The other sends a beam of light into the sky, and the scattered light that reaches the detectors is used to monitor atmospheric effects (Wiencke 1998). It is important that these calibration events be cleanly separated from the rest of the sample both to provide a complete set of monitoring information, and more
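
    The time-based tagging idea lends itself to a short sketch: if calibration laser shots are fired at precisely programmed GPS times, detector events can be flagged by timestamp proximity. The function below is an illustration under assumed names and a hypothetical 100 µs matching window, not the HiRes production code.

```python
import bisect

def tag_calibration_events(event_times, programmed_times, tol=1e-4):
    """Flag detector events whose timestamps match a programmed laser fire
    time to within `tol` seconds (hypothetical 100 microsecond window).
    Returns the indices of tagged (calibration) events."""
    fires = sorted(programmed_times)
    tagged = []
    for i, t in enumerate(event_times):
        j = bisect.bisect_left(fires, t)
        # the nearest programmed fire time is at index j-1 or j
        near = fires[max(0, j - 1):j + 1]
        if any(abs(t - f) <= tol for f in near):
            tagged.append(i)
    return tagged
```

    Physics candidates are the events that are *not* tagged, which is what makes the scheme physics-blind: no waveform inspection is needed to reject calibration shots.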

  5. Automated seismic event location by arrival time stacking: Applications to local and micro-seismicity

    NASA Astrophysics Data System (ADS)

    Grigoli, F.; Cesca, S.; Braun, T.; Philipp, J.; Dahm, T.

    2012-04-01

    Locating seismic events is one of the oldest problems in seismology. In microseismicity applications, where the number of events is very large, it is not possible to locate earthquakes manually, and automated location procedures must be established. Automated seismic event location at different scales is important in several application areas, including mine monitoring, reservoir geophysics and early warning systems, where locations are needed to start rescue operations rapidly. Locating and mapping microearthquakes or acoustic emission sources in mining environments is important for monitoring mine stability. Mapping fractures through the microseismicity distribution inside hydrocarbon reservoirs is needed to find areas with higher permeability and enhance oil production. In the last 20 years a large number of picking algorithms were developed to locate seismic events automatically. While P onsets can now be accurately picked using automatic routines, the automatic picking of later seismic phases (including the S onset) is still problematic, limiting location performance. In this work we present a picking-free location method based on the use of the Short-Term-Average/Long-Term-Average (STA/LTA) traces at different stations as observed data. For different locations and origin times, the observed STA/LTA traces are stacked along the travel time surface corresponding to the selected hypocentre. Iterating this procedure on a three-dimensional grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatio-temporal coordinates of the seismic event. We tested our methodology on synthetic data, simulating different environments and network geometries. Finally, we apply our method to real datasets related to microseismic activity in mines and earthquake swarms in Italy. This work has been funded by the German BMBF "Geotechnologien" project MINE (BMBF03G0737A).
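
    A minimal sketch of the stacking scheme described above: for each trial epicenter and origin time on a grid, the station characteristic functions are summed at their predicted arrival times, and the maximum of the stack gives the source coordinates. This toy assumes a homogeneous velocity model and synthetic Gaussian-peaked characteristic functions; it is not the authors' implementation.

```python
import numpy as np

def stack_locate(char_funcs, t, stations, grid, v):
    """Grid search over trial epicenters and origin times: stack each station's
    characteristic function (e.g. STA/LTA) at its predicted arrival time and
    keep the (x, y, t0) that maximizes the stack."""
    dt = t[1] - t[0]
    best, best_val = None, -np.inf
    for x, y in grid:
        tts = [np.hypot(x - sx, y - sy) / v for sx, sy in stations]
        for t0 in t:
            val = 0.0
            for cf, tt in zip(char_funcs, tts):
                idx = int(round((t0 + tt - t[0]) / dt))
                if 0 <= idx < len(cf):
                    val += cf[idx]
            if val > best_val:
                best_val, best = val, (x, y, t0)
    return best

# Synthetic test: 3 stations, source at (5, 5) km, origin time 1.0 s, v = 3 km/s.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
t = np.arange(0.0, 8.0, 0.01)
true_x, true_y, true_t0, v = 5.0, 5.0, 1.0, 3.0
char_funcs = []
for sx, sy in stations:
    arr = true_t0 + np.hypot(true_x - sx, true_y - sy) / v
    char_funcs.append(np.exp(-0.5 * ((t - arr) / 0.05) ** 2))  # peaked at arrival
grid = [(x, y) for x in range(11) for y in range(11)]
best = stack_locate(char_funcs, t, stations, grid, v)
```

    Because no discrete picks are made, emergent onsets and missed S phases degrade the stack gracefully instead of breaking the location outright.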

  6. The DOE Model for Improving Seismic Event Locations Using Travel Time Corrections: Description and Demonstration

    SciTech Connect

    Hipp, J.R.; Moore, S.G.; Shepherd, E.; Young, C.J.

    1998-10-20

    The U.S. National Laboratories, under the auspices of the Department of Energy, have been tasked with improving the capability of the United States National Data Center (USNDC) to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the most important services which the USNDC must provide is to locate suspicious events, preferably as accurately as possible, to help identify their origin and to ensure the success of on-site inspections if they are deemed necessary. The seismic location algorithm used by the USNDC has the capability to generate accurate locations by applying geographically dependent travel time corrections, but to date none of the means proposed for generating and representing these corrections has proven entirely satisfactory. In this presentation, we detail the complete DOE model for how regional calibration travel time information gathered by the National Labs will be used to improve event locations and provide more realistic location error estimates. We begin with residual data and error estimates from ground truth events. Our model consists of three parts: data processing, data storage, and data retrieval. The former two are effectively one-time processes, executed in advance before the system is made operational. The last step is required every time an accurate event location is needed. Data processing involves applying non-stationary Bayesian kriging to the residual data to densify them, and iterating to find the optimal tessellation representation for fast interpolation in the data retrieval task. Both the kriging and the iterative re-tessellation are slow, computationally expensive processes, but this is acceptable because they are performed off-line, before any events are to be located. In the data storage task, the densified data set is stored in a database and spatially indexed. Spatial indexing improves the access efficiency of the geographically-oriented data requests associated with event location
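
    The fast "data retrieval" step can be sketched as interpolation on a precomputed correction surface. The bilinear lookup below is an illustrative stand-in (the actual system interpolates on an optimized tessellation, and the kriging itself happens offline):

```python
import numpy as np

def correction_lookup(grid, lats, lons, lat, lon):
    """Bilinear interpolation of a precomputed (kriged) travel-time correction
    surface -- the fast online retrieval step; the slow kriging is offline."""
    i = np.searchsorted(lats, lat) - 1
    j = np.searchsorted(lons, lon) - 1
    i = min(max(i, 0), len(lats) - 2)  # clamp to valid cell indices
    j = min(max(j, 0), len(lons) - 2)
    u = (lat - lats[i]) / (lats[i + 1] - lats[i])
    v = (lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - u) * (1 - v) * grid[i, j] + u * (1 - v) * grid[i + 1, j]
            + (1 - u) * v * grid[i, j + 1] + u * v * grid[i + 1, j + 1])

# Toy 2x2 correction surface (seconds) over a 1-degree cell.
lats = np.array([0.0, 1.0])
lons = np.array([0.0, 1.0])
corrections = np.array([[0.0, 1.0],
                        [1.0, 2.0]])
```

    At location time, each predicted travel time is adjusted by such a lookup before the residuals enter the location solver.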

  7. Pickless event detection and location: The waveform correlation event detection system (WCEDS) revisited

    DOE PAGES

    Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; Slinkard, Megan Elizabeth

    2016-01-01

    The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.

  9. Accurate Damage Location in Complex Composite Structures and Industrial Environments using Acoustic Emission

    NASA Astrophysics Data System (ADS)

    Eaton, M.; Pearson, M.; Lee, W.; Pullin, R.

    2015-07-01

    The ability to accurately locate damage in any given structure is a highly desirable attribute for an effective structural health monitoring system and could help to reduce operating costs and improve safety. This becomes a far greater challenge in complex geometries and materials, such as modern composite airframes. The poor translation of promising laboratory based SHM demonstrators to industrial environments forms a barrier to commercial up take of technology. The acoustic emission (AE) technique is a passive NDT method that detects elastic stress waves released by the growth of damage. It offers very sensitive damage detection, using a sparse array of sensors to detect and globally locate damage within a structure. However its application to complex structures commonly yields poor accuracy due to anisotropic wave propagation and the interruption of wave propagation by structural features such as holes and thickness changes. This work adopts an empirical mapping technique for AE location, known as Delta T Mapping, which uses experimental training data to account for such structural complexities. The technique is applied to a complex geometry composite aerospace structure undergoing certification testing. The component consists of a carbon fibre composite tube with varying wall thickness and multiple holes, that was loaded under bending. The damage location was validated using X-ray CT scanning and the Delta T Mapping technique was shown to improve location accuracy when compared with commercial algorithms. The onset and progression of damage were monitored throughout the test and used to inform future design iterations.

  10. It’s all about location, location, location: Children’s memory for the “where” of personally experienced events

    PubMed Central

    Bauer, Patricia J.; Doydum, Ayzit O.; Pathman, Thanujeni; Larkina, Marina; Güler, O. Evren; Burch, Melissa

    2012-01-01

    Episodic memory is defined as the ability to recall specific past events located in a particular time and place. Over the preschool and into the school years, there are clear developmental changes in memory for when events took place. In contrast, little is known about developmental changes in memory for where events were experienced. In the present research we tested 4-, 6-, and 8-year-old children's memories for specific laboratory events, each of which was experienced in a unique location. We also tested the children's memories for the conjunction of the events and their locations. Age-related differences were observed in all three types of memory (event, location, conjunction of event and location), with the most pronounced differences in memory for conjunctions of events and their locations. The results have implications for our understanding of the development of episodic memory, including suggestions of protracted development of the ability to contextualize events in their spatial locations. PMID:23010356

  11. Distributed fiber sensing system with wide frequency response and accurate location

    NASA Astrophysics Data System (ADS)

    Shi, Yi; Feng, Hao; Zeng, Zhoumo

    2016-02-01

    A distributed fiber sensing system merging a Mach-Zehnder interferometer and a phase-sensitive optical time domain reflectometer (Φ-OTDR) is demonstrated for vibration measurement, which requires wide frequency response and accurate location. Two narrow-linewidth lasers with slightly different wavelengths are used to constitute the interferometer and the reflectometer respectively. A narrow-band fiber Bragg grating is responsible for separating the two wavelengths. In addition, heterodyne detection is applied to maintain the signal-to-noise ratio of the locating signal. Experimental results show that the system has a wide frequency response from 1 Hz to 50 MHz, limited by the sample frequency of the data acquisition card, and a spatial resolution of 20 m, corresponding to a 200 ns pulse width, along a 2.5 km fiber link.
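
    The quoted 20 m spatial resolution follows from the standard OTDR relation Δz = c·τ/(2n), with pulse width τ and fiber group index n. A quick check (n ≈ 1.46 is an assumed typical value for silica fiber, not stated in the abstract):

```python
def otdr_spatial_resolution(pulse_width_s, n_group=1.46, c=2.998e8):
    """OTDR two-way spatial resolution: delta_z = c * tau / (2 * n)."""
    return c * pulse_width_s / (2.0 * n_group)

res = otdr_spatial_resolution(200e-9)  # ~20.5 m for a 200 ns pulse
```

    Shortening the pulse sharpens the location but reduces backscattered energy, which is one reason the interferometer arm is used for the wideband frequency measurement instead.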

  12. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    PubMed

    Prinsloo, Jaco; Malekian, Reza

    2016-06-04

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global System for Mobile Communications (GSM), in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.
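
    The GPS/RFID hand-over logic can be sketched with a simple fallback rule (a hypothetical interface, not the paper's implementation; their system additionally uses GSM to report positions):

```python
def fused_position(gps_fix, rfid_fix):
    """Hypothetical fusion rule: prefer a GPS fix when available; otherwise
    fall back to the last RFID checkpoint position (e.g. inside a tunnel),
    tagging the result with its source."""
    if gps_fix is not None:
        return ("gps", gps_fix)
    if rfid_fix is not None:
        return ("rfid", rfid_fix)
    return ("none", None)
```

    Fixed RFID readers at known positions make the fallback absolute rather than dead-reckoned: each read pins the vehicle to the reader's surveyed location.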

  15. Incorporation of probabilistic seismic phase labels into a Bayesian multiple-event seismic locator

    SciTech Connect

    Myers, S; Johannesson, G; Hanley, W

    2008-01-17

    We add probabilistic phase labels to the multiple-event joint probability function of Myers et al. (2007), which formerly included event locations, travel-time corrections, and arrival-time measurement precision. Prior information on any of the multiple-event parameters may be used. The phase-label model includes a null label that captures phases not belonging to the collection of phases under consideration. Using the Markov-Chain Monte Carlo method, samples are drawn from the multiple-event joint probability function to infer the posterior distribution that is consistent with the priors and the arrival-time data set. Using this approach, phase-label error can be assessed and propagated to all other multiple-event parameters. We test the method using a ground-truth data set of nuclear explosions at the Nevada Test Site. We find that posterior phase labels agree with the meticulously analyzed data set in more than 97% of instances, and the results are robust even when the input phase-label information is discarded. Only when a large percentage of the arrival-time data are corrupted does prior phase-label information improve resolution of multiple-event parameters. Simultaneous modeling of the entire multiple-event system results in accurate posterior probability regions for each multiple-event parameter.
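
    The probabilistic phase-label idea can be illustrated for a single arrival: each candidate phase contributes a Gaussian likelihood centered on its predicted arrival time, and a flat "null" density captures arrivals belonging to none of them. This standalone sketch (assumed densities and priors, not the paper's full MCMC over the multiple-event system) shows how a label posterior is formed:

```python
import math

def phase_label_posterior(t_obs, t_pred, sigma, prior, null_density=0.01):
    """Posterior probability of each phase label for one observed arrival time.
    Gaussian likelihood around each phase's predicted arrival; a flat 'null'
    density absorbs arrivals that match no candidate phase."""
    like = {ph: math.exp(-0.5 * ((t_obs - tp) / sigma) ** 2)
                / (sigma * math.sqrt(2 * math.pi))
            for ph, tp in t_pred.items()}
    like["null"] = null_density
    unnorm = {ph: prior.get(ph, 0.0) * l for ph, l in like.items()}
    z = sum(unnorm.values())
    return {ph: u / z for ph, u in unnorm.items()}

# An arrival 0.1 s after the predicted P time is labeled P with high probability.
post = phase_label_posterior(
    t_obs=10.1,
    t_pred={"P": 10.0, "S": 18.0},
    sigma=0.5,
    prior={"P": 0.45, "S": 0.45, "null": 0.10},
)
```

    In the full method these label probabilities are sampled jointly with locations and corrections, so label uncertainty propagates into the location error regions.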

  16. LOCATION OF BODY FAT AMONG WOMEN WHO ACCURATELY OR INACCURATELY PERCEIVE THEIR WEIGHT STATUS.

    PubMed

    Rote, Aubrianne E; Klos, Lori A; Swartz, Ann M

    2015-10-01

    This cross-sectional study investigated the location of body fat, with specific focus on abdominal fat, among normal weight and overweight women who accurately or inaccurately perceived their weight status. Young adult women (N = 120; M age = 19.5 yr., SD = 1.2) were asked to classify their weight status using the Self-Classified Weight subscale from the Multidimensional Body-Self Relations Questionnaire. Actual weight status was operationalized via dual-energy x-ray absorptiometry. Overweight women who thought they were normal weight had an average of 19 pounds more fat than normal weight women, including 1.5 pounds of excess abdominal fat. Interventions to raise awareness among overweight women unaware of their fat level are warranted. However, these interventions should balance consideration of potential detriments to body image among these women. PMID:26474442

  17. Accurate prediction of V1 location from cortical folds in a surface coordinate system

    PubMed Central

    Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce

    2008-01-01

    Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better

  18. Ground truth seismic events and location capability at Degelen mountain, Kazakhstan

    USGS Publications Warehouse

    Trabant, C.; Thurber, C.; Leith, W.

    2002-01-01

    We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²), and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations. © 2002 Elsevier Science B.V. All rights reserved.
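
    The waveform cross-correlation step used to obtain the precise arrival-time picks can be sketched as finding the lag that maximizes the correlation between two similar records (illustrative only; the study's processing pipeline is more involved):

```python
import numpy as np

def cc_lag(x, y, dt):
    """Relative arrival-time shift between two similar waveforms via the peak
    of their full cross-correlation (positive lag: y arrives after x)."""
    cc = np.correlate(y, x, mode="full")
    lag = np.argmax(cc) - (len(x) - 1)
    return lag * dt

# Demo: identical delta pulses, y delayed by 5 samples of 10 ms each.
x = np.zeros(50); x[10] = 1.0
y = np.zeros(50); y[15] = 1.0
shift = cc_lag(x, y, 0.01)
```

    Because the measurement uses the whole waveform rather than an onset pick, its precision can reach a small fraction of the sample interval (with sub-sample interpolation), which is what makes the JHD relative locations so tight.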

  19. Detection and location of multiple events by MARS. Final report. [Multiple Arrival Recognition System

    SciTech Connect

    Wang, J.; Masso, J.F.; Archambeau, C.B.; Savino, J.M.

    1980-09-01

    Seismic data from two explosions were processed using the Systems Science and Software MARS (Multiple Arrival Recognition System) seismic event detector in an effort to determine their relative spatial and temporal separation on the basis of seismic data alone. The explosions were less than 1.0 kilometer apart and their origin times were separated by less than 0.5 s. The seismic data consisted of nine local accelerograms (r < 1.0 km) and four regional (240 to 400 km) seismograms. The MARS processing clearly indicates the presence of multiple explosions, but the restricted frequency range of the data inhibits accurate time picks and hence limits the precision of the event location.

  20. Bolus Location Associated with Videofluoroscopic and Respirodeglutometric Events

    ERIC Educational Resources Information Center

    Perlman, Adrienne L.; He, Xuming; Barkmeier, Joseph; Van Leer, Eva

    2005-01-01

    The purpose of the present investigation was to determine the relation between specific events observed with simultaneous videofluoroscopy and respirodeglutometry. The order of occurrence was determined for each of 31 events (18 videofluoroscopic, 13 respirodeglutometric). Using 1 video frame (33.3 ms) as the maximum distance allowed between the…

  1. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

    Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrivals of two phases, and differential time data can be used to constrain the relative locations of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include differential time measurements based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple-event location system. The Markov-Chain Monte Carlo method is used to sample from the joint probability distribution given the arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement errors of 0.25 seconds and 0.01 seconds, respectively, epicenter location accuracy is improved from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (a 73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative positions with respect to neighboring events. Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
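
    The benefit of combining the two data types can be seen in a 1-D toy problem: two events with noisy absolute position observations plus one precise differential observation, solved by weighted least squares. This sketch is not Bayesloc's Bayesian machinery, but it shows why precise relative data tighten the joint solution:

```python
import numpy as np

def locate_pair(abs_obs, diff_obs, sigma_abs, sigma_diff):
    """Weighted least squares for two 1-D event positions (x1, x2) given a
    noisy absolute observation of each and a precise differential (x2 - x1)."""
    # Observation rows: x1 = a1, x2 = a2, x2 - x1 = d
    G = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [-1.0, 1.0]])
    d = np.array([abs_obs[0], abs_obs[1], diff_obs])
    w = np.array([1.0 / sigma_abs, 1.0 / sigma_abs, 1.0 / sigma_diff])
    Gw, dw = G * w[:, None], d * w      # row-weight by inverse sigma
    m, *_ = np.linalg.lstsq(Gw, dw, rcond=None)
    return m

# Absolute data (sigma 0.25) disagree about the spacing; the differential
# datum (sigma 0.01) pins the relative position almost exactly.
m = locate_pair(abs_obs=(0.3, 1.8), diff_obs=1.0, sigma_abs=0.25, sigma_diff=0.01)
```

    The absolute data still set the centroid of the pair, while the differential datum locks the spacing: the combination constrains both in a way neither data type can alone.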

  2. Leisure and Pleasure: Science Events in Unusual Locations

    ERIC Educational Resources Information Center

    Bultitude, Karen; Sardo, Ana Margarida

    2012-01-01

    Building on concepts relating to informal science education, this work compares science-related activities which successfully engaged public audiences at three different "generic" locations: a garden festival, a public park, and a music festival. The purpose was to identify what factors contribute to the perceived success of science communication…

  3. Using XTE as Part of the IPN to Derive Accurate GRB Locations

    NASA Technical Reports Server (NTRS)

    Barthelmy, S.

    1998-01-01

    The objective of this final report was to integrate the Rossi X-Ray Timing Explorer PCA into the 3rd Interplanetary Network of gamma-ray burst detectors, to allow more bursts to be detected and accurately localized. Although the necessary software was implemented to do this at Goddard and at UC Berkeley, several factors made a full integration impossible or impractical.

  4. Fast and accurate dating of nuclear events using La-140/Ba-140 isotopic activity ratio.

    PubMed

    Yamba, Kassoum; Sanogo, Oumar; Kalinowski, Martin B; Nikkinen, Mika; Koulidiati, Jean

    2016-06-01

    This study reports on a fast and accurate assessment of the zero time of certain nuclear events using the La-140/Ba-140 isotopic activity ratio. For a non-steady nuclear fission reaction, dating is not possible. Under the hypothesis of a nuclear explosion and under that of a release from a steady-state nuclear fission reaction, the inferred zero times will differ. The assessment is fast because we propose constants that can be used directly for the calculation of zero time and its upper and lower age limits. It is accurate because zero time is calculated with a mathematical method, namely the weighted least-squares method, to evaluate an average value of the age of a nuclear event. This was done using two databases that exhibit differences between the values of some nuclear parameters. As an example, the calculation method is applied to a detection of the radionuclides La-140 and Ba-140 in May 2010 at the radionuclide station JPP37 (Okinawa Island, Japan).
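
    The ratio-based dating rests on the Bateman solution for a parent-daughter pair: assuming pure Ba-140 at zero time (no initial La-140), the La-140/Ba-140 activity ratio grows monotonically toward equilibrium, so a measured ratio fixes the elapsed time. A sketch with nominal half-lives of 12.753 d (Ba-140) and 1.679 d (La-140); treat all constants as illustrative, not the paper's evaluated values:

```python
import math

# Nominal half-lives in days (illustrative constants).
T_BA140, T_LA140 = 12.753, 1.679
LAM_BA = math.log(2) / T_BA140
LAM_LA = math.log(2) / T_LA140

def la_ba_activity_ratio(t_days):
    """Activity ratio A(La-140)/A(Ba-140) at time t after a pure Ba-140
    production event, from the Bateman solution with no initial La-140."""
    k = LAM_LA / (LAM_LA - LAM_BA)
    return k * (1.0 - math.exp(-(LAM_LA - LAM_BA) * t_days))

def age_from_ratio(ratio, lo=1e-6, hi=60.0, iters=200):
    """Invert the monotonic ratio curve by bisection to date the event."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if la_ba_activity_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The paper's contribution is the careful propagation of nuclear-data uncertainties through this inversion (via weighted least squares over two databases); the curve itself is the standard parent-daughter relation above.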

  5. A single geophone to locate seismic events on Mars

    NASA Astrophysics Data System (ADS)

    Roques, Aurélien; Berenguer, Jean-Luc; Bozdag, Ebru

    2016-04-01

    Knowing the structure of Mars is a key point in understanding the formation of Earth-like planets, since plate tectonics and erosion have erased the original surface of the Earth. Installing a seismometer on the Martian surface makes it possible to investigate its structure. An important step in identifying the structure of a planet is locating the epicenter of a seismic source, typically a meteoric impact or a quake. On Earth, the classical way of locating epicenters is triangulation, which requires at least 3 stations. The Mars InSight project plans to deploy a single station with 3 components. We propose software to locate seismic sources on Mars using the three-component simulated data of a quake provided by researchers at Geoazur (Nice Sophia-Antipolis University, CNRS). The instrumental response of a sensor is crucial for data interpretation. We study the oscillations of a geophone in several situations so as to introduce students to the meaning of damping in second-order modeling. In physics, car shock absorbers are often used to illustrate the principle of damping, but rarely in practical experiments. We propose the use of a simple seismometer (a mass on a spring with a damper) that allows changing several parameters (inductive damping, temperature and pressure) so as to see their effects on the impulse response and, in particular, on the damping coefficient. In a second step, we illustrate the effect of damping on a seismogram, with the difficulty of identifying and interpreting the different phase arrival times at low damping.
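
    The damping discussion can be grounded in the textbook second-order model: an underdamped mass-spring-damper has impulse response h(t) = (ω₀/√(1−ζ²))·e^(−ζω₀t)·sin(ω_d·t) with ω_d = ω₀√(1−ζ²). A small sketch (the natural frequency and damping ratio below are illustrative, not the classroom seismometer's measured values):

```python
import math

def damped_impulse_response(t, f0=1.0, zeta=0.3):
    """Impulse response of a second-order mass-spring-damper (geophone model),
    valid for the underdamped case zeta < 1."""
    w0 = 2 * math.pi * f0                       # natural angular frequency
    wd = w0 * math.sqrt(1 - zeta ** 2)          # damped angular frequency
    return (w0 / math.sqrt(1 - zeta ** 2)) * math.exp(-zeta * w0 * t) * math.sin(wd * t)
```

    Raising ζ shortens the ringing; with too little damping the sensor's own oscillation overprints the later phase arrivals, which is exactly the interpretation difficulty the activity demonstrates.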

  6. Accurate identification of centromere locations in yeast genomes using Hi-C.

    PubMed

    Varoquaux, Nelle; Liachko, Ivan; Ay, Ferhat; Burton, Joshua N; Shendure, Jay; Dunham, Maitreya J; Vert, Jean-Philippe; Noble, William S

    2015-06-23

    Centromeres are essential for proper chromosome segregation. Despite extensive research, centromere locations in yeast genomes remain difficult to infer, and in most species they are still unknown. Recently, the chromatin conformation capture assay, Hi-C, has been re-purposed for diverse applications, including de novo genome assembly, deconvolution of metagenomic samples and inference of centromere locations. We describe a method, Centurion, that jointly infers the locations of all centromeres in a single genome from Hi-C data by exploiting the centromeres' tendency to cluster in three-dimensional space. We first demonstrate the accuracy of Centurion in identifying known centromere locations from high coverage Hi-C data of budding yeast and a human malaria parasite. We then use Centurion to infer centromere locations in 14 yeast species. Across all microbes that we consider, Centurion predicts 89% of centromeres within 5 kb of their known locations. We also demonstrate the robustness of the approach in datasets with low sequencing depth. Finally, we predict centromere coordinates for six yeast species that currently lack centromere annotations. These results show that Centurion can be used for centromere identification for diverse species of yeast and possibly other microorganisms.
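
The clustering signal Centurion exploits can be caricatured in a few lines: because centromeres colocalize in three-dimensional space, each chromosome's centromeric bin is enriched in inter-chromosomal (trans) Hi-C contacts. This is only a toy per-chromosome argmax, not the joint inference the paper describes:

```python
def call_centromere_bins(trans_counts):
    """Pick, for each chromosome, the bin with the most trans Hi-C contacts.
    trans_counts maps chromosome name -> list of per-bin trans contact totals.
    Centurion instead infers all centromere positions jointly, which is what
    makes it robust at low coverage; this argmax is only the intuition."""
    return {chrom: max(range(len(bins)), key=bins.__getitem__)
            for chrom, bins in trans_counts.items()}
```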

  8. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    PubMed

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. PMID:27174312

  9. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a pattern radiating from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.
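
For a camera translating toward the scene, the radiating-flow observation above yields depth directly: a static point's time to contact is its image distance from the focus of expansion divided by its flow magnitude, and depth is vehicle speed times that. A sketch of this relation only, under the assumption of pure forward translation (the paper's full system additionally uses the epipolar constraint for matching):

```python
import math

def depth_from_foe(x, y, u, v, foe_x, foe_y, speed):
    """Depth of a static point under pure forward camera translation.
    (u, v) is the optical flow at normalized image point (x, y); the flow
    radiates from the focus of expansion (FOE), so Z = speed * r / |flow|,
    where r is the image-plane distance from the FOE."""
    r = math.hypot(x - foe_x, y - foe_y)
    return speed * r / math.hypot(u, v)
```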

  10. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
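
The stepping/bracketing scheme reduces to something like the following: march along the trajectory with a fixed step, and wherever the event function changes sign, bisect the bracket down to the root. A minimal sketch (a production algorithm would also handle tangencies and variable stepping):

```python
import math

def locate_events(f, t0, t1, step, tol=1e-9):
    """Find roots of a continuous event function f on [t0, t1] by fixed
    stepping (detection) followed by bisection (location)."""
    n = int(math.ceil((t1 - t0) / step))
    grid = [t0 + i * (t1 - t0) / n for i in range(n + 1)]
    roots = []
    for a, b in zip(grid, grid[1:]):
        if f(a) * f(b) < 0.0:          # sign change: an event lies in (a, b)
            while b - a > tol:         # bracketing: bisect down to the root
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots
```

An eclipse event function might be, for example, the signed angular separation between the line of sight and the occulting body's limb; any continuous, differentiable function of time plugs into `f`.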

  11. Multiple-Event Location Using the Markov-Chain Monte Carlo Technique

    SciTech Connect

    Myers, S C; Johannesson, G; Hanley, W

    2005-07-13

    The goal of next-generation seismic location is to ascertain a consistent set of event locations and travel-time corrections through simultaneous analysis of all relevant data. Towards that end, we are developing a new multiple-event location algorithm that utilizes the Markov-Chain Monte Carlo (MCMC) method for solving large, non-linear event inverse problems. Unlike most inverse methods, the MCMC approach produces a suite of solutions, each of which is consistent with seismic and other observations, as well as prior estimates of data and model uncertainties. In the MCMC multiple-event locator (MCMCloc), the model uncertainties consist of prior estimates of the accuracy of each input event location, travel-time prediction uncertainties, phase measurement uncertainties, and assessments of phase identification. The prior uncertainty estimates include correlations between travel-time predictions, correlations between measurement errors, and the probability of misidentifying one phase for another (or of bogus picks). The implementation of prior constraints on location accuracy allows the direct utilization of ground-truth events in the location algorithm. This is a significant improvement over most other multiple-event locators (GMEL is an exception), for which location accuracy is achieved through post-processing comparisons with ground-truth information. Like the double-difference algorithm, the implementation of a correlation structure for travel-time predictions allows MCMCloc to operate over arbitrarily large geographic areas. MCMCloc can accommodate non-Gaussian and multi-modal pick distributions, which can enhance application to poorly recorded events. Further, MCMCloc allows for ambiguous determination of phase assignments, and the solution includes the probability that phases are properly assigned. The probabilities that phase assignments are correct are propagated to the estimates of all other model parameters. A posteriori estimates of event locations, path
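
The flavor of MCMC location can be shown on a single event: random-walk Metropolis over (x, y, origin time) with Gaussian pick errors yields a suite of solutions rather than one point estimate. A deliberately stripped-down sketch (uniform velocity, one event, no correlation structure; all parameter values are illustrative, not MCMCloc's):

```python
import math
import random

def mcmc_locate(stations, obs_times, v, sigma, n_iter=20000, seed=1):
    """Metropolis sampling of a 2-D epicentre and origin time from arrival
    times, with independent Gaussian pick errors (std sigma) and flat priors.
    Returns the chain of (x, y, t0) samples."""
    rng = random.Random(seed)

    def log_like(x, y, t0):
        return -0.5 * sum(
            ((t_obs - (t0 + math.hypot(sx - x, sy - y) / v)) / sigma) ** 2
            for (sx, sy), t_obs in zip(stations, obs_times))

    x, y, t0 = 0.0, 0.0, 0.0
    ll = log_like(x, y, t0)
    chain = []
    for _ in range(n_iter):
        # Gaussian random-walk proposal, then Metropolis accept/reject
        xp, yp, tp = x + rng.gauss(0, 1), y + rng.gauss(0, 1), t0 + rng.gauss(0, 0.2)
        llp = log_like(xp, yp, tp)
        if math.log(rng.random()) < llp - ll:
            x, y, t0, ll = xp, yp, tp, llp
        chain.append((x, y, t0))
    return chain
```

The spread of the retained samples is the location uncertainty; MCMCloc extends this idea to many events with correlated travel-time and pick-error models.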

  12. Location of the Green Canyon (Offshore Southern Louisiana) Seismic Event of February 10, 2006

    USGS Publications Warehouse

    Dewey, James W.; Dellinger, Joseph A.

    2008-01-01

    We calculated an epicenter for the Offshore Southern Louisiana seismic event of February 10, 2006 (the 'Green Canyon event') that was adopted as the preferred epicenter for the event by the USGS/NEIC. The event is held at a focal depth of 5 km; the focal depth could not be reliably calculated but was most likely between 1 km and 15 km beneath sea level. The epicenter was calculated with a radially symmetric global Earth model similar to that routinely used at the USGS/NEIC for all earthquakes worldwide. The location was calculated using P-waves recorded by seismographic stations from which the USGS/NEIC routinely obtains seismological data, plus data from two seismic exploration arrays, the Atlantis ocean-bottom node array, operated by BP in partnership with BHP Billiton Limited, and the CGG Green Canyon phase VIII multi-client towed-streamer survey. The preferred epicenter is approximately 26 km north of an epicenter earlier published by the USGS/NEIC, which was obtained without benefit of the seismic exploration arrays. We estimate that the preferred epicenter is accurate to within 15 km. We selected the preferred epicenter from a suite of trial calculations that attempted to fit arrival times of seismic energy associated with the Green Canyon event and that explored the effect of errors in the velocity model used to calculate the preferred epicenter. The various trials were helpful in confirming the approximate correctness of the preferred epicenter and in assessing the accuracy of the preferred epicenter, but none of the trial calculations, including that of the preferred epicenter, was able to reconcile arrival-time observations and assumed velocity model as well as is typical for the vast majority of earthquakes in and near the continental United States. 
We believe that remaining misfits between the preferred solution and the observations reflect errors in interpreted arrival times of emergent seismic phases that are due partly to a temporally extended source

  13. Accurate GPS measurement of the location and orientation of a floating platform. [for sea floor geodesy

    NASA Technical Reports Server (NTRS)

    Purcell, G. H., Jr.; Young, L. E.; Wolf, S. K.; Meehan, T. K.; Duncan, C. B.; Fisher, S. S.; Spiess, F. N.; Austin, G.; Boegeman, D. E.; Lowenstein, C. D.

    1990-01-01

    This article describes the design and initial tests of the GPS portion of a system for making seafloor geodesy measurements. In the planned system, GPS antennas on a floating platform will be used to measure the location of an acoustic transducer, attached below the platform, which interrogates an array of transponders on the seafloor. Since the GPS antennas are necessarily some distance above the transducer, a short-baseline GPS interferometer consisting of three antennas is used to measure the platform's orientation. A preliminary test of several crucial elements of the system was performed. The test involved a fixed antenna on the pier and a second antenna floating on a buoy about 80 m away. GPS measurements of the vertical component of this baseline, analyzed independently by two groups using different software, agree with each other and with an independent measurement within a centimeter. The first test of an integrated GPS/acoustic system took place in the Santa Cruz Basin off the coast of southern California in May 1990. In this test a much larger buoy, designed and built at SIO, was equipped with three GPS antennas and an acoustic transducer that interrogated a transponder on the ocean floor. Preliminary analysis indicates that the horizontal position of the transponder can be determined with a precision of about a centimeter.

  14. An infrastructure for accurate characterization of single-event transients in digital circuits

    PubMed Central

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-01-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694
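
The double-exponential current injection model referenced above is compact enough to state outright: the SET current is the difference of two exponentials, normalized so that its time integral equals the collected charge. The time constants below are placeholders, not the calibrated values from the paper's 3D device simulations:

```python
import math

def set_current(t, q, tau_rise=5.0, tau_fall=200.0):
    """Double-exponential single-event-transient current pulse.
    t and the time constants share one unit (e.g. ps); q is the collected
    charge. The pulse integrates to q over t >= 0."""
    if t < 0.0:
        return 0.0
    return (q / (tau_fall - tau_rise)) * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))
```

Injecting this waveform at a circuit node in Spice-level simulation is the standard way to emulate a particle strike of a given charge.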

  17. Locations and focal mechanisms of deep long period events beneath Aleutian Arc volcanoes using back projection methods

    NASA Astrophysics Data System (ADS)

    Lough, A. C.; Roman, D. C.; Haney, M. M.

    2015-12-01

    Deep long period (DLP) earthquakes are commonly observed in volcanic settings such as the Aleutian Arc in Alaska. DLPs are poorly understood but are thought to be associated with movements of fluids, such as magma or hydrothermal fluids, deep in the volcanic plumbing system. These events have been recognized for several decades, but few studies have gone beyond their identification and location. All long period events are more difficult to identify and locate than volcano-tectonic (VT) earthquakes because traditional detection schemes focus on high-frequency (short period) energy. In addition, DLPs present analytical challenges because they tend to be emergent, making it difficult to accurately pick the onset of arriving body waves. We now expect to find DLPs at most volcanic centers; the challenge lies in identification and location. We aim to reduce the element of human error in location by applying back projection to better constrain the depth and horizontal position of these events. Power et al. (2004) provided the first compilation of DLP activity in the Aleutian Arc. This study focuses on the reanalysis of 162 cataloged DLPs beneath 11 volcanoes in the Aleutian arc (we expect to ultimately identify and reanalyze more DLPs). We are currently adapting the approach of Haney (2014) for volcanic tremor to use back projection over a 4D grid to determine the position and origin time of DLPs. This method holds great potential in that it will allow automated, high-accuracy picking of arrival times and could reduce the number of arrival-time picks necessary for traditional location schemes to well constrain event origins. Back projection can also yield a relative focal mechanism (difficult to obtain with traditional methods due to the emergent nature of DLPs), allowing the first in-depth analysis of source properties. Our event catalog (spanning over 25 years and 11 volcanoes) is one of the longest and largest and enables us to investigate spatial and temporal variation in DLPs.
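
Back projection dispenses with onset picking altogether: stack each station's waveform envelope at the travel time predicted for every candidate source, and take the grid point where the stack peaks. A simplified 3D sketch assuming a homogeneous velocity and an origin time fixed at zero (the study's 4D grid also scans origin time):

```python
import math

def back_project(stations, envelopes, dt, grid, v):
    """Return the grid point whose predicted travel times best align the
    station envelopes (each sampled at interval dt, starting at origin time)."""
    def stack(pt):
        total = 0.0
        for sta, env in zip(stations, envelopes):
            i = int(round(math.dist(sta, pt) / v / dt))
            if 0 <= i < len(env):
                total += env[i]   # envelope amplitude at the predicted arrival
        return total
    return max(grid, key=stack)
```

Because it needs no picked onsets, this style of location is well suited to emergent DLP waveforms.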

  18. A new Bayesian approach of tomography and seismic event location dedicated to the estimation of the true uncertainties

    NASA Astrophysics Data System (ADS)

    Gesret, Alexandrine; Noble, Mark; Desassis, Nicolas; Romary, Thomas

    2013-04-01

    The monitoring of hydrocarbon reservoirs, geothermal reservoirs and mines commonly relies on the analysis of induced seismicity. Even though a large amount of microseismic data has been recorded, the relationship between the exploration and the induced seismicity still needs to be better understood. This microseismicity is also interpreted to derive the fracture network and several physical parameters. The first step is thus to locate the induced seismicity very precisely and to estimate its associated uncertainties. The microseismic location errors are mainly due to the limited knowledge of the wave-propagation medium; the velocity model must therefore first be inverted. We present here a tomography algorithm that estimates the true uncertainties on the resulting velocity model. Building on these results, we develop an approach that yields accurate event locations together with the uncertainties induced by the velocity-model uncertainties. We apply a Markov chain Monte Carlo (MCMC) algorithm to the tomography of calibration shots for a typical 3D-geometry hydraulic fracture context. Our formulation is especially useful for ill-posed inverse problems, as it produces a large number of samples of possible solutions from the posterior probability distribution. All these velocity models are consistent with both the data and the prior information. Our nonlinear approach leads to a very satisfying mean velocity model and to meaningful associated standard deviations. These uncertainty estimates are much more reliable and accurate than sensitivity tests around a single final solution obtained with a linearized inversion approach. The Bayesian approach is commonly used for the computation of the posterior probability density function (PDF) of the event location, as proposed by Tarantola and Valette in 1982 and Lomax in 2000. We add here the propagation of the posterior distribution of the velocity model to the formulation of the posterior PDF of the event
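
Propagating velocity-model uncertainty into the location PDF can be sketched as marginalization: score each candidate location under every velocity sample drawn from the tomography posterior, then average. A toy version assuming a single scalar velocity per posterior sample and an unknown origin time removed by demeaning (the paper works with full 3D velocity models):

```python
import math

def location_pdf(stations, obs_times, velocity_samples, grid, sigma):
    """Posterior over candidate source locations, marginalized over velocity
    samples. For each sampled velocity, score every grid point with a
    Gaussian arrival-time likelihood (residuals demeaned to absorb the
    unknown origin time), then average likelihoods over the samples."""
    pdf = []
    for pt in grid:
        like = 0.0
        for v in velocity_samples:
            res = [t - math.dist(s, pt) / v for s, t in zip(stations, obs_times)]
            mean = sum(res) / len(res)          # absorb unknown origin time
            like += math.exp(-0.5 * sum((r - mean) ** 2 for r in res) / sigma ** 2)
        pdf.append(like / len(velocity_samples))
    total = sum(pdf)
    return [p / total for p in pdf]
```

The resulting PDF is broader than one computed with any single "best" velocity model, which is exactly the point: it reflects the location uncertainty due to the velocity model.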

  19. Accurate modeling and inversion of electrical resistivity data in the presence of metallic infrastructure with known location and dimension

    SciTech Connect

    Johnson, Timothy C.; Wellman, Dawn M.

    2015-06-26

    Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.

  20. One dimensional P wave velocity structure of the crust beneath west Java and accurate hypocentre locations from local earthquake inversion

    SciTech Connect

    Supardiyono; Santosa, Bagus Jaya

    2012-06-20

    A one-dimensional (1-D) velocity model and station corrections for the West Java zone were computed by inverting P-wave arrival times recorded on a local seismic network of 14 stations. A total of 61 local events with a minimum of 6 P-phases, an rms of 0.56 s and a maximum gap of 299° were selected. Comparison with previous earthquake locations shows an improvement for the relocated earthquakes. Tests were carried out to verify the robustness of the inversion results in order to corroborate the conclusions drawn from our research. The obtained minimum 1-D velocity model can be used to improve routine earthquake locations and represents a further step toward more detailed seismotectonic studies in this area of West Java.

  1. The use of propagation path corrections to improve seismic event location in western China

    SciTech Connect

    Cogbill, A.H.; Steck, L.K.

    1998-03-01

    In an effort to improve the ability to locate events in western China using only regional data, the authors have developed propagation path corrections to seismic travel times, and applied such corrections using both traditional location routines and a nonlinear grid-search method. Thus far, they have concentrated on corrections to observed P arrival times. They have constructed such corrections using travel-time observations available from the USGS Earthquake Data Reports, as well as data reported by the ISC. They have also constructed corrections for six stations that are part of the International Monitoring System. For each station having sufficient data, they produce a map of the travel-time residuals from all located events. Large-amplitude residuals are removed by median filtering, and the resulting data are gridded. For a given source location, the correction at a particular station is then interpolated from the correction grid associated with that station. They have constrained the magnitude of the corrections to be ≤ 3 s. They have evaluated the utility of the calculated corrections by applying them to the regional relocation of 10 well-located Chinese nuclear tests, as well as a single well-located aftershock in nearby Kyrgyzstan. The use of corrections having magnitudes > 2 s is troubling when using traditional location codes, as the corrections amount to a nonlinear perturbation correction and, when large, may destabilize the location algorithm. Partly for this reason, the authors have begun using grid-search methods to relocate regional events. Such methods are easy to implement and fully nonlinear. Moreover, the misfit function used to locate the event can very easily be changed; they have used L1- and L2-norm misfit functions, for example. Instances in which multiple local minima occur in a location problem are easily recognized by simply contouring or otherwise displaying the misfit function.
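
A grid-search location of the kind described above is a few lines once a travel-time predictor and station corrections are in hand; note that switching between L1 and L2 misfits is a one-line change, as the abstract emphasizes. A sketch assuming a uniform velocity and illustrative inputs:

```python
import math

def grid_search_locate(stations, obs, v, grid, corrections=None, norm="l1"):
    """Fully nonlinear grid-search location: for each candidate point,
    demean the residuals (absorbing the unknown origin time) and score
    with an L1 or L2 misfit. corrections[i] is an optional propagation
    path correction (s) added to station i's predicted travel time."""
    corrections = corrections or [0.0] * len(stations)

    def misfit(pt):
        res = [t - (math.dist(s, pt) / v + c)
               for s, t, c in zip(stations, obs, corrections)]
        mean = sum(res) / len(res)
        dev = [r - mean for r in res]
        return (sum(abs(d) for d in dev) if norm == "l1"
                else sum(d * d for d in dev))

    return min(grid, key=misfit)
```

Contouring `misfit` over the grid also exposes multiple local minima directly, as noted in the abstract.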

  2. Optimizing the real-time automatic location of the events produced in Romania using an advanced processing system

    NASA Astrophysics Data System (ADS)

    Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu

    2016-04-01

    The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, which is dominated by intermediate-depth earthquakes (60-200 km) from the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the existence of a large number of high-quality observational data. The development of the network in recent years and advanced seismic acquisition are crucial to achieving this objective. The software package used to perform the automatic real-time locations is SeisComP3. An accurate choice of the SeisComP3 configuration parameters is necessary to ensure the best performance of the real-time system, i.e., the most accurate locations for the earthquakes while avoiding any false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate the earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA, filters applied to the waveforms) on a data set of representative earthquakes of the local seismicity. The results are compared with the locations from the Romanian catalogue ROMPLUS.
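
The STA/LTA detector being tuned here compares a short-term average of signal amplitude with a long-term average; a trigger is declared where the ratio crosses a threshold. A straightforward (non-recursive) sketch, with the window lengths and threshold as the parameters one would tune:

```python
def sta_lta(signal, n_sta, n_lta):
    """Classic STA/LTA ratio trace over an amplitude series; values are 0
    until a full LTA window is available. Detections are the samples where
    the ratio exceeds a chosen threshold (e.g. 3)."""
    ratio = []
    for i in range(len(signal)):
        if i + 1 < n_lta:
            ratio.append(0.0)
            continue
        sta = sum(abs(x) for x in signal[i + 1 - n_sta:i + 1]) / n_sta
        lta = sum(abs(x) for x in signal[i + 1 - n_lta:i + 1]) / n_lta
        ratio.append(sta / lta if lta > 0.0 else 0.0)
    return ratio
```

Shorter STA windows react faster but trigger more readily on noise bursts, which is the trade-off such tuning studies evaluate against a reference catalogue.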

  3. Improving Seismic Event Location: An Alternative to Three-dimensional Structural Models

    NASA Astrophysics Data System (ADS)

    Piromallo, C.; Morelli, A.

    We devise and apply a method to account for the effect of the aspherical structure of the Earth in locating earthquakes. This technique relies upon the ability to detect the average structural signal present in the residuals between source and receiver and to correct for this signal during location, using a phenomenological description that we call Empirical Heterogeneity Corrections (EHC). EHC are employed in the relocation of a large set of well-constrained teleseismic earthquakes selected among the events reported by the Bulletins of the International Seismological Centre for 1964-1995. The rms length of the EHC relocation vectors for these events is about 10 km. The method is also tested against a selected set of ground-truth events, both earthquakes and explosions, whose locations are independently known by nonseismic means. The rms length of the mislocation vectors for the test events, compared to their original mislocation in the reference 1-D model SP6, is reduced in the EHC relocation by 17% for explosions and 12% for earthquakes. Our technique provides a successful alternative to the use of 3-D structural models, approximately reaching the same effectiveness in improving event location.

  4. The use of propagation path corrections to improve regional seismic event location in western China

    SciTech Connect

    Steck, L.K.; Cogbill, A.H.; Velasco, A.A.

    1999-03-01

    In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines and a nonlinear grid search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events, using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) for a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants, but requires that the authors have adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise ratios for the travel-time picks from the waveform data set.
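The separation of measurement and modeling error described above amounts to treating the two variances as independent and additive when weighting residuals. A toy sketch, with all numbers hypothetical and a plain inverse-variance mean standing in for the kriging step:

```python
import numpy as np

# Hypothetical residuals (s) at one station from well-located teleseisms,
# each with a picking (measurement) error and a distance-dependent
# modeling error; none of these values come from the actual study.
residuals   = np.array([0.8, 1.1, 0.6, 1.4, 0.9])   # observed - predicted (s)
pick_sigma  = np.array([0.1, 0.5, 0.1, 0.8, 0.2])   # picking error (s)
model_sigma = np.array([0.6, 0.6, 0.7, 0.7, 0.6])   # modeling error (s)

# Independent errors add in variance; the inverse-variance weighted mean
# is the natural point estimate of the path correction at this station.
var = pick_sigma**2 + model_sigma**2
w = 1.0 / var
ppc = np.sum(w * residuals) / np.sum(w)
ppc_sigma = np.sqrt(1.0 / np.sum(w))
```

Kriging generalizes this idea spatially: observations with large total variance pull less on the interpolated PPC surface, which is why the error separation matters.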

  5. A new method for producing automated seismic bulletins: Probabilistic event detection, association, and location

    SciTech Connect

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.; Brogan, Ronald

    2015-10-01

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.
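The detection-association loop described above can be illustrated with a toy one-dimensional version: stations on a line, a constant assumed velocity, and a Gaussian station-specific fitness summed over stations at each (node, origin-time) grid point. All geometry and parameters here are invented for illustration.

```python
import numpy as np

stations = np.array([0.0, 40.0, 80.0, 120.0])   # station positions (km), hypothetical
v = 6.0                                          # assumed uniform P velocity (km/s)
nodes = np.arange(0.0, 121.0, 1.0)               # candidate source locations (km)
origins = np.arange(0.0, 20.0, 0.5)              # candidate origin times (s)
sigma = 0.5                                      # assumed arrival-time uncertainty (s)

# Synthetic picks from a "true" event at x = 60 km, t0 = 10 s, plus one stray pick
picks = np.abs(stations - 60.0) / v + 10.0
picks = np.append(picks, 3.0)

def fitness(node, t0):
    """Sum of station-specific conditional fitness values: each station
    contributes the best Gaussian match between its predicted arrival
    and any available pick."""
    pred = t0 + np.abs(stations - node) / v
    return sum(np.exp(-0.5 * ((picks - p) / sigma) ** 2).max() for p in pred)

grid = np.array([[fitness(n, t) for t in origins] for n in nodes])
i, j = np.unravel_index(np.argmax(grid), grid.shape)
best_node, best_t0 = nodes[i], origins[j]

# Association step: keep only picks consistent with the winning hypothesis;
# the stray pick survives for a later pass of the search loop
pred = best_t0 + np.abs(stations - best_node) / v
associated = [p for p in picks if np.min(np.abs(pred - p)) < 3.0 * sigma]
```

In the full method the loop would then remove the associated arrivals and repeat the grid search on the remainder until no node exceeds the minimum fitness.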

  6. A new method for producing automated seismic bulletins: Probabilistic event detection, association, and location

    DOE PAGES

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.; Brogan, Ronald

    2015-10-01

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.

  7. Accurate Analysis of the Change in Volume, Location, and Shape of Metastatic Cervical Lymph Nodes During Radiotherapy

    SciTech Connect

    Takao, Seishin; Tadano, Shigeru; Taguchi, Hiroshi; Yasuda, Koichi; Onimaru, Rikiya; Ishikawa, Masayori; Bengua, Gerard; Suzuki, Ryusuke; Shirato, Hiroki

    2011-11-01

    Purpose: To establish a method for the accurate acquisition and analysis of the variations in tumor volume, location, and three-dimensional (3D) shape of tumors during radiotherapy in the era of image-guided radiotherapy. Methods and Materials: Finite element models of lymph nodes were developed based on computed tomography (CT) images taken before the start of treatment and every week during the treatment period. A surface geometry map with a volumetric scale was adopted and used for the analysis. Six metastatic cervical lymph nodes, 3.5 to 55.1 cm³ before treatment, in 6 patients with head and neck carcinomas were analyzed in this study. Three fiducial markers implanted in mouthpieces were used for the fusion of CT images. Changes in the location of the lymph nodes were measured on the basis of these fiducial markers. Results: The surface geometry maps showed convex regions in red and concave regions in blue to ensure that the characteristics of the 3D tumor geometries are simply understood visually. After the irradiation of 66 to 70 Gy in 2 Gy daily doses, the patterns of the colors had not changed significantly, and the maps before and during treatment were strongly correlated (average correlation coefficient was 0.808), suggesting that the tumors shrank uniformly, maintaining the original characteristics of the shapes in all 6 patients. The movement of the gravitational center of the lymph nodes during the treatment period was everywhere less than ±5 mm except in 1 patient, in whom the change reached nearly 10 mm. Conclusions: The surface geometry map was useful for an accurate evaluation of the changes in volume and 3D shapes of metastatic lymph nodes. The fusion of the initial and follow-up CT images based on fiducial markers enabled an analysis of changes in the location of the targets. Metastatic cervical lymph nodes in patients were suggested to decrease in size without significant changes in the 3D shape during radiotherapy.

  8. Using Ancillary Information to Improve Hypocenter Estimation: Bayesian Single Event Location (BSEL).

    SciTech Connect

    Fagan, Deborah; Taylor, Steve R.; Schult, Frederick R.; Anderson, Dale N.

    2009-04-01

    We have developed and tested an algorithm, Bayesian Single Event Location (BSEL), for estimating the location of a seismic event. The estimation approach differs from established non-linear regression techniques by using a Bayesian prior probability density function (prior PDF) to incorporate ancillary physical basis constraints about event location. P wave arrival times from seismic events are used in the development. Depth, a focus of this paper, may be modeled with a prior PDF (potentially skewed) that captures physical basis bounds from ancillary event characteristics. For instance, the surface wave Rg is present in a waveform only when an event is shallow. A high-confidence Rg detection in one or more event waveforms implies a shallow-skewed prior PDF for the depth parameter. This PDF is constructed from a physically-based Rayleigh wave depth excitation eigenfunction that is based on the minimum period of observation from a spectrogram analysis and estimated near-source elastic parameters. The proposed Bayesian algorithm is illustrated with events that demonstrate its congruity with established hypocenter estimation methods and its application potential. The BSEL method is applied to the Mw 5.6 Dillon, MT earthquake of July 26, 2005 and a shallow Mw 4 earthquake that occurred near Bardwell, KY on June 6, 2003. In both cases we simulate BSEL on a small subset of arrival time data to illustrate the power of the technique. No Rg was actually observed for the Dillon, MT earthquake, but we used the minimum observed period of a Rayleigh wave (7 seconds) to reduce the depth and origin time uncertainty. A strong Rg was observed from the Bardwell, KY earthquake that places very strong constraints on depth and origin time.
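The effect of a shallow-skewed depth prior can be sketched on a grid: a broad likelihood from arrival times alone is multiplied by a prior that decays with depth, pulling the maximum a posteriori (MAP) depth shallower than the likelihood peak. Here a simple exponential stands in for the Rg excitation eigenfunction, and all numbers are invented.

```python
import numpy as np

depths = np.linspace(0.0, 30.0, 301)   # trial depths (km), 0.1 km spacing

# Toy likelihood from arrival times alone: broad Gaussian peaking at 12 km
likelihood = np.exp(-0.5 * ((depths - 12.0) / 6.0) ** 2)

# Shallow-skewed prior standing in for the Rg-based depth constraint;
# the 10 km decay constant is an invented illustration
prior = np.exp(-depths / 10.0)

posterior = likelihood * prior
posterior /= posterior.sum()           # normalize over the grid

map_depth = depths[np.argmax(posterior)]   # shallower than the 12 km peak
```

For these Gaussian-times-exponential choices the MAP shifts analytically to 12 - 36/10 = 8.4 km, which the grid search reproduces; a stronger Rg detection would justify a faster-decaying prior and an even shallower estimate.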

  9. Finding Faces Among Faces: Human Faces are Located More Quickly and Accurately than Other Primate and Mammal Faces

    PubMed Central

    Simpson, Elizabeth A.; Buchin, Zachary; Werner, Katie; Worrell, Rey; Jakobsen, Krisztina V.

    2014-01-01

    We tested the specificity of human face search efficiency by examining whether there is a broad window of detection for various face-like stimuli—human and animal faces—or whether own-species faces receive greater attentional allocation. We assessed the strength of the own-species face detection bias by testing whether human faces are located more efficiently than other animal faces, when presented among various other species’ faces, in heterogeneous 16-, 36-, and 64-item arrays. Across all array sizes, we found that, controlling for distractor type, human faces were located faster and more accurately than primate and mammal faces, and that, controlling for target type, searches were faster when distractors were human faces compared to animal faces, revealing more efficient processing of human faces regardless of their role as targets or distractors (Experiment 1). Critically, these effects remained when searches were for specific species’ faces (human, chimpanzee, otter), ruling out a category-level explanation (Experiment 2). Together, these results suggest that human faces may be processed more efficiently than animal faces, both when task-relevant (targets), and when task-irrelevant (distractors), even when in direct competition with other faces. These results suggest that there is not a broad window of detection for all face-like patterns, but that human adults process own-species’ faces more efficiently than other species’ faces. Such own-species search efficiencies may arise through experience with own-species faces throughout development, or may be privileged early in development, due to the evolutionary importance of conspecifics’ faces. PMID:25113852

  10. Using Ancillary Information to Improve Hypocenter Estimation: Bayesian Single Event Location (BSEL)

    SciTech Connect

    Fagan, Deborah K.; Taylor, Steven R.; Schult, Frederick R.; Anderson, Dale N.

    2009-04-01

    Abstract: We have developed and tested a unifying algorithm for estimating the location of a seismic event. The estimation approach differs from established non-linear regression techniques by using Bayesian priors to incorporate ancillary physical basis constraints and knowledge about the event location. P-wave (primary) arrival times from seismic data waveforms are used in the development. Depth, a focus of this paper, may be modeled with a probability density function (potentially skewed) that captures physical basis bounds on depth from ancillary event characteristics. For instance, the surface wave Rg is present in a waveform only when an event is shallow. A high confidence Rg detection in one or more event waveforms can lead one to assume a shallow-skewed prior probability density function for the depth parameters. The proposed Bayesian algorithm is illustrated with a magnitude 5.6 earthquake in southwest Montana by comparing results with good and poor station configurations. A noninformative uniform prior is used in both cases to demonstrate a hypocenter estimate that is equivalent to the established non-linear regression approach.

  11. The LLNL-G3D global P-wave velocity model and the significance of the BayesLoc multiple-event location procedure

    NASA Astrophysics Data System (ADS)

    Simmons, N. A.; Myers, S. C.; Johannesson, G.; Matzel, E.

    2011-12-01

    LLNL-G3D is a global-scale model of P-wave velocity designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The underlying goal of the model is to provide enhanced seismic event location capabilities. Previous versions of LLNL-G3D (versions 1 and 2) provide substantial improvements in event location accuracy via 3-D ray tracing. The latest models are based on ~2.7 million P and Pn arrivals that are re-processed using our global multiple-event locator known as BayesLoc. BayesLoc is a formulation of the joint probability distribution across multiple-event location parameters, including hypocenters, travel time corrections, pick precision, and phase labels. Modeling the whole multiple-event system results in accurate locations and an internally consistent data set that is ideal for tomography. Our recently developed inversion approach (called Progressive Multi-level Tessellation Inversion, or PMTI) captures regional trends and fine details where the data warrant. Using PMTI, we model multiple heterogeneity scale lengths without defining parameter grids with variable densities based on ad hoc criteria. LLNL-G3Dv3 (version 3) is produced with data generated with the BayesLoc procedure, recently modified to account for localized travel time trends via a multiple-event clustering technique. We demonstrate the significance of BayesLoc processing, its impact on the resulting tomographic images, and the application of LLNL-G3D to seismic event location. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-491805.

  12. A method for detecting and locating geophysical events using groups of arrays

    NASA Astrophysics Data System (ADS)

    de Groot-Hedlin, Catherine D.; Hedlin, Michael A. H.

    2015-11-01

    We have developed a novel method to detect and locate geophysical events that makes use of any sufficiently dense sensor network. This method is demonstrated using acoustic sensor data collected in 2013 at the USArray Transportable Array (TA). The algorithm applies Delaunay triangulation to divide the sensor network into a mesh of three-element arrays, called triads. Because infrasound waveforms are incoherent between the sensors within each triad, the data are transformed into envelopes, which are cross-correlated to find signals that satisfy a consistency criterion. The propagation azimuth, phase velocity and signal arrival time are computed for each signal. Triads with signals that are consistent with a single source are bundled as an event group. The ensemble of arrival times and azimuths of detected signals within each group are used to locate a common source in space and time. A total of 513 infrasonic stations that were active for part or all of 2013 were divided into over 2000 triads. Low (0.5-2 Hz) and high (2-8 Hz) catalogues of infrasonic events were created for the eastern USA. The low-frequency catalogue includes over 900 events and reveals several highly active source areas on land that correspond with coal mining regions. The high-frequency catalogue includes over 2000 events, with most occurring offshore. Although their cause is not certain, most events are clearly anthropogenic as almost all occur during regular working hours each week. The regions to which the TA is most sensitive vary seasonally, with the direction of reception dependent on the direction of zonal winds. The catalogue has also revealed large acoustic events that may provide useful insight into the nature of long-range infrasound propagation in the atmosphere.
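The triangulation step can be sketched with SciPy's Delaunay routine; the sensor coordinates below are random stand-ins for the TA station geometry, and the aperture check is a simplified proxy for the consistency criteria described.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical sensor coordinates standing in for a dense network
rng = np.random.default_rng(1)
sensors = rng.uniform(0.0, 5.0, size=(20, 2))

tri = Delaunay(sensors)
triads = tri.simplices                 # each row: indices of one three-element array

def aperture(idx):
    """Longest inter-sensor distance within a triad, a crude proxy for
    whether signal envelopes can be meaningfully compared across it."""
    p = sensors[idx]
    pairs = [(0, 1), (1, 2), (0, 2)]
    return max(np.linalg.norm(p[a] - p[b]) for a, b in pairs)

apertures = np.array([aperture(t) for t in triads])
compact = triads[apertures < 2.0]      # keep only sufficiently small triads
```

In the full method, each surviving triad would then cross-correlate signal envelopes to estimate azimuth, phase velocity, and arrival time before triads are bundled into event groups.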

  13. Improvement of IDC/CTBTO Event Locations in Latin America and the Caribbean Using a Regional Seismic Travel Time Model

    NASA Astrophysics Data System (ADS)

    Given, J. W.; Guendel, F.

    2013-05-01

    The International Data Centre (IDC) is a vital element of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) verification mechanism. The fundamental mission of the IDC is to collect, process, and analyze monitoring data and to present the results as event bulletins to Member States. For the IDC, and in particular for the waveform technologies, a key measure of the quality of its products is the accuracy with which every detected event is located. Accurate event location is crucial for purposes of an On-Site Inspection (OSI), which would confirm the conduct of a nuclear test. It is therefore important for IDC monitoring and data analysis to adopt new processing algorithms that improve the accuracy of event location. Among them, new algorithms that compute regional seismic travel times through 3-dimensional models have greatly increased the IDC's location precision and reduced computational time, allowing forward and inverse modeling of large data sets. One of these algorithms has been the Regional Seismic Travel Time (RSTT) model of Myers et al. (2011). The RSTT model is nominally a global model; however, it currently covers only North America and Eurasia in sufficient detail. It is the intention of the CTBTO's Provisional Technical Secretariat and the IDC to extend the RSTT model to other regions of the earth, e.g., Latin America-Caribbean, Africa, and Asia. This is particularly important for the IDC location procedure, as there are regions of the earth for which crustal models are not well constrained. For this purpose the IDC has launched an RSTT initiative. In May 2012, a technical meeting was held in Vienna under the auspices of the CTBTO. The purpose of this meeting was to invite National Data Centre experts as well as network operators from Africa, Europe, the Middle East, Asia, Australia, Latin and North America to discuss the context under which a project to extend the RSTT model would be implemented. A total of 41 participants from 32 Member States

  14. Using ancillary information to improve hypocenter estimation: Bayesian single event location (BSEL)

    SciTech Connect

    Anderson, Dale N

    2008-01-01

    We have developed and tested an algorithm, Bayesian Single Event Location (BSEL), for estimating the location of a seismic event. The main driver for our research is the inadequate representation of ancillary information in the hypocenter estimation procedure. The added benefit is that we have also addressed instability issues often encountered with historical NLR solvers (e.g., non-convergence or seismically infeasible results). BSEL differs from established nonlinear regression techniques by using a Bayesian prior probability density function (prior PDF) to incorporate ancillary physical basis constraints about event location. P-wave arrival times from seismic events are used in the development. Depth, a focus of this paper, may be modeled with a prior PDF (potentially skewed) that captures physical basis bounds from surface wave observations. This PDF is constructed from a Rayleigh wave depth excitation eigenfunction that is based on the observed minimum period from a spectrogram analysis and estimated near-source elastic parameters. For example, if the surface wave is an Rg phase, it potentially provides a strong constraint for depth, which has important implications for remote monitoring of nuclear explosions. The proposed Bayesian algorithm is illustrated with events that demonstrate its congruity with established hypocenter estimation methods and its application potential. The BSEL method is applied to three events: (1) A shallow Mw 4 earthquake that occurred near Bardwell, KY on June 6, 2003, (2) the Mw 5.6 earthquake of July 26, 2005 that occurred near Dillon, MT, and (3) a deep Mw 5.7 earthquake that occurred off the coast of Japan on April 22, 1980. A strong Rg was observed from the Bardwell, KY earthquake that places very strong constraints on depth and origin time. No Rg was observed for the Dillon, MT earthquake, but we used the minimum observed period of a Rayleigh wave (7 seconds) to reduce the depth and origin time uncertainty. Because the Japan

  15. A New Characteristic Function for Fast Time-Reverse Seismic Event Location

    NASA Astrophysics Data System (ADS)

    Hendriyana, Andri; Bauer, Klaus; Weber, Michael; Jaya, Makky; Muksin, Muksin

    2015-04-01

    Microseismicity produced by natural activity is usually characterized by low signal-to-noise ratios and huge amounts of data, as recording is conducted over long periods of time. Locating microseismic events is preferably carried out using migration-based methods such as time-reverse modeling (TRM). The original TRM is based on backpropagating the wavefield from the receivers down to the source location. Alternatively, we use a characteristic function (CF) derived from the measured wavefield as input for the TRM. The motivation for such a strategy is to avoid undesired contributions from secondary arrivals, which may generate artifacts in the final images. In this presentation, we introduce a new CF as input for the TRM method. To obtain this CF, we initially apply kurtosis-based automatic onset detection and then convolve with a given wavelet. The convolution with low-frequency wavelets allows us to conduct time-reverse modeling using coarser sampling, which reduces computing time. We apply the method to locate seismic events measured along an active part of the Sumatra Fault around the Tarutung pull-apart basin (North Sumatra, Indonesia). The results show that the seismic events are well determined, as they are concentrated along the Sumatran fault, and internal details of the Tarutung basin structure could be derived. Our results are consistent with those obtained from inversion of manually picked travel time data.
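A minimal version of a kurtosis-derived characteristic function might look as follows. The window length, wavelet width, and synthetic trace are invented, and the Ricker wavelet is written out by hand since `scipy.signal.ricker` is no longer available in recent SciPy releases.

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(x, win):
    """Sliding-window kurtosis; its positive time-derivative peaks at
    impulsive onsets and is largely insensitive to absolute amplitude."""
    k = np.zeros(len(x))
    for i in range(win, len(x)):
        k[i] = kurtosis(x[i - win:i])
    dk = np.diff(k, prepend=k[0])
    return np.clip(dk, 0.0, None)      # keep only kurtosis increases

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, written out by hand."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

# Synthetic noisy trace with an impulsive burst starting at sample 1200
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 2000)
x[1200:1260] += 6.0 * np.sin(np.linspace(0.0, 12.0 * np.pi, 60))

cf = kurtosis_cf(x, win=100)
# Convolving with a low-frequency wavelet smooths the CF so the reverse
# modeling can run on a coarser time sampling
smooth_cf = np.convolve(cf, ricker(101, 10.0), mode="same")
onset = int(np.argmax(cf))
```

The sharp, one-sided CF replaces the oscillatory waveform in the backpropagation, which is what suppresses artifacts from secondary arrivals.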

  16. Application of an Artificial Intelligence Method for Velocity Calibration and Events Location in Microseismic Monitoring

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Chen, X.

    2013-12-01

    Good-quality hydraulic fracture maps are heavily dependent upon the best possible velocity structure. A Particle Swarm Optimization inversion scheme, an artificial intelligence technique for velocity calibration and event location, could serve as a viable option able to produce high-quality results. Using perforation data to recalibrate the 1D isotropic velocity model derived from dipole sonic logs (or even without them), we are able to obtain the initial velocity model used for subsequent event location. Velocity parameters can be inverted, as well as layer thicknesses, through an iterative procedure. Performing inversion without integrating available data is unlikely to produce reliable results, especially if there is only one perforation shot and a single poorly covering array along with low signal-to-noise ratio signals. The inversion method was validated via simulations and compared to the Fast Simulated Annealing approach and the Conjugate Gradient method. Further velocity model refinement can be accomplished while calculating event locations during the iterative procedure, minimizing the residuals from both sides. This artificial intelligence technique also shows promise for the joint inversion of data from large-scale seismic activity.
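A bare-bones particle swarm for the perforation-shot calibration step can be sketched as follows. The straight-ray, single-velocity forward model, the swarm coefficients, and all numbers are hypothetical simplifications of the scheme described in the abstract.

```python
import numpy as np

# Perforation shot at a known location; observed first arrivals (s) at
# receivers with known offsets. We invert one effective velocity and a
# static time shift with a minimal particle swarm. All values invented.
offsets = np.array([300.0, 500.0, 800.0, 1200.0])   # m
true_v, true_t0 = 4500.0, 0.02
obs = offsets / true_v + true_t0                     # noise-free synthetics

def misfit(p):
    v, t0 = p
    return np.sum((offsets / v + t0 - obs) ** 2)

rng = np.random.default_rng(3)
n, iters = 30, 200
lo, hi = np.array([2000.0, -0.1]), np.array([7000.0, 0.1])   # search bounds
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    # standard inertia + cognitive + social update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
```

Being derivative-free, the swarm is indifferent to the non-smoothness that layer-thickness parameters introduce, which is one reason global stochastic searches suit this calibration problem.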

  17. Quantifying uncertainties in location and source mechanism for Long-Period events at Mt Etna, Italy.

    NASA Astrophysics Data System (ADS)

    Cauchie, Léna; Saccorotti, Gilberto; Bean, Christopher

    2014-05-01

    The manifestation of Long-Period (LP) events is documented at many volcanoes worldwide, but the mechanism at their origin is still a subject of discussion. Models proposed so far involve (i) the resonance of fluid-filled cracks or conduits triggered by fluid instabilities or the brittle failure of highly viscous magmas and (ii) slow-rupture earthquakes in the shallow portion of volcanic edifices. Since LP activity usually precedes and accompanies volcanic eruptions, understanding these sources is important for hazard assessment and eruption early warning. This work is thus primarily aimed at assessing the uncertainties in the determination of LP source properties that result from poor knowledge of the velocity structure and from location errors. We used data from temporary networks deployed on Mt Etna in 2005. During August 2005, about 13000 LP events were detected through a STA/LTA approach and were classified into two families on the basis of waveform similarity. For each family of events, we located the source using three different approaches: (1) a single-station location method based on the back-propagation of the polarization vector estimated from covariance analysis of three-component signals; (2) multi-channel analysis of data recorded by two seismic arrays; (3) relative locations based on inversion of differential times obtained through cross-correlation of similar waveforms. For all three methods, the solutions are very sensitive to the chosen velocity model. We thus iterated the location procedure for different medium properties; the preferred velocity is that for which the results obtained with the three methods are consistent with each other. For each family, we then defined a volume of possible source locations and performed a full-waveform moment tensor (MT) inversion for the entire catalog of events. In this manner, we obtained an MT solution for each grid node of the investigated volume. The MT

  18. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    SciTech Connect

    Kvaerna, T.; Gibbons, S.J.; Ringdal, F.; Harris, D.B.

    2007-01-30

    In the field of nuclear explosion monitoring, it has become a priority to detect, locate, and identify seismic events down to increasingly small magnitudes. The consideration of smaller seismic events has implications for a reliable monitoring regime. Firstly, the number of events to be considered increases greatly; an exponential increase in naturally occurring seismicity is compounded by large numbers of seismic signals generated by human activity. Secondly, the signals from smaller events become more difficult to detect above the background noise and estimates of parameters required for locating the events may be subject to greater errors. Thirdly, events are likely to be observed by a far smaller number of seismic stations, and the reliability of event detection and location using a very limited set of observations needs to be quantified. For many key seismic stations, detection lists may be dominated by signals from routine industrial explosions which should be ascribed, automatically and with a high level of confidence, to known sources. This means that expensive analyst time is not spent locating routine events from repeating seismic sources and that events from unknown sources, which could be of concern in an explosion monitoring context, are more easily identified and can be examined with due care. We have obtained extensive lists of confirmed seismic events from mining and other artificial sources which have provided an excellent opportunity to assess the quality of existing fully-automatic event bulletins and to guide the development of new techniques for online seismic processing. Comparing the times and locations of confirmed events from sources in Fennoscandia and NW Russia with the corresponding time and location estimates reported in existing automatic bulletins has revealed substantial mislocation errors which preclude a confident association of detected signals with known industrial sources. The causes of the errors are well understood and are

  19. Coronal Waves and Solar Energetic Particle Events Observed at Widely Separate Locations

    NASA Astrophysics Data System (ADS)

    Nitta, N.; Jian, L.; Gomez-Herrero, R.

    2015-12-01

    During solar cycle 24, thanks largely to the Solar Terrestrial Relations Observatory (STEREO), many solar energetic particle (SEP) events have been observed at widely separated locations in the heliosphere, even including impulsive events that are usually assumed to reflect localized acceleration and injection. It is found that many of these wide SEP events accompany coronal waves that typically appear in extreme-ultraviolet (EUV) images. The EUV wave phenomenon has been observed much more closely than before by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory, which continuously produces full-disk EUV images with unprecedentedly fast cadence and high sensitivity in multiple wavelength bands covering a broad temperature range. This is complemented by the EUV Imager on STEREO, which traces the wave front into regions inaccessible from Earth. Several authors have attempted to explain wide SEP events in terms of EUV waves, especially by comparing the SEP release times with how and when the EUV wave fronts traverse the magnetic footprints of the SEP locations. They have come to mixed results. The primary reason may be that they tend to overlook or underestimate the uncertainties inherent in such work. For example, how well do we model magnetic field connection in the corona and heliosphere? Do we adequately take into account the evolving solar wind conditions? Here we study a number of SEP events with various angular spreads in comparison with newly analyzed EUV waves. We discuss the importance of including the above-mentioned uncertainties as well as understanding EUV waves as part of the 3-D propagation of CME-driven shock waves into the coronagraph fields of view. Without these approaches, it may remain ambiguous how much of the angular spread of SEP events is attributable to coronal shock waves.

  20. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases makes it possible to take decisions before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications or the time response of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  1. Children who experienced a repeated event only appear less accurate in a second interview than those who experienced a unique event.

    PubMed

    Price, Heather L; Connolly, Deborah A; Gordon, Heidi M

    2016-08-01

    When children have experienced a repeated event, reports of experienced details may be inconsistently reported across multiple interviews. In 3 experiments, we explored the consistency of children's reports of an instance of a repeated event after a long delay (Exp. 1, N = 53, Mage = 7.95 years; Exp. 2, N = 70, Mage = 5.77 years; Exp. 3, N = 59, Mage = 4.88 years). In all experiments, children experienced either 1 or 4 activity sessions, followed at a relatively short delay (days or weeks) by an initial memory test. Then, following a longer delay (4 months or 1 year), children were reinterviewed with the same memory questions. We analyzed the consistency of children's memory reports across the 2 interviews, as well as forgetting, reminiscence, and accuracy, defined with both narrow and broad criteria. A highly consistent pattern was observed across the 3 experiments, with children who experienced a single event appearing more consistent than children who experienced a repeated event. We conclude that inconsistencies across multiple interviews can be expected from children who have experienced repeated events, and these inconsistencies are often reflective of accurate, but different, recall. PMID:27149287

  2. Event Detection and Location of Earthquakes Using the Cascadia Initiative Dataset

    NASA Astrophysics Data System (ADS)

    Morton, E.; Bilek, S. L.; Rowe, C. A.

    2015-12-01

    The Cascadia subduction zone (CSZ) produces a range of slip behavior along the plate boundary megathrust, from great earthquakes to episodic slow slip and tremor (ETS). Unlike other subduction zones that produce great earthquakes and ETS, the CSZ is notable for the lack of small and moderate magnitude earthquakes recorded. The seismogenic zone extent is currently estimated to be primarily offshore, thus the lack of observed small, interplate earthquakes may be partially due to the use of only land seismometers. The Cascadia Initiative (CI) community seismic experiment seeks to address this issue by including ocean bottom seismometers (OBS) deployed directly over the locked seismogenic zone, in addition to land seismometers. We use these seismic data to explore whether small magnitude earthquakes are occurring on the plate interface, but have gone undetected by the land-based seismic networks. We select a subset of small magnitude (M0.1-3.7) earthquakes from existing earthquake catalogs, based on land seismic data, whose preliminary hypocentral locations suggest they may have occurred on the plate interface. We window the waveforms on CI OBS and land seismometers around the phase arrival times for these earthquakes to generate templates for subspace detection, which allows for additional flexibility over traditional matched filter detection methods. Here we present event detections from the first year of CI deployment and preliminary locations for the detected events. Initial results of scanning the first year of the CI deployment using one cluster of template events, located near a previously identified subducted seamount, include 473 detections on OBS station M08A (~61.6 km offshore) and 710 detections on OBS station J25A (~44.8 km northeast of M08A). Ongoing efforts include detection using additional OBS stations along the margin, as well as determining locations of clusters detected in the first year of deployment.
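    A minimal sketch of the detection idea, assuming a matched-filter (single-template) variant; the subspace detection used above generalizes this by projecting onto several basis waveforms rather than one. The synthetic data and threshold are illustrative:

```python
import numpy as np

def ncc_detect(template, data, threshold=0.8):
    """Slide a template over continuous data and return sample offsets
    where the normalized cross-correlation exceeds the threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        s = w.std()
        if s == 0:
            continue
        cc = np.dot(t, (w - w.mean()) / s)  # NCC in [-1, 1]
        if cc >= threshold:
            detections.append((i, cc))
    return detections

# Synthetic test: bury two copies of a wavelet in background noise.
rng = np.random.default_rng(0)
wavelet = np.sin(2 * np.pi * np.arange(50) / 10) * np.hanning(50)
data = 0.1 * rng.standard_normal(2000)
data[300:350] += wavelet
data[1200:1250] += wavelet
hits = ncc_detect(wavelet, data, threshold=0.7)
```

    Each hit is an (offset, correlation) pair; in practice detections clustered within a template length are merged into a single event.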

  3. A Topic Modeling Based Representation to Detect Tweet Locations: Example of the Event "Je Suis Charlie"

    NASA Astrophysics Data System (ADS)

    Morchid, M.; Josselin, D.; Portilla, Y.; Dufour, R.; Altman, E.; Linarès, G.

    2015-09-01

    Social networks have become a major actor in information propagation. Using the popular Twitter platform, mobile users post or relay messages from different locations. The tweet content, meaning and location show how an event, such as the bursty "JeSuisCharlie" event that happened in France in January 2015, is comprehended in different countries. This research aims at clustering tweets according to the co-occurrence of their terms, including the country, and forecasting the probable country of a non-located tweet from its content. First, we present the process of collecting a large quantity of data from the Twitter website, yielding a set of 2,189 located tweets about "Charlie" from the 7th to the 14th of January. We describe an original method adapted from the Author-Topic (AT) model based on the Latent Dirichlet Allocation (LDA) method. We define a homogeneous space containing both lexical content (words) and spatial information (country). During a training process on part of the sample, we derive a set of clusters (topics) based on statistical relations between lexical and spatial terms. In a clustering task, we evaluate the method's effectiveness on the rest of the sample, reaching up to 95% correct assignment. This shows that our model is well suited to predicting tweet location after a learning process.

  4. Tectonic tremor locations along the western Mexico subduction zone using stacked waveforms of similar events

    NASA Astrophysics Data System (ADS)

    Schlanser, K. M.; Brudzinski, M. R.; Holtkamp, S. G.; Shelly, D. R.

    2011-12-01

    Tectonic (non-volcanic) tremor is difficult to locate due to its emergent nature, but locating it is critical for assessing its impact on the plate-interface slip budget. Tectonic tremor has been observed in the Jalisco, Colima, and Michoacán regions of southern Mexico using the MARS seismic network. A semi-automated approach, in which analyst-refined relative arrival times are inverted for source locations using a 1-D velocity model, has previously produced hundreds of source locations. The results show tectonic tremor shifting from near the 50 km contour to the 20 km contour going from east to west, with the latter epicenters hugging the coastline. There is little room between the tectonic tremor and the seismogenic zone for a wide intervening slow slip region like that seen in other regions of the Mexican subduction zone, suggesting a potentially different source process than tremor elsewhere. This study seeks to refine the tremor source locations by stacking families of similar events to enhance the signal-to-noise ratio and bring out clear P- and S-wave arrivals even for low-amplitude sources at noisier stations. Well-defined tremor bursts within the Jalisco, Colima, and Michoacán regions from previous results are being used to define 6 s template waveforms that are matched to similar waveforms through cross-correlation over the entire duration of recording. After stacking the similar events, the clarified arrival times will be used to refine the source locations. Particular attention will be paid to whether the tremor families form a dipping linear feature consistent with the plate interface and whether tremor associated with the Rivera plate is as shallow (~20 km) as it appears from previous results.
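    The stacking step can be sketched as follows: align each similar event to a reference trace by its cross-correlation lag, then average, so coherent arrivals grow relative to noise (roughly as the square root of the number of traces). This is a generic sketch on synthetic data, not the study's processing chain.

```python
import numpy as np

def align_and_stack(traces, ref_idx=0):
    """Align each trace to a reference via its cross-correlation lag,
    then average; coherent signal is reinforced while noise cancels."""
    ref = traces[ref_idx]
    n = len(ref)
    stack = np.zeros(n)
    for tr in traces:
        # lag of the correlation peak gives tr's shift relative to ref
        lag = np.argmax(np.correlate(tr, ref, mode="full")) - (n - 1)
        stack += np.roll(tr, -lag)
    return stack / len(traces)

# Synthetic family: the same tapered wavelet, shifted and noisy.
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * np.arange(200) / 25) * np.hanning(200)
traces = [np.roll(signal, s) + 0.4 * rng.standard_normal(200)
          for s in (0, 3, -4, 7, 2, -6, 5, 1)]
stacked = align_and_stack(traces)
```

    Real tremor stacking must also handle clipped traces and cycle-skipping, which this sketch ignores.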

  5. Locating narrow bipolar events with single-station measurement of low-frequency magnetic fields

    NASA Astrophysics Data System (ADS)

    Zhang, Hongbo; Lu, Gaopeng; Qie, Xiushu; Jiang, Rubin; Fan, Yanfeng; Tian, Ye; Sun, Zhuling; Liu, Mingyuan; Wang, Zhichao; Liu, Dongxia; Feng, Guili

    2016-06-01

    We developed a method to locate narrow bipolar events (NBEs) based on single-station measurement of low-frequency (LF, 40-500 kHz) magnetic fields. Direction finding with a two-axis magnetic sensor provides the azimuth of NBEs relative to the measurement site; the ionospheric reflection pairs in the lightning sferics are used to determine the range and height. We applied this method to determine the three-dimensional (3D) locations of 1475 NBEs with magnetic signals recorded during the SHandong Artificially Triggered Lightning Experiment (SHATLE) in the summer of 2013. The NBE detections are evaluated on a storm basis by comparison with radar observations of reflectivity and lightning data from the World Wide Lightning Location Network (WWLLN) for two mesoscale convective systems (MCSs) of different sizes. As revealed by previous studies, NBEs are predominantly produced in convective regions with relatively strong radar echo (composite reflectivity ≥30 dBZ), although not all convective regions with high reflectivity and active lightning production favor NBEs. The NBEs located by the single-station magnetic method also exhibit the distinct segregation in altitude between positive and negative NBEs: positive NBEs are mainly produced between 7 km and 15 km, while negative NBEs are predominantly produced above 14 km. Overall, the comparisons show that the single-station magnetic method can locate NBEs with good reliability, although the accuracy of the 3D locations remains to be evaluated against the traditional multi-station method based on the time-of-arrival technique. This method can be applied to track the motion of storm convection within 800 km, especially when storms move out over the ocean beyond the detection range (typically <400 km) of meteorological radars, making it possible to study NBEs in oceanic thunderstorms for which location with multiple ground-based stations is usually not feasible.
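    A hedged sketch of the two geometric ingredients: a bearing from the amplitudes of two orthogonal horizontal magnetic loops (sign conventions assumed), and range from the delay between the ground wave and the one-hop ionospheric reflection, assuming a flat Earth, a near-ground source, and a known reflection height. The actual method also inverts the source height from multiple reflection pairs, which this sketch omits.

```python
import math

C_KM_S = 3.0e5  # speed of light, km/s

def bearing_from_loops(b_ns, b_ew):
    """Source bearing (degrees east of north) from the peak amplitudes of
    two orthogonal horizontal magnetic loops; sign/rotation conventions
    for the azimuthal field of a vertical discharge are assumed."""
    return math.degrees(math.atan2(b_ew, b_ns)) % 360.0

def range_from_skywave_delay(dt_s, iono_height_km=85.0):
    """Ground distance d (km) from the delay dt (s) between ground wave and
    one-hop sky wave, for a near-ground source and reflection height H:
    sqrt(d^2 + 4H^2) - d = c*dt  =>  d = (4H^2 - (c*dt)^2) / (2*c*dt)."""
    cdt = C_KM_S * dt_s
    return (4.0 * iono_height_km**2 - cdt**2) / (2.0 * cdt)

# Demo: a source 300 km away under an assumed 85 km reflection height.
d_true, h_iono = 300.0, 85.0
dt = (math.sqrt(d_true**2 + 4 * h_iono**2) - d_true) / C_KM_S
d_est = range_from_skywave_delay(dt, h_iono)
```

    The closed form follows from squaring the path-difference equation; it degrades at short ranges where the flat-Earth, ground-source assumptions break down.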

  6. A data-based model to locate mass movements triggered by seismic events in Sichuan, China.

    PubMed

    de Souza, Fabio Teodoro

    2014-01-01

    Earthquakes affect the entire world and have catastrophic consequences. On May 12, 2008, an earthquake of magnitude 7.9 on the Richter scale occurred in the Wenchuan area of Sichuan province in China. This event, together with subsequent aftershocks, caused many avalanches, landslides, debris flows, collapses, and quake lakes and induced numerous unstable slopes. This work proposes a methodology that uses a data mining approach and geographic information systems to predict these mass movements based on their association with the main and aftershock epicenters, geologic faults, riverbeds, and topography. A dataset comprising 3,883 mass movements is analyzed, and some models to predict the location of these mass movements are developed. These predictive models could be used by the Chinese authorities as an important tool for identifying risk areas and rescuing survivors during similar events in the future. PMID:24085622

  8. On the violation of causal, emotional, and locative inferences: An event-related potentials study.

    PubMed

    Rodríguez-Gómez, Pablo; Sánchez-Carmona, Alberto; Smith, Cybelle; Pozo, Miguel A; Hinojosa, José A; Moreno, Eva M

    2016-07-01

    Previous event-related potential studies have demonstrated the online generation of inferences during reading-for-comprehension tasks. The present study contrasted the brainwave patterns of activity elicited by the fulfilment or violation of various types of inferences (causal, emotional, locative). Relative to inference-congruent sentence endings, a typical centro-parietal N400 was elicited by the violation of causal and locative inferences. This N400 effect was initially absent for emotional inferences, most likely due to their lower cloze probability. Between 500 and 750 ms, a larger frontal positivity (pN400FP) was elicited by inference-incongruent sentence endings in the causal condition. In emotional sentences, both inference-congruent and -incongruent endings elicited this frontally distributed late positivity. For the violation of locative inferences, the larger positivity was only marginally significant over left posterior scalp locations. Thus, not all inference-eliciting sentences evoked a similar pattern of ERP responses. We interpret and discuss our results in line with recent views on what the N400, the P600 and the pN400FP brainwave potentials index.

  9. Nuclear event time histories and computed site transfer functions for locations in the Los Angeles region

    USGS Publications Warehouse

    Rogers, A.M.; Covington, P.A.; Park, R.B.; Borcherdt, R.D.; Perkins, D.M.

    1980-01-01

    This report presents a collection of Nevada Test Site (NTS) nuclear explosion recordings obtained at sites in the greater Los Angeles, Calif., region. The report includes ground velocity time histories as well as derived site transfer functions. These data have been collected as part of a study to evaluate the validity of using low-level ground motions to predict the frequency-dependent response of a site during an earthquake. For this study, 19 nuclear events were recorded at 98 separate locations. Some of these sites recorded more than one of the nuclear explosions, and, consequently, there are a total of 159 three-component station records. The locations of all the recording sites are shown in figures 1–5, and the station coordinates and abbreviations are given in table 1. The station addresses are listed in table 2, and the nuclear explosions that were recorded are listed in table 3. The recording sites were chosen on the basis of three criteria: (1) that the underlying geological conditions were representative of conditions over significant areas of the region, (2) that the site was the location of a strong-motion recording of the 1971 San Fernando earthquake, or (3) that more complete geographical coverage was required in that location.
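    A site transfer function of the kind tabulated in such studies can be sketched as a smoothed spectral ratio between a site recording and a reference recording of the same event. This is a generic spectral-ratio sketch, not the report's exact recipe.

```python
import numpy as np

def site_transfer_function(site_rec, ref_rec, dt, smooth=5):
    """Empirical site transfer function: smoothed ratio of the amplitude
    spectrum at a soil site to that at a nearby reference (rock) site,
    both recording the same event sampled at interval dt (s)."""
    n = len(site_rec)
    freqs = np.fft.rfftfreq(n, dt)
    ratio = (np.abs(np.fft.rfft(site_rec)) /
             (np.abs(np.fft.rfft(ref_rec)) + 1e-12))  # guard zero bins
    kernel = np.ones(smooth) / smooth                 # boxcar smoothing
    return freqs, np.convolve(ratio, kernel, mode="same")

# Demo: a "site" that uniformly doubles the reference motion.
rng = np.random.default_rng(2)
ref = rng.standard_normal(1024)
site = 2.0 * ref
freqs, tf = site_transfer_function(site, ref, dt=0.01)
```

    In practice the ratio is averaged over several events and spectral windows before being interpreted as site amplification.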

  10. An Event-related Potential Study on the Interaction between Lighting Level and Stimulus Spatial Location

    PubMed Central

    Carretié, Luis; Ruiz-Padial, Elisabeth; Mendoza, María T.

    2015-01-01

    Due to heterogeneous photoreceptor distribution, spatial location of stimulation is crucial to study visual brain activity in different light environments. This unexplored issue was studied through occipital event-related potentials (ERPs) recorded from 40 participants in response to discrete visual stimuli presented at different locations and in two environmental light conditions, low mesopic (L, 0.03 lux) and high mesopic (H, 6.5 lux), characterized by a differential photoreceptor activity balance: rod > cone and rod < cone, respectively. Stimuli, which were exactly the same in L and H, consisted of squares presented at fixation, at the vertical periphery (above or below fixation) or at the horizontal periphery (left or right). Analyses showed that occipital ERPs presented important L vs. H differences in the 100 to 450 ms window, which were significantly modulated by spatial location of stimulation: differences were greater in response to peripheral stimuli than to stimuli presented at fixation. Moreover, in the former case, significance of L vs. H differences was even stronger in response to stimuli presented at the horizontal than at the vertical periphery. These low vs. high mesopic differences may be explained by photoreceptor activation and their retinal distribution, and confirm that ERPs discriminate between rod– and cone-originated visual processing. PMID:26635588

  11. Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain

    NASA Astrophysics Data System (ADS)

    Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.

    Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26° and a mean angular difference from specialist delineations (accuracy) of 1.64°.

  12. Location of seismic events and eruptive fissures on the Piton de la Fournaise volcano using seismic amplitudes

    USGS Publications Warehouse

    Battaglia, J.; Aki, K.

    2003-01-01

    We present a method for locating the source of seismic events on Piton de la Fournaise. The method is based on seismic amplitudes corrected for station site effects using coda site amplification factors. Once corrected, the spatial distribution of amplitudes shows smooth and simple contours for many types of events, including rockfalls, long-period events and eruption tremor. On the basis of the simplicity of these distributions we develop inversion methods for locating their origins. To achieve this, the decrease of the amplitude as a function of the distance to the source is approximated by the decay either of surface or body waves in a homogeneous medium. The method is effective for locating rockfalls, long-period events, and eruption tremor sources. The sources of eruption tremor are usually found to be located at shallow depth and close to the eruptive fissures. Because of this, our method is a useful tool for locating fissures at the beginning of eruptions.
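    The amplitude-based inversion can be sketched as a grid search: for body waves in a homogeneous medium the site-corrected amplitude decays as A = A0 / r (A0 / sqrt(r) for surface waves), so at the true source the quantity log A + p log r is the same constant (log A0) at every station. A minimal sketch with synthetic 2-D data, not the paper's implementation:

```python
import numpy as np

def locate_by_amplitude(stations, amps, grid, exponent=1.0):
    """Grid-search source location from station amplitudes assuming
    A = A0 / r**exponent (1.0 for body waves, 0.5 for surface waves).
    The misfit is the spread of log A0 implied by each station."""
    log_a = np.log(amps)
    best, best_misfit = None, np.inf
    for src in grid:
        r = np.maximum(np.linalg.norm(stations - src, axis=1), 1e-6)
        pred = log_a + exponent * np.log(r)  # should be constant = log A0
        misfit = np.var(pred)
        if misfit < best_misfit:
            best, best_misfit = src, misfit
    return best, best_misfit

# Synthetic demo: 5 stations, a source at (4, 6), body-wave decay.
stations = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., -3.]])
true_src = np.array([4.0, 6.0])
amps = 5.0 / np.linalg.norm(stations - true_src, axis=1)
grid = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)
best, misfit = locate_by_amplitude(stations, amps, grid)
```

    Working in log amplitude makes the unknown source strength A0 drop out of the misfit, which is what allows location without knowing the source size.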

  13. Location of multi-phase volcanic events from a temporary dense seismic array at White Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Jolly, Arthur; Lokmer, Ivan; Thun, Johannes; Salichon, Jerome; Fournier, Nico; Fry, Bill

    2016-04-01

    The August 2012 to October 2013 White Island eruption sequence included an increase in gas flux and RSAM seismic tremor beginning in late 2011. Prior to this unrest, a small swarm of 25 events was observed on 19-21 August 2011. The events were captured on a temporary dense seismic array of 12 broadband sensors deployed between June and November 2011. Each event comprised coupled earthquakes having distinct high-frequency (HF, <1 s), long-period (LP, 2-4 s) and very-long-period (VLP, 10-30 s) pulses. For each coupled HF, LP and VLP event, we compute the source locations, origin times and related uncertainties by applying standard arrival-time location to the HF events and waveform back-projection to the LP and VLP events. Preliminary results suggest that the events are centred beneath the active vent at depths generally less than 2 km. The HF earthquakes have diffuse locations (<2 km), while LP events are constrained to generally shallower source depths (<1 km) and VLP events have slightly deeper source locations (1 to 2 km). The arrival-time locations have been constrained using a realistic shallow velocity model, while the waveform back-projection locations have been constrained by thorough synthetic testing. Emergent onsets for LP and VLP sources make analysis of the absolute origin times problematic, but waveform matching of VLP to LP components suggests relative time variations of less than a second or two. We will discuss the location and relative timing of the three event types in the context of possible hydrothermal and magmatic processes at White Island volcano.

  14. Acoustic monitoring of laboratory faults: locating the origin of unstable slip events

    NASA Astrophysics Data System (ADS)

    Korkolis, Evangelos; Niemeijer, André; Spiers, Christopher

    2015-04-01

    Over the past several decades, much work has been done on studying the frictional properties of fault gouges at earthquake nucleation velocities. In addition, post-experiment microstructural analyses have been performed in an attempt to link microphysical mechanisms to the observed mechanical data. However, all such observations are necessarily post-mortem, and it is thus difficult to directly link transients to microstructural characteristics. We are developing an acoustic monitoring system to be used in sliding experiments with a ring shear apparatus. The goal is to locate acoustic emission sources in sheared granular assemblages and link them to processes acting on the microstructures responsible for the frictional stability of the simulated fault gouge. The results will be used to develop and constrain microphysical models that relate these processes to empirical friction laws, such as rate- and state-dependent friction. The acoustic monitoring setup comprises an array of 16 piezoelectric sensors installed on the top and bottom sides of an annular sample, at 45-degree intervals. Acoustic emissions associated with slip events can be recorded at sampling rates of up to 50 MHz in triggered mode. Initial experiments on 0.1 to 0.2 mm and 0.4 to 0.5 mm diameter glass beads, at 1 to 5 MPa normal stress and 1 to 30 μm/s load point velocity, have been conducted to estimate the sensitivity of the sensor array. Preliminary results reveal that the intensity of the audible signal is not necessarily proportional to the magnitude of the associated stress drop for constant loading conditions, and that acoustic emissions precede slip events by a small amount of time, on the order of a few milliseconds. Currently, our efforts are focused on developing a suitable source location algorithm with the aim to identify differences in the mode of (unstable) sliding for different types of materials. This will help to identify the micromechanical mechanisms operating
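    A standard starting point for a source-location algorithm of this kind is a linearized (Geiger-type) least-squares inversion of arrival times, sketched here in 2-D with synthetic data; it is a generic sketch, not the authors' algorithm.

```python
import numpy as np

def geiger_locate(sensors, t_obs, v, x0, iters=20):
    """Geiger-style iterative least squares for source position and origin
    time from arrival times t_i = t0 + |x - s_i| / v (homogeneous medium)."""
    x = np.array(x0, dtype=float)
    t0 = 0.0
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)
        res = t_obs - (t0 + d / v)  # travel-time residuals
        # Jacobian: dt_i/dx = (x - s_i) / (v * d_i), dt_i/dt0 = 1
        J = np.column_stack([(x - sensors) / (v * d[:, None]),
                             np.ones(len(d))])
        dm, *_ = np.linalg.lstsq(J, res, rcond=None)
        x += dm[:2]
        t0 += dm[2]
    return x, t0

# Synthetic demo: 8 sensors on a ring (2-D analogue of the annular array).
angles = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
sensors = np.column_stack([10 * np.cos(angles), 10 * np.sin(angles)])
true_src, true_t0, v = np.array([2.0, -1.0]), 0.05, 3.0
t_obs = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / v
x_est, t0_est = geiger_locate(sensors, t_obs, v, x0=(0.0, 0.0))
```

    Real acoustic-emission location must add picking uncertainty, anisotropic velocities and sensor coupling effects on top of this idealized geometry.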

  15. Applications of Location Similarity Measures and Conceptual Spaces to Event Coreference and Classification

    ERIC Educational Resources Information Center

    McConky, Katie Theresa

    2013-01-01

    This work covers topics in event coreference and event classification from spoken conversation. Event coreference is the process of identifying descriptions of the same event across sentences, documents, or structured databases. Existing event coreference work focuses on sentence similarity models or feature based similarity models requiring slot…

  16. Helicopter Based Magnetic Detection of Wells at the Teapot Dome (Naval Petroleum Reserve No. 3) Oilfield: Rapid and Accurate Geophysical Algorithms for Locating Wells

    NASA Astrophysics Data System (ADS)

    Harbert, W.; Hammack, R.; Veloski, G.; Hodge, G.

    2011-12-01

    In this study, airborne magnetic data were collected by Fugro Airborne Surveys from a helicopter platform (Figure 1) using the Midas II system over the 39 km² NPR3 (Naval Petroleum Reserve No. 3) oilfield in east-central Wyoming. The Midas II system employs two Scintrex CS-2 cesium vapor magnetometers on opposite ends of a transversely mounted, 13.4-m long horizontal boom located amidships (Fig. 1). Each magnetic sensor had an in-flight sensitivity of 0.01 nT. Real-time compensation of the magnetic data for magnetic noise induced by maneuvering of the aircraft was accomplished using two fluxgate magnetometers mounted just inboard of the cesium sensors. The total area surveyed was 40.5 km² near Casper, Wyoming. The purpose of the survey was to accurately locate wells that had been drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood to enhance oil recovery, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells that are missing from the well database and to provide accurate locations for all wells. The well-location method combined an input dataset (for example, leveled total magnetic field reduced to the pole) with its first and second horizontal spatial derivatives, which were analyzed using focal statistics and finally combined using a fuzzy combination operation. Analytic signal and the Shi and Butt (2004) ZS attribute were also analyzed using this algorithm. A parameter could be adjusted to determine sensitivity. Depending on the input dataset, 88% to 100% of the wells were located, with typical values being 95% to 99% for the NPR3 field site.

  17. Location of EMIC Wave Events Relative to the Plasmapause: Van Allen Probes Observations

    NASA Astrophysics Data System (ADS)

    Tetrick, S.; Engebretson, M. J.; Posch, J. L.; Kletzing, C.; Smith, C. W.; Wygant, J. R.; Gkioulidou, M.; Reeves, G. D.; Fennell, J. F.

    2015-12-01

    Many early theoretical studies of electromagnetic ion cyclotron (EMIC) waves generated in Earth's magnetosphere predicted that the equatorial plasmapause (PP) would be a preferred location for their generation. However, several large statistical studies in the past two decades, most notably Fraser and Nguyen [2001], have provided little support for this location. In this study we present a survey of the most intense EMIC waves observed by the EMFISIS fluxgate magnetometer on the Van Allen Probes-A spacecraft (with apogee at 5.9 RE) from its launch through the end of 2014, and have compared their location with simultaneous electron density data obtained by the EFW electric field instrument and ring current ion flux data obtained by the HOPE and RBSPICE instruments. We show distributions of these waves as a function of distance inside or outside the PP as a function of local time sector, frequency band (H+, He+, or both), and timing relative to magnetic storms and substorms. Most EMIC waves in this data set occurred within 1 RE of the PP in all local time sectors, but very few were limited to ± 0.1 RE, and most of these occurred in the 06-12 MLT sector during non-storm conditions. The majority of storm main phase waves in the dusk sector occurred inside the PP. He+ band waves dominated at most local times inside the PP, and H+ band waves were never observed there. Although the presence of elevated fluxes of ring current protons was common to all events, the configuration of lower energy ion populations varied as a function of geomagnetic activity and storm phase.

  18. A global 3D P-velocity model of the Earth's crust and mantle for improved event location.

    SciTech Connect

    Ballard, Sanford; Encarnacao, Andre Villanova; Begnaud, Michael A.; Rowe, Charlotte A.; Lewis, Jennifer E.; Young, Christopher John; Chang, Marcus C.; Hipp, James Richard

    2010-04-01

    To test the hypothesis that high-quality 3D Earth models will produce seismic event locations that are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. The reduction in the total number of ray paths is > 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last, we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both the ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ~400 processors.
Resolution of our model

  19. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events.

    PubMed

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805
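    The sliding-window scoring that a code-based predictor of this kind performs can be sketched as follows. The nucleotide-preference table is a simplified, illustrative stand-in for Barkan et al.'s combinatorial code, and the function names are hypothetical; this is not aPPRove's model.

```python
import math

# Illustrative nucleotide preferences keyed by the residues at the two
# code-determining positions of each repeat (a stand-in, not the real code).
CODE = {
    ("T", "D"): {"G": 0.7, "A": 0.1, "C": 0.1, "U": 0.1},
    ("T", "N"): {"A": 0.7, "G": 0.1, "C": 0.1, "U": 0.1},
    ("S", "N"): {"C": 0.4, "U": 0.4, "A": 0.1, "G": 0.1},
    ("N", "D"): {"U": 0.7, "A": 0.1, "C": 0.1, "G": 0.1},
}

def score_site(repeats, rna):
    """Log-odds of the repeat code emitting `rna` (one base per repeat)
    versus a uniform background of 0.25 per base."""
    assert len(repeats) == len(rna)
    return sum(math.log(CODE[combo].get(base, 0.05) / 0.25)
               for combo, base in zip(repeats, rna))

def best_site(repeats, transcript):
    """Slide the repeat code along a transcript; return (score, offset)
    of the best-scoring candidate binding site."""
    w = len(repeats)
    return max((score_site(repeats, transcript[i:i + w]), i)
               for i in range(len(transcript) - w + 1))

# Demo: three repeats preferring G, A, U scan a short transcript.
repeats = [("T", "D"), ("T", "N"), ("N", "D")]
score, pos = best_site(repeats, "CCGAUCC")
```

    A positive log-odds score marks windows that fit the code better than chance; a statistical significance for the alignment would be computed on top of this.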

  1. Accurate memory for object location by individuals with intellectual disability: absolute spatial tagging instead of configural processing?

    PubMed

    Giuliani, Fabienne; Favrod, Jérôme; Grasset, François; Schenk, Françoise

    2011-01-01

    Using head-mounted eye tracker material, we assessed spatial recognition abilities (e.g., reaction to object permutation, removal or replacement with a new object) in participants with intellectual disabilities. The "Intellectual Disabilities (ID)" group (n = 40) obtained a score totalling a 93.7% success rate, whereas the "Normal Control" group (n = 40) scored 55.6% and took longer to fix their attention on the displaced object. The participants with an intellectual disability thus had a more accurate perception of spatial changes than controls. Interestingly, the ID participants were more reactive to object displacement than to removal of the object. In the specific test of novelty detection, however, the scores were similar, the two groups approaching 100% detection. Analysis of the strategies expressed by the ID group revealed that they engaged in more systematic object checking and were more sensitive than the control group to changes in the structure of the environment. Indeed, during the familiarisation phase, the "ID" group explored the collection of objects more slowly, and fixed their gaze for a longer time upon a significantly lower number of fixation points during visual sweeping. PMID:21353464

  2. Evolution and location of lightning events that cause terrestrial gamma ray flashes (TGFs)

    NASA Astrophysics Data System (ADS)

    Marshall, T.; Stolzenburg, M.; Karunarathne, S.; Lu, G.; Cummer, S. A.

    2012-12-01

    TGFs often occur during the initial breakdown (IB) stage of intracloud lightning flashes; in particular, they seem to be related to the bipolar IB pulses seen by E-change sensors. Each IB pulse usually has 1-3 fast (2-5 μs) unipolar pulses "superimposed on the initial half cycle" of a slower (~40 μs) bipolar pulse [Weidman and Krider, JGR 1979]. During the summer of 2011 we collected lightning E-change data at 10 sites covering an area of about 70 km × 100 km at the NASA/Kennedy Space Center (KSC). We also use ELF and LF data recorded at Duke University to identify lightning flashes in the KSC area that exhibit TGF-like characteristics. Using a time-of-arrival technique with our E-change data, we find preliminary indications that each successive unipolar pulse in an IB bipolar pulse is located a few hundred meters higher in altitude than the previous one. The E-change data during the initial half cycle of the bipolar pulse are consistent with upward propagation of a negative streamer. We will present these data and discuss ways in which the events might produce gamma rays.
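    Though the abstract gives no implementation details, the time-of-arrival idea it mentions can be sketched as a grid search in which the unknown emission time is eliminated by demeaning the observed-minus-predicted arrival times. The function name, sensor geometry and propagation speed below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def toa_locate(sensors, arrival_times, grid, c=3.0e8):
    """Grid-search time-of-arrival location sketch: for each candidate
    source position, the unknown common emission time is removed by
    demeaning, and the candidate minimizing the residual between
    observed and predicted relative arrival times is returned."""
    t = np.asarray(arrival_times, dtype=float)
    best = None
    for p in grid:
        pred = np.linalg.norm(sensors - p, axis=1) / c
        resid = (t - pred) - np.mean(t - pred)  # remove emission time
        cost = float(np.sum(resid ** 2))
        if best is None or cost < best[0]:
            best = (cost, p)
    return best[1]
```

In practice the candidate grid would be refined around the best cell, and sensor timing errors weighted; none of that is shown here.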

  3. Seismic monitoring of EGS tests at the Coso Geothermal area, California, using accurate MEQ locations and full moment tensors

    SciTech Connect

    Foulger, G.R.; Julian, B.R.; Monastero, F.

    2008-04-01

    We studied high-resolution relative locations and full moment tensors of microearthquakes (MEQs) occurring before, during and following Enhanced Geothermal Systems (EGS) experiments in two wells at the Coso geothermal area, California. The objective was to map new fractures, determine the mode and sense of failure, and characterize the stress cycle associated with injection. New software developed for this work combines waveform cross-correlation measurement of arrival times with relative relocation methods, and assesses confidence regions for moment tensors derived using linear-programming methods. For moment tensor determination we also developed a convenient Graphical User Interface (GUI) to streamline the work. We used data from the U.S. Navy’s permanent network of three-component digital borehole seismometers and from 14 portable three-component digital instruments. The latter supplemented the permanent network during injection experiments in well 34A-9 in 2004 and well 34-9RD2 in 2005. In the experiment in well 34A-9, the co-injection earthquakes were more numerous, smaller, more explosive and had more horizontal motion, compared with the pre-injection earthquakes. In the experiment in well 34-9RD2 the relocated hypocenters reveal a well-defined planar structure, 700 m long and 600 m high in the depth range 0.8 to 1.4 km below sea level, striking N 20° E and dipping at 75° to the WNW. The moment tensors show that it corresponds to a mode I (opening) crack. For both wells, the perturbed stress state near the bottom of the well persisted for at least two months following the injection.

  4. A new method to estimate location and slip of simulated rock failure events

    NASA Astrophysics Data System (ADS)

    Heinze, Thomas; Galvan, Boris; Miller, Stephen Andrew

    2015-05-01

    At the laboratory scale, identifying and locating acoustic emissions (AEs) is a common method for short term prediction of failure in geomaterials. Above average AE typically precedes the failure process and is easily measured. At larger scales, increase in micro-seismic activity sometimes precedes large earthquakes (e.g. Tohoku, L'Aquilla, oceanic transforms), and can be used to assess seismic risk. The goal of this work is to develop a methodology and numerical algorithms for extracting a measurable quantity analogous to AE arising from the solution of equations governing rock deformation. Since there is no physical property to quantify AE derivable from the governing equations, an appropriate rock-mechanical analog needs to be found. In this work, we identify a general behavior of the AE generation process preceding rock failure. This behavior includes arbitrary localization of low magnitude events during pre-failure stage, followed by increase in number and amplitude, and finally localization around the incipient failure plane during macroscopic failure. We propose deviatoric strain rate as the numerical analog that mimics this behavior, and develop two different algorithms designed to detect rapid increases in deviatoric strain using moving averages. The numerical model solves a fully poro-elasto-plastic continuum model and is coupled to a two-phase flow model. We test our model by comparing simulation results with experimental data of drained compression and of fluid injection experiments. We find for both cases that occurrence and amplitude of our AE analog mimic the observed general behavior of the AE generation process. Our technique can be extended to modeling at the field scale, possibly providing a mechanistic basis for seismic hazard assessment from seismicity that occasionally precedes large earthquakes.
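    The moving-average detection of rapid increases in deviatoric strain rate described above can be sketched as an STA/LTA-style trigger. The function name, window lengths and threshold below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def detect_ae_events(strain_rate, short_win=5, long_win=50, ratio_thresh=3.0):
    """Flag samples where a short-term moving average of the modeled
    deviatoric strain-rate series rises sharply above its long-term
    background level (an STA/LTA-style trigger)."""
    s = np.asarray(strain_rate, dtype=float)

    def moving_avg(x, w):
        # trailing moving average; value i is the mean of the window
        # ending at sample w - 1 + i
        c = np.cumsum(np.insert(x, 0, 0.0))
        return (c[w:] - c[:-w]) / w

    sta = moving_avg(s, short_win)[long_win - short_win:]
    lta = moving_avg(s, long_win)
    ratio = sta / (lta + 1e-12)
    # convert trigger positions back to sample indices
    return np.flatnonzero(ratio > ratio_thresh) + long_win - 1
```

On a synthetic series this flags the onset of a burst while staying quiet on a flat background, mimicking the qualitative pre-failure behavior the abstract describes.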

  5. Locating Microseismic Events Using Fat-Ray Double-Difference Tomography for Monitoring CO2 Injection at the Aneth EOR Field

    NASA Astrophysics Data System (ADS)

    Chen, T.; Huang, L.; Rutledge, J. T.

    2014-12-01

    During CO2 injection, the increase in pore pressure and volume may change the stress distribution in the field and induce microseismic events as brittle failure on small faults or fractures. Accurate locations of these induced microseismic events can help understand the migration of CO2 and stress evolution in the reservoir. A geophone string spanning 800-1700 m in depth was cemented into a monitoring well at the Aneth oil field in Utah in 2007 for monitoring CO2 injection for enhanced oil recovery (EOR). The monitoring continued until 2010. A total of 24 geophone levels recorded induced microseismic events, including 18 levels of three-component geophones and six vertical-component levels, spaced 106.7 m (350 ft) apart to take full advantage of the entire array aperture. We apply a fat-ray double-difference tomography method to microseismic data acquired at the Aneth EOR field. We obtain high-precision locations of microseismic events and improve the velocity structure simultaneously. We demonstrate the improvements by comparing our results with those obtained using the conventional double-difference tomography.

  6. Epicenter Location of Regional Seismic Events Using Love Wave and Rayleigh Wave Ambient Seismic Noise Green's Functions

    NASA Astrophysics Data System (ADS)

    Levshin, A. L.; Barmin, M. P.; Moschetti, M. P.; Mendoza, C.; Ritzwoller, M. H.

    2011-12-01

    We describe a novel method to locate regional seismic events based on exploiting Empirical Green's Functions (EGF) that are produced from ambient seismic noise. Elastic EGFs between pairs of seismic stations are determined by cross-correlating long time-series of ambient noise recorded at the two stations. The EGFs principally contain Rayleigh waves on the vertical-vertical cross-correlations and Love waves on the transverse-transverse cross-correlations. Earlier work (Barmin et al., "Epicentral location based on Rayleigh wave empirical Green's functions from ambient seismic noise", Geophys. J. Int., 2011) showed that group time delays observed on Rayleigh wave EGFs can be exploited to locate moderate-sized earthquakes to within about 1 km using USArray Transportable Array (TA) stations. The principal advantage of the method is that the ambient noise EGFs are affected by lateral variations in structure similarly to the earthquake signals, so the location is largely unbiased by 3-D structure. However, locations based on Rayleigh waves alone may be biased by more than 1 km if the earthquake depth is unknown but lies between 2 km and 7 km. This presentation is motivated by the fact that group time delays for Love waves are much less affected by earthquake depth than Rayleigh waves; thus exploitation of Love wave EGFs may reduce location bias caused by uncertainty in event depth. The advantage of Love waves for locating seismic events, however, is mitigated by the fact that Love wave EGFs have a smaller SNR than Rayleigh waves. Here, we test the use of Love and Rayleigh wave EGFs between 5 and 15 s period to locate seismic events based on the USArray TA in the western US. We focus on locating aftershocks of the 2008 M 6.0 Wells earthquake, mining blasts in Wyoming and Montana, and small earthquakes near Norman, OK and Dallas, TX, some of which may be triggered by hydrofracking or injection wells.

  7. 76 FR 31843 - Safety Zone; Temporary Change to Enforcement Location of Recurring Fireworks Display Event...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-02

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Temporary Change to Enforcement Location of... final rule. SUMMARY: The Coast Guard is temporarily changing the enforcement location of a safety zone... location on land. The safety zone is necessary to provide for the safety of life on navigable waters...

  8. The magnetic network location of explosive events observed in the solar transition region

    NASA Technical Reports Server (NTRS)

    Porter, J. G.; Dere, K. P.

    1991-01-01

    Compact short-lived explosive events have been observed in solar transition region lines with the High-Resolution Telescope and Spectrograph (HRTS) flown by the Naval Research Laboratory on a series of rockets and on Spacelab 2. Data from Spacelab 2 are coaligned with a simultaneous magnetogram and near-simultaneous He I 10,830 Å spectroheliogram obtained at the National Solar Observatory at Kitt Peak. The comparison shows that the explosive events occur in the solar magnetic network lanes at the boundaries of supergranular convective cells. However, the events occur away from the larger concentrations of magnetic flux in the network, in contradiction to the observed tendency of the more energetic solar phenomena to be associated with the stronger magnetic fields.

  9. The prognostic value of injury severity, location of event, and age at injury in pediatric traumatic head injuries

    PubMed Central

    Halldorsson, Jonas G; Flekkoy, Kjell M; Arnkelsson, Gudmundur B; Tomasson, Kristinn; Gudmundsson, Kristinn R; Arnarson, Eirikur Orn

    2008-01-01

    Aims: To estimate the prognostic value of injury severity, location of event, and demographic parameters, for symptoms of pediatric traumatic head injury (THI) 4 years later. Methods: Data were collected prospectively from Reykjavik City Hospital on all patients age 0–19 years, diagnosed with THI (n = 408) during one year. Information was collected on patient demographics, location of traumatic event, cause of injury, injury severity, and ICD-9 diagnosis. Injury severity was estimated according to the Head Injury Severity Scale (HISS). Four years post-injury, a questionnaire on late symptoms attributed to the THI was sent. Results: Symptoms reported were more common among patients with moderate/severe THI than among others (p < 0.001). The event location had prognostic value (p < 0.05). Overall, 72% of patients with moderate/severe motor vehicle-related THI reported symptoms. There was a curvilinear age effect (p < 0.05). Symptoms were least frequent in the youngest age group, 0–4 years, and most frequent in the age group 5–14 years. Gender and urban/rural residence were not significantly related to symptoms. Conclusions: Motor vehicle-related moderate/severe THI resulted in a high rate of late symptoms. Location had a prognostic value. Patients with motor vehicle-related THI need special consideration regardless of injury severity. PMID:18728737

  10. Characterization of Source and Wave Propagation Effects of Volcano-seismic Events and Tremor Using the Amplitude Source Location Method

    NASA Astrophysics Data System (ADS)

    Kumagai, H.; Londono, J. M.; López, C. M.; Ruiz, M. C.; Mothes, P. A.; Maeda, Y.

    2015-12-01

    We propose application of the amplitude source location (ASL) method to characterize source and wave propagation effects of volcano-seismic events and tremor observed at different volcanoes. We used this method to estimate the source location and source amplitude from high-frequency (5-10 Hz) seismic amplitudes under the assumption of isotropic S-wave radiation. We estimated the cumulative source amplitude (Is) as the offset value of the time-integrated envelope of the vertical seismogram corrected for geometrical spreading and medium attenuation in the 5-10 Hz band. We studied these parameters of tremor signals associated with eruptions and explosion events at Tungurahua volcano, Ecuador; long-period (LP) events at Cotopaxi volcano, Ecuador; and LP events at Nevado del Ruiz volcano, Colombia. We identified two types of eruption tremor at Tungurahua; noise-like inharmonic waveforms and harmonic oscillatory signals. We found that Is increased linearly with increasing source amplitude for explosion events and LP events, and that Is increased exponentially with increasing source amplitude for inharmonic eruption tremor signals. The source characteristics of harmonic eruption tremor signals differed from those of inharmonic tremor signals. The Is values we estimated for inharmonic eruption tremor were consistent with previous estimates of volumes of tephra fallout. The linear relationship between the source amplitude and Is for LP events can be explained by the wave propagation effects in the diffusion model for multiple scattering assuming a diffusion coefficient of 10^5 m^2/s and an intrinsic Q factor of around 50. The resultant mean free path is approximately 100 m. Our results suggest that Cotopaxi and Nevado del Ruiz volcanoes have similar highly scattering and attenuating structures. Our approach provides a systematic way to compare the size of volcano-seismic signals observed at different volcanoes. The scaling relations among source parameters that we identified …
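    A minimal sketch of the ASL idea described above: model each station's high-frequency amplitude as a source amplitude reduced by geometrical spreading and intrinsic attenuation, then grid-search for the best-fitting source. The attenuation model, parameter values (center frequency f, Q, S-wave speed beta) and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def asl_grid_search(stations, amps, grid, f=7.5, Q=50.0, beta=2000.0):
    """Amplitude source location sketch.  Observed amplitude at station
    i is modeled as A_i = A0 * exp(-B*r_i) / r_i, with B = pi*f/(Q*beta)
    for intrinsic attenuation and 1/r for body-wave geometrical
    spreading.  For each trial source on `grid`, A0 is fit by least
    squares and the lowest-residual trial is returned."""
    B = np.pi * f / (Q * beta)
    best = None
    for src in grid:
        r = np.linalg.norm(stations - src, axis=1)
        g = np.exp(-B * r) / r               # predicted amplitude per unit A0
        a0 = np.dot(amps, g) / np.dot(g, g)  # least-squares source amplitude
        resid = float(np.sum((amps - a0 * g) ** 2))
        if best is None or resid < best[0]:
            best = (resid, src, a0)
    return best[1], best[2]
```

The returned source amplitude plays the role of the abstract's station-corrected source term; real applications would also apply station site corrections.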

  11. Encoding negative events under stress: high subjective arousal is related to accurate emotional memory despite misinformation exposure.

    PubMed

    Hoscheidt, Siobhan M; LaBar, Kevin S; Ryan, Lee; Jacobs, W Jake; Nadel, Lynn

    2014-07-01

    Stress at encoding affects memory processes, typically enhancing, or preserving, memory for emotional information. These effects have interesting implications for eyewitness accounts, which in real-world contexts typically involve encoding an aversive event under stressful conditions followed by potential exposure to misinformation. The present study investigated memory for a negative event encoded under stress and subsequent misinformation endorsement. Healthy young adults participated in a between-groups design with three experimental sessions conducted 48 h apart. Session one consisted of a psychosocial stress induction (or control task) followed by incidental encoding of a negative slideshow. During session two, participants were asked questions about the slideshow, during which a random subgroup was exposed to misinformation. Memory for the slideshow was tested during the third session. Assessment of memory accuracy across stress and no-stress groups revealed that stress induced just prior to encoding led to significantly better memory for the slideshow overall. The classic misinformation effect was also observed - participants exposed to misinformation were significantly more likely to endorse false information during memory testing. In the stress group, however, memory accuracy and misinformation effects were moderated by arousal experienced during encoding of the negative event. Misinformed-stress group participants who reported that the negative slideshow elicited high arousal during encoding were less likely to endorse misinformation for the most aversive phase of the story. Furthermore, these individuals showed better memory for components of the aversive slideshow phase that had been directly misinformed. Results from the current study provide evidence that stress and high subjective arousal elicited by a negative event act concomitantly during encoding to enhance emotional memory such that the most aversive aspects of the event are well remembered and …

  12. Design and Test of an Event Detector and Locator for the ReflectoActive Seals System

    SciTech Connect

    Stinson, Brad J

    2006-06-01

    The purpose of this work was to research, design, develop and test a novel instrument for detecting fiber optic loop continuity and spatially locating fiber optic breaches. The work is for an active seal system called ReflectoActive{trademark} Seals whose purpose is to provide real time container tamper indication. A Field Programmable Gate Array was used to implement a loop continuity detector and a spatial breach locator based on a high acquisition speed single photon counting optical time domain reflectometer. Communication and other control features were added in order to create a usable instrument that met defined requirements. A host graphical user interface was developed to illustrate system use and performance. The resulting device meets performance specifications by exhibiting a dynamic range of 27dB and a spatial resolution of 1.5 ft. The communication scheme used expands installation options and allows the device to communicate to a central host via existing Local Area Networks and/or the Internet.
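    The breach-location principle behind such a photon-counting OTDR reduces to converting the measured round-trip time into distance along the fiber, d = c·Δt/(2·n_g). A minimal sketch, assuming a typical silica-fiber group index (the instrument's actual calibration is not given in the abstract):

```python
# Speed of light in vacuum (m/s) and an assumed silica-fiber group index.
C = 299_792_458.0
N_GROUP = 1.468  # illustrative value, not from the paper

def breach_distance_m(round_trip_s):
    """Distance to a reflective breach from the measured round-trip
    time of a photon: light travels out and back, at c / n_g."""
    return C * round_trip_s / (2.0 * N_GROUP)

def timing_resolution_for(spatial_res_m):
    """Timer resolution needed to resolve breaches spatial_res_m apart
    (the inverse of the relation above)."""
    return 2.0 * N_GROUP * spatial_res_m / C
```

With these numbers, the reported 1.5 ft (~0.46 m) spatial resolution corresponds to timing resolution on the order of a few nanoseconds, which is consistent with single-photon counting electronics.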

  13. Pyroclastic density current hazard maps at Campi Flegrei caldera (Italy): the effects of event scale, vent location and time forecasts.

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Andrea; Neri, Augusto; Esposti Ongaro, Tomaso; Isaia, Roberto; Flandoli, Franco; Bisson, Marina

    2016-04-01

    Today hundreds of thousands of people live inside the Campi Flegrei caldera (Italy) and in the adjacent part of the city of Naples, making a future eruption of this volcano an event with huge consequences. Very high risks are associated with the occurrence of pyroclastic density currents (PDCs). Mapping of background or long-term PDC hazard in the area is a great challenge due to the unknown eruption time, scale and vent location of the next event as well as the complex dynamics of the flow over the caldera topography. This is additionally complicated by the remarkable epistemic uncertainty on the eruptive record, affecting the time of past events, the location of vents as well as the PDCs' areal extent estimates. First probability maps of PDC invasion were produced combining a vent-opening probability map, statistical estimates concerning the eruptive scales and a Cox-type temporal model including self-excitement effects, based on the eruptive record of the last 15 kyr. Maps were produced by using a Monte Carlo approach and adopting a simplified inundation model based on the "box model" integral approximation tested with 2D transient numerical simulations of flow dynamics. In this presentation we illustrate the independent effects of eruption scale, vent location and time of forecast of the next event. Specific focus was given to the remarkable differences between the eastern and western sectors of the caldera and their effects on the hazard maps. The analysis allowed us to identify areas with elevated probabilities of flow invasion as a function of the diverse assumptions made. By quantifying some sources of uncertainty in the system, we were also able to provide mean and percentile maps of PDC hazard levels.

  14. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service.

    PubMed

    Zou, Han; Jiang, Hao; Luo, Yiwen; Zhu, Jianjie; Lu, Xiaoxuan; Xie, Lihua

    2016-01-01

    The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize the information, as well. The emerging mobile devices, which are equipped with various sensors, become a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on the mobile device based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on/off on-board power-hungry sensors smartly and automatically, optimize their performances and reduce the power consumption of mobile devices simultaneously. Moreover, it enables seamless positioning and navigation services, especially in a semi-outdoor environment, which cannot be achieved by GPS or an indoor positioning system (IPS) easily. We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results have validated the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption. PMID:26907295

  17. A binary image reconstruction technique for accurate determination of the shape and location of metal objects in x-ray computed tomography.

    PubMed

    Wang, Jing; Xing, Lei

    2010-01-01

    The presence of metals in patients causes streaking artifacts in X-ray CT and has been recognized as a problem that limits various applications of CT imaging. Accurate localization of metals in CT images is a critical step for metal artifact reduction and for many practical applications of CT images. The purpose of this work is to develop a method for automatic determination of the shape and location of metallic object(s) in the image space. The proposed method is based on the fact that when a metal object is present in a patient, a CT image can be divided into two prominent components: high-density metal and low-density normal tissue. This prior knowledge is incorporated into an objective function as a regularization term whose role is to encourage the solution to take the form of two intensity levels. A computer simulation study and four experimental studies were performed to evaluate the proposed approach. Both simulation and experimental studies show that the presented algorithm works well even for complex metal shapes. For a hexagonally shaped metal object embedded in a water phantom, for example, the reconstruction locates the metal to within sub-millimeter accuracy.
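    One way to encode a two-intensity-level prior of the kind described above is a double-well regularization term that vanishes only when a pixel takes either expected level. This is a minimal sketch with illustrative level values; the paper's actual objective function is not reproduced here:

```python
import numpy as np

def double_well_penalty(img, mu_tissue=0.0, mu_metal=1.0):
    """Double-well regularizer: zero when every pixel equals one of the
    two expected intensity levels (low-density tissue or high-density
    metal), positive otherwise.  Level values are illustrative."""
    x = np.asarray(img, dtype=float)
    return float(np.sum((x - mu_tissue) ** 2 * (x - mu_metal) ** 2))
```

Added to a data-fidelity term, a penalty of this shape pushes an iterative reconstruction toward binary (metal / not-metal) solutions, which is the behavior the abstract's regularization term is designed to encourage.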

  18. Sensitive and accurate identification of protein–DNA binding events in ChIP-chip assays using higher order derivative analysis

    PubMed Central

    Barrett, Christian L.; Cho, Byung-Kwan

    2011-01-01

    Immuno-precipitation of protein–DNA complexes followed by microarray hybridization is a powerful and cost-effective technology for discovering protein–DNA binding events at the genome scale. It is still an unresolved challenge to comprehensively, accurately and sensitively extract binding event information from the produced data. We have developed a novel strategy composed of an information-preserving signal-smoothing procedure, higher order derivative analysis and application of the principle of maximum entropy to address this challenge. Importantly, our method does not require any input parameters to be specified by the user. Using genome-scale binding data of two Escherichia coli global transcription regulators for which a relatively large number of experimentally supported sites are known, we show that ∼90% of known sites were resolved to within four probes, or ∼88 bp. Over half of the sites were resolved to within two probes, or ∼38 bp. Furthermore, we demonstrate that our strategy delivers significant quantitative and qualitative performance gains over available methods. Such accurate and sensitive binding site resolution has important consequences for accurately reconstructing transcriptional regulatory networks, for motif discovery, for furthering our understanding of local and non-local factors in protein–DNA interactions and for extending the usefulness horizon of the ChIP-chip platform. PMID:21051353
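    The derivative-based idea can be illustrated with a toy peak caller: lightly smooth the probe-level signal, then report local maxima via sign changes of the discrete first derivative. This is a simplified sketch only; the paper's information-preserving smoothing and maximum-entropy steps are not reproduced:

```python
import numpy as np

def find_binding_peaks(signal, smooth_win=3):
    """Smooth a probe track with a short moving average, then return
    indices where the first derivative changes sign from + to - and
    the second derivative is negative (local maxima)."""
    s = np.convolve(signal, np.ones(smooth_win) / smooth_win, mode="same")
    d1 = np.diff(s)          # first discrete derivative
    d2 = np.diff(s, n=2)     # second discrete derivative
    return [i + 1 for i in range(len(d1) - 1)
            if d1[i] > 0 and d1[i + 1] <= 0 and d2[i] < 0]
```

A real caller would additionally estimate significance against the noise level; here only the derivative logic is shown.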

  19. Generating regional infrasound celerity-range models using ground-truth information and the implications for event location

    NASA Astrophysics Data System (ADS)

    Nippress, Alexandra; Green, David N.; Marcillo, Omar E.; Arrowsmith, Stephen J.

    2014-05-01

    Celerity-range models, where celerity is defined as the epicentral distance divided by the total traveltime (similar to the definition of group velocity for dispersed seismic surface waves), can be used for the association of infrasound automatic detections, for event location and for the validation of acoustic propagation simulations. Signals recorded from ground truth events are used to establish celerity-range models, but data coverage is uneven in both space and time. To achieve a high density of regional recordings we use data from USArray seismic stations recording air-to-ground coupled waves from explosions during the summers of 2004-2008 at the Utah Training and Test Range, in the western United States, together with data from five microbarograph arrays at regional distances (<1000 km). We have developed a consistent methodology for analysing the infrasound and seismic data, including choosing filter characteristics from a limited group of two-octave wide filter bands and picking the maximum peak-to-peak arrival. We clearly observe tropospheric, thermospheric and stratospheric arrivals, in agreement with regional ray tracing models. Due to data availability and the dependence of infrasound propagation on the season, we develop three regional celerity-range models for the U.S. summer, with a total of 2211 data picks. The new models suggest event locations using the Geiger method could be improved in terms of both accuracy (up to 80 per cent closer to ground truth) and precision (error ellipse area reduced by >90 per cent) when compared to those estimated using the global International Data Center model, particularly for events where stations detect arrivals at ranges <350 km. Whilst adding data-based prior information to the Bayesian Infrasound Source Localization (BISL) method also increases precision, increasing accuracy requires expanding the parameter space to include station-specific celerity distributions.
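    As a concrete illustration of the quantity being modeled, celerity and a coarse arrival classification can be computed as below. The band boundaries are commonly quoted approximations for infrasound, not the paper's fitted celerity-range models:

```python
def celerity_km_s(distance_km, traveltime_s):
    """Celerity as defined in the abstract: epicentral distance
    divided by total traveltime."""
    return distance_km / traveltime_s

def classify_arrival(celerity):
    """Rough, illustrative celerity bands for infrasound arrivals;
    boundaries are approximate, not the paper's models."""
    if celerity > 0.32:
        return "tropospheric"
    if celerity > 0.26:
        return "stratospheric"
    return "thermospheric"
```

For example, an arrival 340 km from the source after 1000 s has celerity 0.34 km/s, which these illustrative bands would label tropospheric.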

  20. Testing the Quick Seismic Event Locator and Magnitude Calculator (SSL_Calc) by Marsite Project Data Base

    NASA Astrophysics Data System (ADS)

    Tunc, Suleyman; Tunc, Berna; Caka, Deniz; Baris, Serif

    2016-04-01

    Quickly locating seismic events and calculating their size is one of the most important and challenging tasks, especially in real-time seismology. In this study, we developed a Matlab application, called SSL_Calc, to locate seismic events and calculate their magnitudes (local magnitude and empirical moment magnitude) using a single station. This newly developed software has been tested on all stations of the Marsite project "New Directions in Seismic Hazard Assessment through Focused Earth Observation in the Marmara Supersite-MARsite". The SSL_Calc algorithm is suitable for both velocity and acceleration sensors. Data have to be in GCF (Güralp Compressed Format). Online or offline data can be selected in the SCREAM software (Guralp Systems Limited) and transferred to SSL_Calc. To locate an event, P- and S-wave picks have to be marked manually in the SSL_Calc window. For magnitude calculation, the instrument response is removed and the record converted to true displacement in millimetres. The displacement data are then converted to Wood-Anderson seismometer output using the parameters Z = [0; 0], P = [-6.28+4.71j; -6.28-4.71j], A0 = 2080. For local magnitude, the maximum displacement amplitude (A) and distance (dist) are used in formula (1) for distances up to 200 km and formula (2) beyond 200 km. ML = log10(A) - (-1.118 - 0.0647*dist + 0.00071*dist^2 - 3.39E-6*dist^3 + 5.71E-9*dist^4) (1) ML = log10(A) + (2.1173 + 0.0082*dist - 0.0000059628*dist^2) (2) Following the local magnitude calculation, the program calculates two empirical moment magnitudes using formula (3) from Akkar et al. (2010) and formula (4) from Ulusay et al. (2004). Mw = 0.953*ML + 0.422 (3) Mw = 0.7768*ML + 1.5921 (4) SSL_Calc is user-friendly, easy-to-deploy software that offers individual users a practical solution for event location and ML/Mw calculation.
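    The magnitude formulas (1)-(4) quoted in the abstract translate directly into code. This sketch assumes the amplitude is already the Wood-Anderson displacement in millimetres and the distance is in kilometres; the function names are ours, not SSL_Calc's:

```python
import math

def local_magnitude(amp_mm, dist_km):
    """Local magnitude from Wood-Anderson displacement amplitude (mm)
    and epicentral distance (km), using the two distance-dependent
    formulas (1) and (2) quoted in the abstract."""
    if dist_km <= 200.0:
        return math.log10(amp_mm) - (-1.118 - 0.0647 * dist_km
                                     + 0.00071 * dist_km ** 2
                                     - 3.39e-6 * dist_km ** 3
                                     + 5.71e-9 * dist_km ** 4)
    return math.log10(amp_mm) + (2.1173 + 0.0082 * dist_km
                                 - 0.0000059628 * dist_km ** 2)

def mw_akkar(ml):
    """Empirical moment magnitude, formula (3), Akkar et al. (2010)."""
    return 0.953 * ml + 0.422

def mw_ulusay(ml):
    """Empirical moment magnitude, formula (4), Ulusay et al. (2004)."""
    return 0.7768 * ml + 1.5921
```

For a 1 mm Wood-Anderson amplitude at 100 km, formula (1) gives ML ≈ 3.3, and the two empirical Mw conversions can then be compared directly.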

  1. Single-station and single-event marsquake location and inversion for structure using synthetic Martian waveforms

    NASA Astrophysics Data System (ADS)

    Khan, A.; van Driel, M.; Böse, M.; Giardini, D.; Ceylan, S.; Yan, J.; Clinton, J.; Euchner, F.; Lognonné, P.; Murdoch, N.; Mimoun, D.; Panning, M.; Knapmeyer, M.; Banerdt, W. B.

    2016-09-01

In anticipation of the upcoming InSight mission, which is expected to deploy a single seismic station on the Martian surface in November 2018, we describe a methodology that enables locating marsquakes and obtaining information on the interior structure of Mars. The method works sequentially and is illustrated using single representative 3-component seismograms from two separate events: a relatively large teleseismic event (Mw5.1) and a small-to-moderate-sized regional event (Mw3.8). The location and origin time of the event are determined probabilistically from observations of Rayleigh waves and body-wave arrivals. From the recording of surface waves, averaged fundamental-mode group velocity dispersion data can be extracted and, in combination with body-wave arrival picks, inverted for crust and mantle structure. In the absence of Martian seismic data, we performed full waveform computations using a spectral element method (AxiSEM) to compute seismograms down to a period of 1 s. The model (radial profiles of density, P- and S-wave speed, and attenuation) used for this purpose is constructed on the basis of an average Martian mantle composition and model areotherm using thermodynamic principles, mineral physics data, and viscoelastic modeling. Noise was added to the synthetic seismic data using an up-to-date noise model that considers a whole series of possible noise sources generated in the instrument and lander, including wind-, thermal-, and pressure-induced effects and electromagnetic noise. The examples studied here, which are based on the assumption of spherical symmetry, show that we are able to determine epicentral distance and origin time to accuracies of ∼ 0.5-1° and ± 3-6 s, respectively. For the events and the particular noise level chosen, information on Rayleigh-wave group velocity dispersion in the period range ∼ 14-48 s (Mw5.1) and ∼ 14-34 s (Mw3.8) could be determined. Stochastic inversion of dispersion data in combination with body-wave travel time
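The distance estimate in such single-station methods ultimately rests on differential arrival times. As a much-simplified illustration (the actual method is probabilistic; the constant average velocities and straight-ray propagation here are assumptions for this sketch), the delay between the P and fundamental-mode Rayleigh arrivals can be inverted for epicentral distance:

```python
def epicentral_distance_km(t_p, t_rayleigh, v_p=8.0, v_r=3.0):
    """Epicentral distance from the delay between the body-wave (P)
    arrival and the Rayleigh arrival, assuming straight-ray propagation
    at average speeds v_p and v_r (km/s); both are placeholder values."""
    dt = t_rayleigh - t_p  # observed delay in seconds
    return dt / (1.0 / v_r - 1.0 / v_p)

# With distance d known, origin time follows as t0 = t_p - d / v_p.
```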

  2. A global 3D P-velocity model of the Earth's crust and mantle for improved event location : SALSA3D.

    SciTech Connect

    Young, Christopher John; Steck, Lee K.; Phillips, William Scott; Ballard, Sanford; Chang, Marcus C.; Rowe, Charlotte A.; Encarnacao, Andre Villanova; Begnaud, Michael A.; Hipp, James Richard

    2010-07-01

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is ~50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ~400 processors. Resolution of our model is assessed

  3. SALSA3D : a global 3D p-velocity model of the Earth's crust and mantle for improved event location.

    SciTech Connect

    Encarnacao, Andre Villanova; Begnaud, Michael A.; Rowe, Charlotte A.; Young, Christopher John; Chang, Marcus C.; Ballard, Sally C.; Hipp, James Richard

    2010-06-01

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is ~50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ~400 processors. Resolution of our model is assessed

  4. a Multiple Data Set Joint Inversion Global 3d P-Velocity Model of the Earth's Crust and Mantle for Improved Seismic Event Location

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Begnaud, M. L.; Hipp, J. R.; Chael, E. P.; Encarnacao, A.; Maceira, M.; Yang, X.; Young, C. J.; Phillips, W.

    2013-12-01

    SALSA3D is a global 3D P wave velocity model of the Earth's crust and mantle developed specifically to provide seismic event locations that are more accurate and more precise than are locations from 1D and 2.5D models. In this paper, we present the most recent version of our model, for the first time jointly derived from multiple types of data: body wave travel times, surface wave group velocities, and gravity. The latter two are added to provide information in areas with poor body wave coverage, and are down-weighted in areas where body wave coverage is good. To constrain the inversions, we invoked empirical relations among the density, S velocity, and P velocity. We demonstrate the ability of the new SALSA3D model to reduce mislocations and generate statistically robust uncertainty estimates for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. We obtain path-dependent travel time prediction uncertainties for our model by computing the full 3D model covariance matrix of our tomographic system and integrating the model slowness variance and covariance along paths of interest. This approach yields very low travel time prediction uncertainties for well-sampled paths through the Earth and higher uncertainties for paths that are poorly represented in the data set used to develop the model. While the calculation of path-dependent prediction uncertainties with this approach is computationally expensive, uncertainties can be pre-computed for a network of stations and stored in 3D lookup tables that can be quickly and efficiently interrogated using GeoTess software.
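The path-dependent travel-time prediction uncertainty described above is a quadratic form in the ray-path lengths and the model slowness covariance. A toy sketch under strong simplifications (the tiny hand-built covariance matrix below stands in for the full 3D model covariance):

```python
def travel_time_variance(lengths, cov):
    """Variance of a predicted travel time t = sum_i l_i * s_i, where l_i
    is the ray-path length in model cell i and cov[i][j] is the slowness
    variance/covariance between cells:  var(t) = l^T C l."""
    n = len(lengths)
    return sum(lengths[i] * cov[i][j] * lengths[j]
               for i in range(n) for j in range(n))

# Illustration: two cells; positively correlated slowness errors inflate
# the prediction uncertainty relative to the uncorrelated case.
lengths = [100.0, 50.0]                      # km traversed in each cell
cov_uncorr = [[1e-6, 0.0], [0.0, 1e-6]]      # slowness covariance, (s/km)^2
cov_corr   = [[1e-6, 5e-7], [5e-7, 1e-6]]
```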

  5. Location, Location, Location!

    ERIC Educational Resources Information Center

    Ramsdell, Kristin

    2004-01-01

    Of prime importance in real estate, location is also a key element in the appeal of romances. Popular geographic settings and historical periods sell, unpopular ones do not--not always with a logical explanation, as the author discovered when she conducted a survey on this topic last year. (Why, for example, are the French Revolution and the…

  6. A global 3D P-Velocity model of the Earth's crust and mantle for improved event location.

    SciTech Connect

    Ballard, Sanford; Encarnacao, Andre Villanova; Begnaud, Michael A.; Rowe, Charlotte A.; Lewis, Jennifer E.; Young, Christopher John; Chang, Marcus C.; Hipp, James Richard

    2010-05-01

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is > 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ~400 processors. Resolution of our model

  7. SALSA3D - A Global 3D P-Velocity Model of the Earth's Crust and Mantle for Improved Event Location

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Begnaud, M. L.; Young, C. J.; Hipp, J. R.; Chang, M.; Encarnacao, A. V.; Rowe, C. A.; Phillips, W. S.; Steck, L.

    2010-12-01

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is ~50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ~400 processors. Resolution of our model is assessed using a

  8. The Role of Color Cues in Facilitating Accurate and Rapid Location of Aided Symbols by Children with and without Down Syndrome

    ERIC Educational Resources Information Center

    Wilkinson, Krista; Carlin, Michael; Thistle, Jennifer

    2008-01-01

    Purpose: This research examined how the color distribution of symbols within a visual aided augmentative and alternative communication array influenced the speed and accuracy with which participants with and without Down syndrome located a target picture symbol. Method: Eight typically developing children below the age of 4 years, 8 typically…

  9. [Research on the impact of dust event frequency on atmospheric visibility variance: a case study of typical weather stations located along the dust route to Beijing].

    PubMed

    Qiu, Yu-jun; Zou, Xue-yong; Zhang, Chun-lai

    2006-06-01

The relationship between dust event frequency and atmospheric visibility deviation is analyzed using daily visibility data and records of various dust events from 1971 to 2000 at Beijing and 13 other typical weather stations located along the dust route to Beijing. Results show that the visibility deviation increases by one standard deviation in response to each decrease in dust event frequency. The influence of dust events on visibility stems from high-frequency changes in wind velocity: a change of one standard deviation in wind velocity can result in the dust event frequency increasing by 30%. High-frequency changes in the near-surface wind influence the occurrence of dust events and also the fluctuation of the daily visibility deviation. Abnormally low visibility events and the visibility deviation are significantly positively correlated. An increase in the average wind run leads to a higher frequency of dust events and consequently to more abnormally low visibility events. Abnormally low visibility events relate differently to floating dust, sandstorms, and blowing dust, respectively. PMID:16921932

  10. Clustering and location of mining induced seismicity in the Ruhr Basin by automated master event comparison based on Dynamic Waveform Matching (DWM)

    NASA Astrophysics Data System (ADS)

    Schulte-Theis, Hartwig; Joswig, Manfred

    1993-02-01

Most of the local seismicity in the Ruhr Basin can be separated into characteristic clusters of similar, mining-induced earthquakes. Each cluster can be represented by a strong master event, so weak events can be associated with the corresponding clusters by master-event comparison. The seismic signal matching is performed over the entire seismogram length by a nonlinear correlation termed DWM. DWM permits stretching and shortening between the two signals and overcomes the ambiguities in phase correlation through a consistent matching path. The automatic cluster association searches for the best DWM correlation between the current event and all master events of the appropriate epicenter region. Knowing the P- and S-onsets of the master event, they can be transposed to the current event along the correlation path with one-sample accuracy. The method has been applied to all BUG small-array recordings of local events from the Hamm region from 1987-1990 to investigate spatial and temporal clustering. Within the clusters, a high percentage of weak events could be located relative to their master events. The temporal clustering resolved seismic activity that typically lasts a few months per cluster, although single aftershocks occur in the following years.
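DWM is a nonlinear alignment in the spirit of dynamic time warping. A generic DTW sketch (not the authors' DWM implementation) shows how a matching path both scores similarity and lets picks be transposed from the master event to a weak event:

```python
def dtw_path(x, y):
    """Dynamic-time-warping alignment of two sampled signals.
    Returns (total_cost, path), where path is a list of index pairs
    (i, j) matching sample i of x to sample j of y."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack the optimal matching path from the end.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
        if step == D[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == D[i - 1][j]:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return D[n][m], path

# A pick at master sample i transposes to the weak-event samples j
# for which (i, j) lies on the returned path.
```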

  11. A multi-station matched filter and coherent network processing approach to the automatic detection and relative location of seismic events

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Näsholm, Sven Peter; Kværna, Tormod

    2014-05-01

Correlation detectors facilitate seismic monitoring in the near vicinity of previously observed events at far lower detection thresholds than are possible using the methods applied in most existing processing pipelines. Seismic arrays have been demonstrated to be highly beneficial in driving down the detection threshold, due to superior noise suppression, and also in eliminating vast numbers of false alarms by performing array processing on the multi-channel output of the correlation detectors. This last property makes it highly desirable to run continuous detectors for sites of repeating seismic events on a single-array basis for many arrays across a global network. Spurious detections for a given signal template on a single array can, however, still occur when an unrelated wavefront crosses the array from a direction very similar to that of the master-event wavefront. We present an algorithm which automatically scans the output from multiple stations - both array and 3-component - for coherence between the individual station correlator outputs that is consistent with a disturbance in the vicinity of the master event. The procedure results in a categorical rejection of an event hypothesis in the absence of support from stations other than the one generating the trigger, and provides a fully automatic relative event location estimate when patterns in the correlation detector outputs are found to be consistent with a common event. This coherence-based approach removes the need to make explicit time-difference measurements at single stations, eliminating a potential source of error. The method is demonstrated for the North Korea nuclear test site, and the relative event location estimates obtained for the 2006, 2009, and 2013 events are compared with previous estimates from different station configurations.
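At the core of any correlation detector is a sliding normalized cross-correlation of a master-event template against continuous data. A minimal single-channel sketch (the paper's multi-station coherence logic is not reproduced here):

```python
import math

def correlate_template(data, template):
    """Sliding normalized cross-correlation of a template against a
    continuous trace; values near 1 flag repeats of the master event."""
    nt = len(template)
    tm = sum(template) / nt
    t0 = [t - tm for t in template]
    tnorm = math.sqrt(sum(v * v for v in t0))
    out = []
    for k in range(len(data) - nt + 1):
        win = data[k:k + nt]
        wm = sum(win) / nt
        w0 = [w - wm for w in win]
        wnorm = math.sqrt(sum(v * v for v in w0))
        if wnorm == 0.0 or tnorm == 0.0:
            out.append(0.0)  # flat window: correlation undefined, report 0
            continue
        out.append(sum(a * b for a, b in zip(t0, w0)) / (tnorm * wnorm))
    return out
```

A detection would then be declared where the output exceeds a threshold, with the multi-station step of the paper requiring consistent detections across the network.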

  12. Contribution of harmonicity and location to auditory object formation in free field: Evidence from event-related brain potentials

    NASA Astrophysics Data System (ADS)

    McDonald, Kelly L.; Alain, Claude

    2005-09-01

The contribution of location and harmonicity cues in sound segregation was investigated using behavioral reports and source waveforms derived from the scalp-recorded evoked potentials. Participants were presented with sounds composed of multiple harmonics in a free-field environment. The third harmonic was either tuned or mistuned and could be presented from the same or a different location from the remaining harmonics. Presenting the third harmonic at a different location than the remaining harmonics increased the likelihood of hearing the tuned or slightly (i.e., 2%) mistuned harmonic as a separate object. Partials mistuned by 16% of their original value ``popped out'' of the complex, and this was paralleled by an object-related negativity (ORN) superimposed on the N1 and P2 components. For the 2% mistuned stimuli, the ORN was present only when the mistuned harmonic was presented at a different location than the remaining harmonics. Presenting the tuned harmonic at a different location also yielded changes in neural activity between 150 and 250 ms after sound onset. The behavioral and electrophysiological results indicate that listeners can segregate sounds based on harmonicity or location alone. The results also indicate that a conjunction of harmonicity and location cues contributes to sound segregation primarily when harmonicity is ambiguous.

  13. Locations of Long-Period Seismic Events Beneath the Soufriere Hills Volcano, Montserrat, W.I., Inferred from a Waveform Semblance Method

    NASA Astrophysics Data System (ADS)

    Taira, T.; Linde, A. T.; Sacks, I. S.; Shalev, E.; Malin, P. E.; Nielsen, J. M.; Voight, B.; Hidayat, D.; Mattioli, G. S.

    2005-05-01

Analysis of long-period (LP) seismic events provides information about the internal state of a volcano because LP events are attributed mainly to fluid dynamics between magma and hydrothermal reservoirs within the volcano (e.g., Chouet, 1992). We analyzed LP events recorded by three borehole seismic stations (AIRS, OLVN, and TRNT) at Soufriere Hills Volcano (SHV), Montserrat, W.I., during the period from March to June 2003. The borehole stations were deployed by the Caribbean Andesite Lava Island Precision Seismo-geodetic Observatory project (e.g., Shalev et al., 2003; Mattioli et al., 2004) and were equipped with three-component short-period velocity seismometers with a sampling rate of 200 Hz. We selected 61 LP events with high signal-to-noise ratios. Almost all of the selected LP events are characterized by dominant periods in a range of 0.3 to 2.0 sec and durations of about 30 sec. Several LP events appear to be generated by a single source, based on the strong similarity of their waveforms. We first identified a family of LP events based on the dimensionless cross-correlation coefficient (CCC) of their spectral amplitudes in a period range of 0.2 to 2.0 sec, under the assumption of a fluid-driven crack model (Chouet, 1986). Seven LP events are identified as a family with high CCCs, with CCCs on the vertical component at AIRS greater than 0.88 for each event. This result suggests that these LP events are probably due to repeated excitation of an identical source mechanism. We next attempted to estimate the locations of the identified family of LP events by a waveform semblance method (Kawakatsu et al., 2000; Almendros and Chouet, 2003). To apply this method, we searched for seismic phases with rectilinear polarization in the LP events by performing a complex polarization analysis (Vidale, 1986). These phases are identified where the averaged particle-motion ellipticity across all stations in a time window is less than 0.50. Incident angles of the
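Waveform semblance measures the coherence of aligned multichannel records. A minimal sketch of the standard semblance coefficient (a generic form, not the exact implementation of the methods cited above):

```python
def semblance(traces):
    """Semblance of N aligned traces over a common window:
    S = sum_t (sum_n u_n(t))^2 / (N * sum_t sum_n u_n(t)^2).
    S approaches 1 for identical traces and 1/N for incoherent ones."""
    n = len(traces)
    num = sum(sum(tr[t] for tr in traces) ** 2
              for t in range(len(traces[0])))
    den = n * sum(v * v for tr in traces for v in tr)
    return num / den

# In a grid search over trial source locations, each trace is shifted by
# its predicted travel time and the location maximizing semblance is kept.
```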

  14. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    PubMed

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention.

  15. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    PubMed

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25445999

  16. Seismicity Along the Endeavour Segment of the Juan de Fuca Ridge: Automated Event Locations for an Ocean-Bottom Seismometer Network

    NASA Astrophysics Data System (ADS)

    Weekly, R. T.; Wilcock, W. S.; Hooft, E. E.; Toomey, D. R.; McGill, P. R.

    2007-12-01

From 2003-2006, the W.M. Keck Foundation supported the operation of a network of eight ocean-bottom seismometers (OBSs) that were deployed with a remotely operated vehicle along the central portion of the Endeavour Segment of the Juan de Fuca mid-ocean ridge as part of a multidisciplinary prototype NEPTUNE experiment. Data from 2003-2004 were initially analyzed during a research apprenticeship class at the University of Washington's Friday Harbor Laboratories. Eight student analysts located ~13,000 earthquakes along the Endeavour Segment. Analysis of data from 2004-2005 has to date been limited to locating ~6,000 earthquakes associated with a swarm in February-March 2005 near the northern end of the Endeavour Segment. The remaining data include several significant swarms, and it is anticipated that tens of thousands of earthquakes still need to be located. In order to efficiently obtain a complete catalog of high-quality locations for the 3-year experiment, we are developing an automatic method for earthquake location. We first apply a 5-Hz high-pass filter and identify triggers when the ratio of the root-mean-square (RMS) amplitudes in short- and long-term windows exceeds a specified threshold. We search for events that are characterized by triggers within a short time interval on the majority of stations and use the signal spectra to eliminate events that are the result of 20-Hz fin and blue whale vocalizations. An autoregressive technique is applied to a short time window centered on the trigger time to pick P-wave times on each station's vertical channel. We locate the earthquake with these picks and either attempt to repick or eliminate arrivals with unacceptable residuals. Preliminary S-wave picks are then made on the horizontal channels by applying a 5-12 Hz bandpass filter, identifying the peak RMS amplitude for a short running window, and making a pick at the time the RMS amplitude rises above 50% of this value. The picks are refined using the
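The trigger stage described above is a classic STA/LTA detector. A simple RMS-window sketch (the window lengths and threshold below are illustrative placeholders, not the study's values):

```python
import math

def sta_lta(trace, nsta, nlta):
    """Ratio of short-term to long-term RMS amplitude, computed over the
    nsta and nlta samples preceding each index. The first nlta samples
    have no ratio and are left at 0.0."""
    out = [0.0] * len(trace)
    for k in range(nlta, len(trace)):
        sta = math.sqrt(sum(v * v for v in trace[k - nsta:k]) / nsta)
        lta = math.sqrt(sum(v * v for v in trace[k - nlta:k]) / nlta)
        if lta > 0.0:
            out[k] = sta / lta
    return out

def triggers(trace, nsta, nlta, threshold):
    """Sample indices where the STA/LTA ratio first crosses the threshold."""
    r = sta_lta(trace, nsta, nlta)
    return [k for k in range(1, len(r))
            if r[k] >= threshold and r[k - 1] < threshold]
```

On a synthetic trace of unit-amplitude noise followed by a tenfold amplitude jump, the detector triggers one sample after the onset enters the short-term window.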

  17. Seismicity Along the Endeavour Segment of the Juan de Fuca Ridge: Automated Event Locations for an Ocean-Bottom Seismometer Network

    NASA Astrophysics Data System (ADS)

    Weekly, R. T.; Wilcock, W. S.; Hooft, E. E.; Toomey, D. R.; McGill, P. R.

    2004-12-01

From 2003-2006, the W.M. Keck Foundation supported the operation of a network of eight ocean-bottom seismometers (OBSs) that were deployed with a remotely operated vehicle along the central portion of the Endeavour Segment of the Juan de Fuca mid-ocean ridge as part of a multidisciplinary prototype NEPTUNE experiment. Data from 2003-2004 were initially analyzed during a research apprenticeship class at the University of Washington's Friday Harbor Laboratories. Eight student analysts located ~13,000 earthquakes along the Endeavour Segment. Analysis of data from 2004-2005 has to date been limited to locating ~6,000 earthquakes associated with a swarm in February-March 2005 near the northern end of the Endeavour Segment. The remaining data include several significant swarms, and it is anticipated that tens of thousands of earthquakes still need to be located. In order to efficiently obtain a complete catalog of high-quality locations for the 3-year experiment, we are developing an automatic method for earthquake location. We first apply a 5-Hz high-pass filter and identify triggers when the ratio of the root-mean-square (RMS) amplitudes in short- and long-term windows exceeds a specified threshold. We search for events that are characterized by triggers within a short time interval on the majority of stations and use the signal spectra to eliminate events that are the result of 20-Hz fin and blue whale vocalizations. An autoregressive technique is applied to a short time window centered on the trigger time to pick P-wave times on each station's vertical channel. We locate the earthquake with these picks and either attempt to repick or eliminate arrivals with unacceptable residuals. Preliminary S-wave picks are then made on the horizontal channels by applying a 5-12 Hz bandpass filter, identifying the peak RMS amplitude for a short running window, and making a pick at the time the RMS amplitude rises above 50% of this value. The picks are refined using the

  18. Regional seismic event identification and improved locations with small arrays and networks. Final report, 7 May 1993-30 September 1995

    SciTech Connect

    Vernon, F.L.; Minster, J.B.; Orcutt, J.A.

    1995-09-20

    This final report contains a summary of our work on the use of seismic networks and arrays to improve locations and identify small seismic events. We have developed techniques to migrate 3-component array records of local, regional and teleseismic wavetrains to directly image buried two- and three-dimensional heterogeneities (e.g. layer irregularities, volumetric heterogeneities) in the vicinity of the array. We have developed a technique to empirically characterize local and regional seismic coda by binning and stacking network recordings of dense aftershock sequences. The principal motivation for this work was to look for robust coda phases dependent on source depth. We have extended our ripple-fired event discriminant (based on the time-independence of coda produced by ripple firing) by looking for an independence of the coda from the recording direction (also indicative of ripple firing).

  19. Development of double-pair double difference earthquake location algorithm for improving earthquake locations

    NASA Astrophysics Data System (ADS)

    Guo, Hao; Zhang, Haijiang

    2016-10-01

    The event-pair double-difference (DD) earthquake location method, as incorporated in hypoDD, has been widely used to improve relative earthquake locations by using event-pair differential arrival times from pairs of events to common stations, because common path anomalies outside the source region cancel along the similar ray paths. Similarly, station-pair differential arrival times from one event to pairs of stations can also be used to improve earthquake locations by canceling out the event origin time and some path anomalies inside the source region. To utilize the advantages of both DD location methods, we have developed a new double-pair DD location method that uses differential times constructed from pairs of events to pairs of stations to determine higher-precision relative earthquake locations. Compared to the event-pair and station-pair DD location methods, the new method removes event origin times and station correction terms from the inversion system and cancels out path anomalies both outside and inside the source region at the same time. The new method is tested on earthquakes around the San Andreas Fault, California, to validate its performance. The relocations demonstrate that the double-pair DD method sharpens the images of seismicity with smaller relative location uncertainties than the event-pair DD method, and thus reveals more fine-scale structures. In comparison, among the three DD location methods, the station-pair method best improves the absolute earthquake locations. For this reason, we further propose a new location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time, which is validated with both synthetic and real datasets.
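
    The cancellation at the heart of the double-pair datum can be verified directly. This is a minimal sketch, not the hypoDD-family implementation; the travel times, origin times, and station statics below are synthetic.

```python
import numpy as np

def double_pair_time(t, i, j, k, l):
    """Double-pair datum (t_ik - t_jk) - (t_il - t_jl) for events i, j and stations k, l."""
    return (t[i, k] - t[j, k]) - (t[i, l] - t[j, l])

# Synthetic arrival time = pure travel time + event origin time + station static.
rng = np.random.default_rng(1)
ttrav = rng.uniform(5.0, 10.0, size=(2, 2))        # pure travel times
origin = np.array([3.0, 7.0])                      # event origin times
static = np.array([0.4, -0.2])                     # station correction terms
t_obs = ttrav + origin[:, None] + static[None, :]

d_obs = double_pair_time(t_obs, 0, 1, 0, 1)
d_true = double_pair_time(ttrav, 0, 1, 0, 1)
# Both origin times and station statics drop out: d_obs equals d_true.
```

The datum is therefore insensitive to origin-time errors and static station terms, which is exactly why the abstract's method can remove them from the inversion system.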

  20. Theatre Applications: Locations, Event, Futurity

    ERIC Educational Resources Information Center

    Mackey, Sally; Fisher, Amanda Stuart

    2011-01-01

    The three papers and the pictorial essay that follow Rustom Bharucha's keynote all originated at "Theatre Applications" (Central School of Speech and Drama, London, April 2010). One theme of the conference was "cultural geographies of dislocation, place and space"; the three papers and pictorial essay respond to that theme. All address issues of…

  1. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  2. Relative Locations of the DPRK Nuclear Tests Using Regional and Teleseismic Data

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Mykkeltveit, Svein

    2016-04-01

    Accurate relative location estimates for the announced nuclear tests carried out at the Punggye-ri test site in North Korea make it far easier to constrain the absolute coordinates of the events. With four tests now recorded well both at regional and teleseismic distances with excellent azimuthal coverage, we have a vast number of differential traveltime measurements which reduce substantially the variability in the relative location estimates. A large redundancy of data allows for independent relative location estimates for each event pair using different sets of stations and phases. Superposition of multiple grids of differential traveltime residuals results in relative event location estimates which are less sensitive to uncertainties in the time measurements or in the modelled traveltime gradients. Of particular interest is the location of the October 9, 2006, test. This event was approximately 2 km to the east of the 2009, 2013, and 2016 nuclear tests, and its precise location will help to fix the template of relative locations in the terrain at the test site. This smaller event was recorded by fewer stations and with a poorer signal-to-noise ratio. Because of the somewhat different source location, the waveform semblance with the other events is diminished, which complicates the measurement of differential time delays, in particular for the higher-frequency regional observations. The uncertainty in the location of the 2006 event is reduced considerably because it can be estimated relative to three different events with exceptionally accurate relative location estimates.
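
    A central ingredient of such relative locations is measuring differential traveltimes by waveform cross-correlation. A minimal sketch with a synthetic wavelet and an imposed shift (all values hypothetical, and without the filtering and windowing real processing would apply):

```python
import numpy as np

fs = 100.0                                     # sampling rate, Hz (hypothetical)
t = np.arange(0.0, 2.0, 1.0 / fs)
# Gaussian-windowed sine as a stand-in for a recorded phase.
wavelet = np.exp(-((t - 0.5) ** 2) / 0.005) * np.sin(2.0 * np.pi * 8.0 * t)

shift = 13                                     # imposed delay in samples
b = np.roll(wavelet, shift)                    # "second event" waveform

xc = np.correlate(b, wavelet, mode="full")
lag = int(np.argmax(xc)) - (len(wavelet) - 1)  # lag (samples) at the peak
dt = lag / fs                                  # differential time, seconds
```

The correlation peak recovers the imposed 13-sample shift; in practice sub-sample refinement (e.g. interpolating around the peak) would follow.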

  3. Earthquake location in island arcs

    USGS Publications Warehouse

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    -velocity lithospheric slab. In application, JHD has the practical advantage that it does not require the specification of a theoretical velocity model for the slab. Considering earthquakes within a 260 km long by 60 km wide section of the Aleutian main thrust zone, our results suggest that the theoretical velocity structure of the slab is presently not sufficiently well known that accurate locations can be obtained independently of locally recorded data. Using a locally recorded earthquake as a calibration event, JHD gave excellent results over the entire section of the main thrust zone here studied, without showing a strong effect that might be attributed to spatially varying source-station anomalies. We also calibrated the ray-tracing method using locally recorded data and obtained results generally similar to those obtained by JHD. ?? 1982.

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  7. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  9. A Hybrid Waveform Inversion Scheme for the Determination of Locations and Moment Tensors of the Microseismic Events and the uncertainty and Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Droujinine, A.; Shen, P.

    2011-12-01

    In this research, we developed a new hybrid waveform inversion scheme to determine the hypocenters, origin times and moment tensors of microseismic events induced by hydraulic fracturing. To overcome the nonlinearity in the determination of the hypocenter and origin time of a microseismic event, we perform a global search for the hypocenter (x,y,z) and origin time (t0) in a gridded four-dimensional model space, and at each grid point we perform a linear inversion for the moment tensor components (M11, M22, M33, M12, M13, M23) in a six-dimensional model subspace. Through this two-step approach, we find a globally optimal estimate in the combined four- plus six-dimensional model space. We then perform a nonlinear, gradient-based inversion for an improved hypocenter and origin time, starting from that globally optimal estimate; the linear inversion for the moment tensor can also be performed at each iteration of the nonlinear inversion. With this grid-linear-nonlinear hybrid approach, we avoid being trapped in local minima of the inverse problem while reducing the computational cost. The Green's functions between the monitored region and the receivers are computed by elastic wave reciprocity. We have also performed a systematic study of the uncertainty, resolution and sensitivity of the method and found that it has superior performance in determining the hypocenter and origin time of a microseismic event over traditional travel-time methods, while also delivering a focal mechanism solution for the event. The method is tested on a dataset from a hydraulic fracturing operation in an oil reservoir.
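
    The grid-plus-linear step can be sketched as follows. The Green's functions here are random stand-ins, so this illustrates the structure of the search (grid over source position/time, linear least squares for the six moment-tensor components at each node), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_data = 50, 40

# G[node] maps the 6 moment-tensor components to n_data waveform samples.
G = rng.normal(size=(n_grid, n_data, 6))       # random stand-in Green's functions
true_node = 17                                 # hidden (x, y, z, t0) grid node
m_true = rng.normal(size=6)                    # hidden moment tensor
d_obs = G[true_node] @ m_true + 0.01 * rng.normal(size=n_data)

best_node, best_misfit, best_m = None, np.inf, None
for node in range(n_grid):
    # Linear inversion for the moment tensor at this trial grid node.
    m, *_ = np.linalg.lstsq(G[node], d_obs, rcond=None)
    misfit = np.linalg.norm(d_obs - G[node] @ m)
    if misfit < best_misfit:
        best_node, best_misfit, best_m = node, misfit, m
# best_node recovers the true node; best_m approximates m_true.
```

Because the moment-tensor subproblem is linear, only the four source coordinates need a grid (or, subsequently, gradient) search, which is what keeps the hybrid scheme tractable.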

  10. Photographic Analysis Technique for Assessing External Tank Foam Loss Events

    NASA Technical Reports Server (NTRS)

    Rieckhoff, T. J.; Covan, M.; OFarrell, J. M.

    2001-01-01

    A video camera and recorder were placed inside the solid rocket booster forward skirt in order to view foam loss events over an area on the external tank (ET) intertank surface. In this Technical Memorandum, a method of processing video images to allow rapid detection of permanent changes indicative of foam loss events on the ET surface was defined and applied to accurately count, categorize, and locate such events.

  11. Improved location procedures at the International Seismological Centre

    NASA Astrophysics Data System (ADS)

    Bondár, István; Storchak, Dmitry

    2011-09-01

    The International Seismological Centre (ISC) is a non-governmental, non-profit organization with the primary mission of producing the definitive account of the Earth's seismicity. The ISC Bulletin covers some 50 yr (1960-2011) of seismicity. Recent years have seen a dramatic increase both in the number of reported events and especially in the number of reported phases, owing to the ever-increasing number of stations worldwide. Similar ray paths produce correlated traveltime prediction errors due to unmodelled heterogeneities in the Earth, resulting in underestimated location uncertainties and, for unfavourable network geometries, location bias. Hence, the denser and more unbalanced the global seismic station coverage becomes, the less defensible is the assumption, made by most location algorithms, that the observations are independent. To address this challenge we have developed a new location algorithm for the ISC that accounts for the correlated error structure and uses all IASPEI standard phases with a valid ak135 traveltime prediction to obtain more accurate event locations. In this paper we describe the new ISC locator and present validation tests, relocating the ground truth events in the IASPEI Reference Event List as well as the entire ISC Bulletin. We show that the new ISC location algorithm provides small but consistent location improvements, considerable improvements in depth determination, and significantly more accurate formal uncertainty estimates. We demonstrate that the new algorithm, through the use of later phases and testing for depth resolution, clusters event locations considerably more tightly, thus providing an improved view of the seismicity of the Earth.

  12. Relative earthquake location for remote offshore and tectonically active continental regions using surface waves

    NASA Astrophysics Data System (ADS)

    Cleveland, M.; Ammon, C. J.; Vandemark, T. F.

    2015-12-01

    Earthquake locations are a fundamental parameter necessary for reliable seismic monitoring and seismic event characterization. Within dense continental seismic networks, event locations can be accurately and precisely estimated. However, for many regions of interest, existing catalog data and traditional location methods provide neither accurate nor precise hypocenters. In particular, for isolated continental and offshore areas, seismic event locations are estimated primarily using distant observations, often resulting in inaccurate and imprecise locations. The use of larger, moderate-size events is critical to the construction of useful travel-time corrections in regions of strong geologic heterogeneity. Double-difference methods applied to cross-correlation-measured Rayleigh- and Love-wave time shifts are an effective tool for providing improved epicentroid locations and relative origin-time shifts in these regions. Previous studies have applied correlation of R1 and G1 waveforms to moderate-magnitude vertical strike-slip transform-fault and normal faulting earthquakes from nearby ridges. In this study, we explore the utility of phase-match filtering techniques applied to surface waves to improve cross-correlation measurements, particularly for smaller magnitude seismic events. We also investigate the challenges associated with applying surface-wave location methods to shallow earthquakes in tectonically active continental regions.

  13. Towards an Accurate Orbital Calibration of Late Miocene Climate Events: Insights From a High-Resolution Chemo- and Magnetostratigraphy (8-6 Ma) from Equatorial Pacific IODP Sites U1337 and U1338

    NASA Astrophysics Data System (ADS)

    Drury, A. J.; Westerhold, T.; Frederichs, T.; Wilkens, R.; Channell, J. E. T.; Evans, H. F.; Hodell, D. A.; John, C. M.; Lyle, M. W.; Roehl, U.; Tian, J.

    2015-12-01

    In the 8-6 Ma interval, the late Miocene is characterised by a long-term -0.3 ‰ reduction in benthic foraminiferal δ18O and distinctive short-term δ18O cycles, possibly related to dynamic Antarctic ice sheet variability. In addition, the late Miocene carbon isotope shift (LMCIS) marks a permanent long-term -1 ‰ shift in oceanic δ13CDIC, which is the largest, long-term perturbation in the global marine carbon cycle since the mid Miocene Monterey excursion. Accurate age control is crucial to investigate the origin of the δ18O cyclicity and determine the precise onset of the LMCIS. The current Geological Time Scale in the 8-6 Ma interval is constructed using astronomical tuning of sedimentary cycles in Mediterranean outcrops. However, outside of the Mediterranean, a comparable high-resolution chemo-, magneto-, and cyclostratigraphy at a single DSDP/ODP/IODP site does not exist. Generating an accurate astronomically-calibrated chemo- and magneto-stratigraphy in the 8-6 Ma interval became possible with retrieval of equatorial Pacific IODP Sites U1337 and U1338, as both sites have sedimentation rates ~2 cm/kyr, high biogenic carbonate content, and magnetic polarity stratigraphies. Here we present high-resolution correlation of Sites U1337 and U1338 using Milankovitch-related cycles in core images and X-ray fluorescence core scanning data. By combining inclination and declination data from ~400 new discrete samples with shipboard measurements, we are able to identify 14 polarity reversals at Site U1337 from the young end of Chron C3An.1n (~6.03 Ma) to the onset of Chron C4n.2n (~8.11 Ma). New high-resolution (<1.5 kyr) stable isotope records from Site U1337 correlate highly with Site U1338 records, enabling construction of a high-resolution stack. Initial orbital tuning of the U1337-U1338 records show that the δ18O cyclicity is obliquity driven, indicating high-latitude climate forcing. The LMCIS starts ~7.55 Ma and is anchored in Chron C4n.1n, which is

  14. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third- and fourth-order accuracy, are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
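
    For context, here is a compact numpy sketch of the classical monotone scheme the abstract improves upon (Fritsch-Carlson style, with harmonic-mean slopes), not the higher-order algorithms of the paper. The data points are arbitrary.

```python
import numpy as np

def pchip_slopes(x, y):
    """Monotonicity-preserving node slopes: harmonic mean of adjacent secants."""
    s = np.diff(y) / np.diff(x)                  # interval secants
    d = np.zeros_like(y)
    d[0], d[-1] = s[0], s[-1]                    # one-sided endpoint slopes
    for i in range(1, len(y) - 1):
        if s[i - 1] * s[i] > 0.0:                # no local extremum at node i
            d[i] = 2.0 * s[i - 1] * s[i] / (s[i - 1] + s[i])
        # else: d[i] stays 0, forcing a flat tangent at the extremum
    return d

def pchip_eval(x, y, d, q):
    """Evaluate the cubic Hermite interpolant with node slopes d at points q."""
    idx = np.clip(np.searchsorted(x, q) - 1, 0, len(x) - 2)
    h = x[idx + 1] - x[idx]
    t = (q - x[idx]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1                # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[idx] + h10 * h * d[idx] + h01 * y[idx + 1] + h11 * h * d[idx + 1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.5, 2.0, 2.5, 2.6])          # monotone increasing data
d = pchip_slopes(x, y)
xf = np.linspace(0.0, 4.0, 401)
yf = pchip_eval(x, y, d, xf)
# The interpolant honors the data and never overshoots between nodes.
```

The harmonic-mean slope never exceeds twice the smaller adjacent secant, which keeps each cubic piece inside the Fritsch-Carlson monotonicity region; the accuracy cost of that constraint near extrema is precisely what the abstract's relaxed, geometric construction addresses.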

  15. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
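
    As a much simpler illustration of formal order of accuracy (not one of the paper's algorithms), a standard fourth-order central difference can be verified by grid refinement: halving the step should shrink the error by roughly 2^4.

```python
import numpy as np

def d1_fourth_order(f, x, h):
    """Five-point central difference for f'(x), truncation error O(h^4)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x0 = 0.7
exact = np.cos(x0)                              # derivative of sin at x0
e1 = abs(d1_fourth_order(np.sin, x0, 1e-2) - exact)
e2 = abs(d1_fourth_order(np.sin, x0, 5e-3) - exact)
order = np.log2(e1 / e2)                        # observed convergence order
# order is close to 4, confirming the scheme's formal accuracy.
```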

  16. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  17. Location of microearthquakes in a hot, permeable and noisy geothermal production environment - a challenge (Invited)

    NASA Astrophysics Data System (ADS)

    Bannister, S. C.; Sherburn, S.; Bourguignon, S.; Clarke, D. S.; Parolai, S.

    2009-12-01

    The detection and high-resolution location of microearthquakes in a hot geothermal production reservoir can be a challenge, with high background noise levels and variable Q often encountered. Here we examine the utility of new advances in polarization filtering, denoising, event location and event classification for examining microearthquakes in the Rotokawa production reservoir, New Zealand. More than 700 (ML <2.5) events have been recorded in that reservoir since 2006, with events showing spatio-temporal changes highly correlated to flow changes in the re-injection wells. The approach used for accurately locating and characterising these microearthquakes has to take into account the high background noise levels, the complex sub-surface geology, the high, variable attenuation of near-surface volcanic deposits, and the high (> 300 degree C) temperatures encountered at shallow depths in Rotokawa.

  18. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  19. Distributed Pedestrian Detection Alerts Based on Data Fusion with Accurate Localization

    PubMed Central

    García, Fernando; Jiménez, Felipe; Anaya, José Javier; Armingol, José María; Naranjo, José Eugenio; de la Escalera, Arturo

    2013-01-01

    Among Advanced Driver Assistance Systems (ADAS) pedestrian detection is a common issue due to the vulnerability of pedestrians in the event of accidents. In the present work, a novel approach for pedestrian detection based on data fusion is presented. Data fusion helps to overcome the limitations inherent to each detection system (computer vision and laser scanner) and provides accurate and trustable tracking of any pedestrian movement. The application is complemented by an efficient communication protocol, able to alert vehicles in the surroundings by a fast and reliable communication. The combination of a powerful location, based on a GPS with inertial measurement, and accurate obstacle localization based on data fusion has allowed locating the detected pedestrians with high accuracy. Tests proved the viability of the detection system and the efficiency of the communication, even at long distances. By the use of the alert communication, dangerous situations such as occlusions or misdetections can be avoided. PMID:24008284

  1. Underwater hydrophone location survey

    NASA Technical Reports Server (NTRS)

    Cecil, Jack B.

    1993-01-01

    The Atlantic Undersea Test and Evaluation Center (AUTEC) is a U.S. Navy test range located on Andros Island, Bahamas, and a Division of the Naval Undersea Warfare Center (NUWC), Newport, RI. The Headquarters of AUTEC is located at a facility in West Palm Beach, FL. AUTEC's primary mission is to provide the U.S. Navy with a deep-water test and evaluation facility for making underwater acoustic measurements, testing and calibrating sonars, and providing accurate underwater, surface, and in-air tracking data on surface ships, submarines, aircraft, and weapon systems. Many of these programs are in support of Antisubmarine Warfare (ASW), undersea research and development programs, and Fleet assessment and operational readiness trials. Most tests conducted at AUTEC require precise underwater tracking (plus or minus 3 yards) of multiple acoustic signals emitted with the correct waveshape and repetition criteria from either a surface craft or underwater vehicle.

  2. Location of Microearthquakes in Various Noisy Environments Using Envelope Stacking

    NASA Astrophysics Data System (ADS)

    Oye, V.; Gharti, H.

    2009-12-01

    Monitoring of microearthquakes is routinely conducted in various environments such as hydrocarbon and geothermal reservoirs, mines, dams, seismically active faults, volcanoes, nuclear power plants and CO2 storages. In many of these cases the handled data is sensitive and the interpretation of the data may be vital. In some cases, such as during mining or hydraulic fracturing activities, the number of microearthquakes is very large with tens to thousands of events per hour. In others, almost no events occur during a week and furthermore, it might not be anticipated that many events occur at all. However, the general setup of seismic networks, including surface and downhole stations, is usually optimized to record as many microearthquakes as possible, thereby trying to lower the detection threshold of the network. This process is obviously limited to some extent. Most microearthquake location techniques take advantage of a combination of P- and S-wave onset times that often can be picked reliably in an automatic mode. Moreover, when using seismic wave onset times, sometimes in combination with seismic wave polarization, these methods are more accurate compared to migration-based location routines. However, many events cannot be located because their magnitude is too small, i.e. the P- and/or S-wave onset times cannot be picked accurately on a sufficient number of receivers. Nevertheless, these small events are important for the interpretation of the processes that are monitored and even an inferior estimate of event locations and strengths is valuable information. Moreover, the smaller the event the more often such events statistically occur and the more important such additional information becomes. In this study we try to enhance the performance of any microseismic network, providing additional estimates of event locations below the actual detection threshold. We present a migration-based event location method, where we project the recorded seismograms onto the ray
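
    A numpy-only sketch of migration-style envelope stacking (not the authors' implementation): compute each trace's amplitude envelope via the analytic signal, align the envelopes under a trial delay pattern, and stack. The traces, arrival delays and trial "locations" below are synthetic.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h, axis=-1))

rng = np.random.default_rng(3)
fs, n = 100.0, 1000
true_delays = np.array([120, 180, 240])            # arrival sample per station
traces = 0.3 * rng.normal(size=(3, n))             # background noise
for s, d in enumerate(true_delays):
    traces[s, d:d + 30] += 3.0 * np.sin(2.0 * np.pi * 10.0 * np.arange(30) / fs)

env = envelope(traces)

# Each trial source location is represented here by its predicted delay pattern.
candidates = [np.array([120, 180, 240]), np.array([200, 100, 300])]
stacks = []
for delays in candidates:
    aligned = np.array([np.roll(env[s], -int(d)) for s, d in enumerate(delays)])
    stacks.append(aligned.sum(axis=0)[:50].max())  # stacked-envelope peak
# The correct delay pattern yields by far the largest stacked peak.
```

Because envelopes are non-negative and phase-insensitive, they stack coherently even when individual onsets are too weak to pick, which is the appeal of this style of method below the picking threshold.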

  3. An information-theoretic approach to microseismic source location

    NASA Astrophysics Data System (ADS)

    Prange, Michael D.; Bose, Sandip; Kodio, Ousmane; Djikpesse, Hugues A.

    2015-04-01

    There has been extensive work on seismic source localization based on least-squares fitting of arrival times, going as far back as Geiger's 1912 paper. The primary advantage of time-based methods over waveform-based methods (e.g. reverse-time migration and beamforming) is that simulated arrival times are considerably more reliable than simulated waveforms, especially in the context of an uncertain velocity model, thereby yielding more reliable estimates of source location. However, time-based methods are bedeviled by the unsolved challenges of accurate time picking and labelling of the seismic phases in the waveforms for each event. Drawing from Woodward's canonical 1953 text on the application of information theory to radar, we show that time-based methods can be applied directly to waveform data, thus capturing the advantages of time-based methods without being impacted by the aforementioned hindrances. We extend Woodward's approach to include an unknown distortion of wavelet amplitude and phase, showing that the related marginalization integrals can be evaluated analytically. We also provide extensions for correlation-based location methods such as relative localization and the S-P method. We demonstrate this approach through applications to microseismic event location, presenting formulations and results for both absolute and relative localization approaches, with receiver arrays either in a borehole or on the surface. By properly quantifying uncertainty in our location estimates, our formulations provide an objective measure for ranking the accuracy of microseismic source location methodologies.
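
    The classical Geiger-style least-squares fit that such work takes as its point of departure can be sketched for a homogeneous 2-D medium: Gauss-Newton iteration on (x, y, t0) against observed arrival times. The geometry, velocity and starting model below are hypothetical.

```python
import numpy as np

v = 3.0                                        # assumed known velocity, km/s
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                     [10.0, 10.0], [5.0, -5.0]])
src_true = np.array([3.0, 4.0])                # hidden source position, km
t0_true = 1.5                                  # hidden origin time, s
t_obs = t0_true + np.linalg.norm(stations - src_true, axis=1) / v

m = np.array([5.0, 5.0, 0.0])                  # initial guess (x, y, t0)
for _ in range(20):
    dx = m[:2] - stations                      # station-to-trial-source vectors
    r = np.linalg.norm(dx, axis=1)
    res = t_obs - (m[2] + r / v)               # arrival-time residuals
    # Jacobian of predicted times with respect to (x, y, t0).
    J = np.column_stack([dx[:, 0] / (v * r), dx[:, 1] / (v * r), np.ones(len(r))])
    dm, *_ = np.linalg.lstsq(J, res, rcond=None)
    m = m + dm
# m converges to the true (3.0, 4.0, 1.5).
```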

  4. Pregnancy of unknown location.

    PubMed

    Schuneman, Margaret; Von Wald, Tiffany; Hansen, Keith

    2015-04-01

    The development of highly sensitive and accurate human chorionic gonadotropin assays as well as the improvement of vaginal ultrasound have allowed for the early detection of pregnancy and have reduced the morbidity and mortality associated with ectopic gestations. One of the byproducts of this increased sensitivity is pregnancy of unknown location (PUL), a term which is used to describe pregnancy in a woman with a positive pregnancy test but no signs of intrauterine or extrauterine pregnancy. A PUL can include an early intrauterine pregnancy, a failing intrauterine/extrauterine pregnancy or ectopic pregnancy. Modern medical management has improved the diagnosis and treatment of early pregnancy and pregnancy loss. In the hemodynamically stable patient with PUL, expectant management has been shown to be safe and allows for confirmatory studies before proceeding with therapy.

  5. Pan-information Location Map

    NASA Astrophysics Data System (ADS)

    Zhu, X. Y.; Guo, W.; Huang, L.; Hu, T.; Gao, W. X.

    2013-11-01

    A huge amount of information, including geographic, environmental, socio-economic, personal and social network information, has been generated from diverse sources. Most of this information exists separately and is disorderly even if some of it is about the same person, feature, phenomenon or event. Users generally need to collect related information from different sources and then utilize it in applications. An automatic mechanism for establishing a connection between potentially related information would therefore profoundly expand the usefulness of this huge body of information. A natural connection tie is semantic location, which semantically describes the concepts and attributes of locations as well as the relationships between locations, since 80% of information contains some kind of geographic reference but not all geographic references have explicit geographic coordinates. Semantic location is an orthogonal form of location representation that can be expressed as a domain ontology or in UML format. It associates various kinds of information about the same object to provide timely information services according to users' demands, habits, preferences and applications. Based on this idea, a Pan-Information Location Map (PILM) is proposed as a new-style 4D map that dynamically associates semantic location-based information to organize and consolidate the locality and characteristics of corresponding features and events, and delivers on-demand information with a User-Adaptive Smart Display (UASD).

  6. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies.
In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
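
    The "Einstein" timescale discussed above is the Einstein radius divided by the lens-source relative velocity. A minimal sketch, with an illustrative halo-lens mass, M31-like distances and a nominal velocity, none of which are taken from the proposal:

```python
import math

G, C = 6.674e-11, 2.998e8           # SI gravitational constant, speed of light
M_SUN, KPC = 1.989e30, 3.086e19     # kg per solar mass, metres per kiloparsec

def einstein_timescale_days(m_lens_msun, d_lens_kpc, d_src_kpc, v_rel_kms):
    """t_E = r_E / v_rel for a point lens; r_E is the Einstein radius."""
    d_l, d_s = d_lens_kpc * KPC, d_src_kpc * KPC
    r_e = math.sqrt(4 * G * m_lens_msun * M_SUN / C**2 * d_l * (d_s - d_l) / d_s)
    return r_e / (v_rel_kms * 1e3) / 86400.0

# a 0.5 solar-mass halo lens at 700 kpc in front of an M31 source at 770 kpc
t_e = einstein_timescale_days(0.5, 700.0, 770.0, 200.0)
```

    Because the Einstein radius scales as the square root of the lens mass, measuring t_E rather than an effective FWHM timescale is what lets the mass be inferred once the geometry and kinematics are constrained.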

  7. The onset location of neuromyelitis optica spectrum disorder predicts the location of subsequent relapses.

    PubMed

    Zandoná, Manuella Edler; Kim, Su-Hyun; Hyun, Jae-Won; Park, Boram; Joo, Jungnam; Kim, Ho Jin

    2014-12-01

    We evaluated whether the location of the initial attack predicted the locations of subsequent events in neuromyelitis optica spectrum disorder (NMOSD). In a retrospective analysis of 164 patients with NMOSD, increased odds of a second attack occurring in the initial event location were seen for all locations (odds ratio [OR] brain: 16.00; brainstem: 4.42; optic nerve: 4.08; and spinal cord: 4.59), as was a positive linear trend when evaluating the number of previous events in the same location as the third event location (OR brain: 62.52; brainstem: 44.55; optic nerve: 6.48; and spinal cord: 2.98). This study suggests that early clinical events of NMOSD tend to recur in the same anatomical location within the central nervous system (CNS).
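
    The odds ratios quoted come from 2x2 contingency tables; a minimal sketch with invented counts (not the study's data):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% Wald CI for a 2x2 table, e.g.
    a/b = second attack in same/other location given onset there,
    c/d = the same split for the comparison group (counts are hypothetical)."""
    orr = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    return orr, orr * math.exp(-1.96 * se_log), orr * math.exp(1.96 * se_log)

orr, lo, hi = odds_ratio(20, 5, 10, 15)
```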

  8. Regional Travel-Time Predictions, Uncertainty and Location Improvement

    SciTech Connect

    Flanagan, M; Myers, S

    2004-07-15

    We investigate our ability to improve regional travel-time prediction and seismic event location using an a priori three-dimensional (3D) velocity model of Western Eurasia and North Africa (WENA 1.0). Three principal results are presented. First, the 3D WENA 1.0 velocity model improves travel-time prediction over the IASP91 model, as measured by variance reduction, for regional phases recorded at 22 stations throughout the modeled region, including aseismic areas. Second, a distance-dependent uncertainty model is developed and tested for the WENA 1.0 model. Third, relocation using WENA 1.0 and the associated uncertainty model provides an end-to-end validation test. Model validation is based on a comparison of approximately 10,000 Pg, Pn, and P travel-time predictions and empirical observations from ground truth (GT) events. Ray coverage for the validation dataset provides representative regional-distance sampling across Eurasia and North Africa. The WENA 1.0 model markedly improves travel-time predictions for most stations, with an average variance reduction of 14% for all ray paths. We find that improvement is station dependent, with some stations benefiting greatly from WENA predictions (25% at OBN, and 16% at BKR), some stations showing moderate improvement (12% at ARU, and 17% at NIL), and some stations benefiting only slightly (7% at AAE, and 8% at TOL). We further test WENA 1.0 by relocating five calibration events. Again, relocation of these events is dependent on ray paths that evenly sample WENA 1.0 and therefore provides an unbiased assessment of location performance. These results highlight the importance of accurate GT datasets in assessing regional travel-time models and demonstrate that an a priori 3D model can markedly improve our ability to locate small-magnitude events in a regional monitoring context.
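
    The variance-reduction score used to compare the 3-D model against the reference 1-D model can be computed directly from the two residual populations; the residuals below are synthetic, not the study's data:

```python
import numpy as np

def variance_reduction_pct(resid_ref, resid_new):
    """Percent reduction in travel-time residual variance when replacing the
    reference model's predictions with the candidate model's."""
    return 100.0 * (1.0 - np.var(resid_new) / np.var(resid_ref))

# candidate model halves the residual spread -> 75% variance reduction
vr = variance_reduction_pct(np.array([2.0, -2.0, 2.0, -2.0]),
                            np.array([1.0, -1.0, 1.0, -1.0]))
```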

  9. Video event detection: from subvolume localization to spatiotemporal path search.

    PubMed

    Tran, Du; Yuan, Junsong; Forsyth, David

    2014-02-01

    Although sliding window-based approaches have been quite successful in detecting objects in images, it is not a trivial problem to extend them to detecting events in videos. We propose to search for spatiotemporal paths for video event detection. This new formulation can accurately detect and locate video events in cluttered and crowded scenes, and is robust to camera motions. It can also well handle the scale, shape, and intraclass variations of the event. Compared to event detection using spatiotemporal sliding windows, the spatiotemporal paths correspond to the event trajectories in the video space, thus can better handle events composed by moving objects. We prove that the proposed search algorithm can achieve the global optimal solution with the lowest complexity. Experiments are conducted on realistic video data sets with different event detection tasks, such as anomaly event detection, walking person detection, and running detection. Our proposed method is compatible with different types of video features or object detectors and robust to false and missed local detections. It significantly improves the overall detection and localization accuracy over the state-of-the-art methods. PMID:24356358
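
    The path-search formulation can be made concrete with a small dynamic program over a per-frame detection-score volume. Unlike the authors' search algorithm, which is optimal in complexity, this sketch simply scans a bounded motion window at each frame; the toy scores and one-pixel motion limit are illustrative assumptions.

```python
import numpy as np

def best_spatiotemporal_path(scores, max_step=1):
    """Max-score path through a (T, H, W) detection-score volume, moving at
    most max_step pixels between consecutive frames (simple DP sketch)."""
    T, H, W = scores.shape
    acc = scores[0].astype(float).copy()       # best score of a path ending at (y, x)
    back = np.zeros((T, H, W, 2), dtype=int)   # predecessor pointers
    for t in range(1, T):
        new = np.empty((H, W))
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - max_step), min(H, y + max_step + 1)
                x0, x1 = max(0, x - max_step), min(W, x + max_step + 1)
                win = acc[y0:y1, x0:x1]
                iy, ix = np.unravel_index(np.argmax(win), win.shape)
                new[y, x] = win[iy, ix] + scores[t, y, x]
                back[t, y, x] = (y0 + iy, x0 + ix)
        acc = new
    y, x = np.unravel_index(np.argmax(acc), acc.shape)
    path = [(int(y), int(x))]
    for t in range(T - 1, 0, -1):              # backtrack from the best end cell
        y, x = back[t, y, x]
        path.append((int(y), int(x)))
    return path[::-1], float(acc.max())

# toy volume: an "event" moving along the diagonal
scores = np.zeros((3, 3, 3))
for i in range(3):
    scores[i, i, i] = 5.0
path, total = best_spatiotemporal_path(scores)
```

    Backtracking the predecessor pointers recovers the event trajectory through the video space, which is what lets moving objects be localized more tightly than with spatiotemporal sliding windows.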

  10. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on per treatment basis when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing methods (ABC). Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
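
    The contrast between the ratio-of-cost-to-treatment method and activity-based costing is easy to make concrete; the activities, rates and treatments below are invented for illustration.

```python
def aggregate_cost(total_cost, n_treatments):
    """Ratio-of-cost-to-treatment method: every treatment carries the average."""
    return total_cost / n_treatments

def abc_cost(consumption, rates):
    """Activity-based costing: charge each treatment for what it actually consumed."""
    return sum(qty * rates[activity] for activity, qty in consumption.items())

# hypothetical cost drivers: nursing time, supplies, allocated overhead minutes
rates = {"nursing_hours": 40.0, "supplies_usd": 1.0, "overhead_min": 0.5}
simple_tx = {"nursing_hours": 0.5, "supplies_usd": 12.0, "overhead_min": 20}
complex_tx = {"nursing_hours": 2.0, "supplies_usd": 90.0, "overhead_min": 60}
total = abc_cost(simple_tx, rates) + abc_cost(complex_tx, rates)
```

    Here the aggregate method prices both treatments at 121, hiding a 42-versus-200 spread that an ABC-informed capitation bid would capture.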

  11. Integrating diverse calibration products to improve seismic location

    SciTech Connect

    Schultz, C; Myers, S; Swenson, J; Flanagan, M; Pasyanos, M; Bhattacharyya, J; Dodge, D

    2000-07-17

    The monitoring of nuclear explosions on a global basis requires accurate event locations. As an example, under the Comprehensive Test Ban Treaty, the size of an on-site inspection search area is 1,000 square kilometers, corresponding to a radius of approximately 17 km for a circular area. This level of accuracy is a significant challenge for small events that are recorded by a sparse regional network. In such cases, the travel time of seismic energy is strongly affected by crustal and upper mantle heterogeneity, and large biases can result. This can lead to large systematic errors in location and, more importantly, to invalid error bounds associated with location estimates. Path corrections can be developed and integrated to remove these biases; they take the form of three-dimensional model-based corrections along with three-dimensional empirically based travel-time corrections. LLNL is currently working to integrate a diverse set of three-dimensional velocity-model and empirically based travel-time products into one consistent and validated calibration set. To perform this task, we have developed a hybrid approach that uses three-dimensional model corrections for a region and then uses reference events, when available, to improve the path correction. This Bayesian kriging approach takes the best a priori three-dimensional velocity model produced for a local region as a baseline correction. When multiple models are produced for a local region, uncertainties in the models are compared against each other using ground truth data and an optimal model is chosen. We are in the process of combining three-dimensional models on a region-by-region basis and integrating the uncertainties to form a global correction set. The Bayesian kriging prediction combines this a priori model and its statistics with the empirical calibrations to give an optimal a posteriori calibration estimate.
In regions where there is limited or no coverage by reference events the
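
    At a single station-source pair, the Bayesian combination of an a priori model correction with an empirical calibration reduces to a precision-weighted average (full Bayesian kriging additionally spreads empirical corrections in space through a covariance model). The numbers below are illustrative:

```python
def combine_corrections(model_corr, model_var, emp_corr, emp_var):
    """Posterior travel-time correction from a prior (3-D model) estimate and
    an empirical (reference-event) estimate, each with its own variance."""
    precision = 1.0 / model_var + 1.0 / emp_var
    mean = (model_corr / model_var + emp_corr / emp_var) / precision
    return mean, 1.0 / precision

# prior model says +1.2 s (sigma 0.8 s); nearby reference events say +0.6 s (sigma 0.4 s)
post_mean, post_var = combine_corrections(1.2, 0.8**2, 0.6, 0.4**2)
```

    When no reference events are available along a path, emp_var is effectively infinite and the estimate falls back to the a priori model correction alone.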

  12. Downhole microseismic monitoring for low signal-to-noise ratio events

    NASA Astrophysics Data System (ADS)

    Zhou, Hang; Zhang, Wei; Zhang, Jie

    2016-10-01

    Microseismic monitoring plays an important role in the process of hydraulic fracturing for shale gas/oil production. The accuracy of event location is an essential issue in microseismic monitoring. The data obtained from a downhole monitoring system usually show a higher signal-to-noise ratio (SNR) than the recorded data from the surface. For small microseismic events, however, P waves recorded in a downhole array may be very weak, while S waves are generally dominant and strong. Numerical experiments suggest that inverting S-wave arrival times alone is not sufficient to constrain event locations. In this study, we perform extensive location tests with various noise effects using a grid search method that matches the travel time data of the S wave across a recording array. We conclude that fitting S-wave travel time data along with at least one P-wave travel time of the same event can significantly improve location accuracy. In practice, picking S-wave arrival time data and at least one P-wave pick is possible for many small events. We demonstrate that fitting the combination of the travel time data is a robust approach, which can help increase the number of microseismic events to be located accurately during hydraulic fracturing.

  13. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  14. Location, Location, Location: Development of Spatiotemporal Sequence Learning in Infancy

    ERIC Educational Resources Information Center

    Kirkham, Natasha Z.; Slemmer, Jonathan A.; Richardson, Daniel C.; Johnson, Scott P.

    2007-01-01

    We investigated infants' sensitivity to spatiotemporal structure. In Experiment 1, circles appeared in a statistically defined spatial pattern. At test, 11-month-olds, but not 8-month-olds, looked longer at a novel spatial sequence. Experiment 2 presented different color/shape stimuli, but only the location sequence was violated during test;…

  15. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.
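
    Treating the thermometer as a first-order inertia device, as the paper does for the simpler correction, amounts to adding tau times the time derivative of the indicated temperature. A synthetic immersion-in-boiling-water example with an assumed time constant:

```python
import numpy as np

def first_order_correction(t, temp_meas, tau):
    """Estimate fluid temperature from a first-order-inertia thermometer:
    T_fluid ~= T_meas + tau * dT_meas/dt (derivative by central differences)."""
    return temp_meas + tau * np.gradient(temp_meas, t)

# sensor at 20 degC plunged into 100 degC water; ideal first-order response, tau = 3 s
tau = 3.0
t = np.linspace(0.0, 10.0, 1001)
temp_meas = 100.0 - 80.0 * np.exp(-t / tau)
temp_est = first_order_correction(t, temp_meas, tau)
```

    For this idealized sensor the corrected interior samples sit at the true 100 degC long before the raw reading settles; the second-order model and the inverse space-marching method in the paper refine this for real probes.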

  16. Activating Event Knowledge

    PubMed Central

    Hare, Mary; Jones, Michael; Thomson, Caroline; Kelly, Sarah; McRae, Ken

    2009-01-01

    An increasing number of results in sentence and discourse processing demonstrate that comprehension relies on rich pragmatic knowledge about real-world events, and that incoming words incrementally activate such knowledge. If so, then even outside of any larger context, nouns should activate knowledge of the generalized events that they denote or typically play a role in. We used short stimulus onset asynchrony priming to demonstrate that (1) event nouns prime people (sale-shopper) and objects (trip-luggage) commonly found at those events; (2) location nouns prime people/animals (hospital-doctor) and objects (barn-hay) commonly found at those locations; and (3) instrument nouns prime things on which those instruments are commonly used (key-door), but not the types of people who tend to use them (hose-gardener). The priming effects are not due to normative word association. On our account, facilitation results from event knowledge relating primes and targets. This has much in common with computational models like LSA or BEAGLE in which one word primes another if they frequently occur in similar contexts. LSA predicts priming for all six experiments, whereas BEAGLE correctly predicted that priming should not occur for the instrument-people relation but should occur for the other five. We conclude that event-based relations are encoded in semantic memory and computed as part of word meaning, and have a strong influence on language comprehension. PMID:19298961

  17. Helicopter magnetic survey conducted to locate wells

    SciTech Connect

    Veloski, G.A.; Hammack, R.W.; Stamp, V.; Hall, R.; Colina, K.

    2008-07-01

    A helicopter magnetic survey was conducted in August 2007 over 15.6 sq mi at the Naval Petroleum Reserve No. 3’s (NPR-3) Teapot Dome Field near Casper, Wyoming. The survey’s purpose was to accurately locate wells drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood for EOR, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells missing from the well database and to provide accurate locations for all wells. The ability of the helicopter magnetic survey to accurately locate wells was assessed by comparing airborne well picks with well locations from an intensive ground search of a small test area.

  18. Improved integrated sniper location system

    NASA Astrophysics Data System (ADS)

    Figler, Burton D.; Spera, Timothy J.

    1999-01-01

    In July of 1995, Lockheed Martin IR Imaging Systems, of Lexington, Massachusetts, began the development of an integrated sniper location system (I-SLS) for the Defense Advanced Research Projects Agency and for the Department of the Navy's Naval Command Control & Ocean Surveillance Center, RDTE Division in San Diego, California. The I-SLS integrates acoustic and uncooled infrared sensing technologies to provide an affordable and highly effective sniper detection and location capability. This system, its performance and results from field tests at Camp Pendleton, California, in October 1996 were described in a paper presented at the November 1996 SPIE Photonics East Symposium on Enabling Technologies for Law Enforcement and Security. The I-SLS combines an acoustic warning system with an uncooled infrared warning system. The acoustic warning system has been developed by SenTech, Inc., of Lexington, Massachusetts. This acoustic warning system provides sniper detection and coarse location information based upon the muzzle blast of the sniper's weapon and/or upon the shock wave produced by the sniper's bullet, if the bullet is supersonic. The uncooled infrared warning system provides sniper detection and fine location information based upon the weapon's muzzle flash. In addition, the uncooled infrared warning system can provide thermal imagery that can be used to accurately locate and identify the sniper. Combining these two technologies improves detection probability, reduces false alarm rate and increases utility. In the two years since the last report on the integrated sniper location system, improvements have been made and a second field demonstration was planned. In this paper, we describe the integrated sniper location system modifications in preparation for the new field demonstration. In addition, fundamental improvements in the uncooled infrared sensor technology continue to be made. These improvements include higher sensitivity (lower minimum resolvable temperature

  19. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  20. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
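
    Turning an AFM step height into a layer count requires removing the graphene-substrate adsorbate offset discussed above; the offset and interlayer spacing in this sketch are typical literature values, not numbers from this paper.

```python
def layer_count(step_height_nm, adsorbate_offset_nm=0.5, interlayer_nm=0.335):
    """Estimate graphene layer number from an AFM step height, assuming
    measured height = adsorbate offset + n * interlayer spacing (illustrative)."""
    return max(1, round((step_height_nm - adsorbate_offset_nm) / interlayer_nm))

n1 = layer_count(0.85)   # a single layer that measures high because of adsorbates
n2 = layer_count(1.17)   # a bilayer under the same assumptions
```

    With these assumed values a measured 0.85 nm step still reads as a single layer; the point of a careful imaging protocol is to shrink the height error enough that this rounding is reliable.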

  1. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  2. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization that should lead to high-accuracy horizon sensing minimally degraded by spatial or temporal variations in sensing attitude from a satellite orbiting the earth at altitudes around 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported for laboratory conditions in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  3. Sudden Event Recognition: A Survey

    PubMed Central

    Suriani, Nor Surayahani; Hussain, Aini; Zulkifley, Mohd Asyraf

    2013-01-01

    Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition. PMID:23921828

  4. Locating Continuing Education Programs.

    ERIC Educational Resources Information Center

    Mason, Robert C.

    1986-01-01

    Emphasizes program location as an important component of the marketing plan for continuing education. Also discusses relations among program location and quality, costs, supportive services, and economies of scale. (CH)

  5. Accurate Cross Sections for Microanalysis

    PubMed Central

    Rez, Peter

    2002-01-01

    To calculate the intensity of x-ray emission in electron beam microanalysis requires a knowledge of the energy distribution of the electrons in the solid, the energy variation of the ionization cross section of the relevant subshell, the fraction of ionization events producing the x rays of interest and the absorption coefficient of the x rays on the path to the detector. The theoretical predictions and experimental data available for ionization cross sections are limited mainly to K shells of a few elements. Results of systematic plane-wave Born approximation calculations with exchange for K, L, and M shell ionization cross sections over the range of electron energies used in microanalysis are presented. Comparisons are made with experimental measurements for selected K shells, and it is shown that the plane-wave theory is not appropriate for overvoltages less than 2.5. PMID:27446747

  6. Locating Acoustic Events Based on Large-Scale Sensor Networks

    PubMed Central

    Kim, Yungeun; Ahn, Junho; Cha, Hojung

    2009-01-01

    Research on acoustic source localization is actively being conducted to enhance accuracy and coverage. However, the performance is inherently limited due to the use of expensive sensor nodes and inefficient communication methods. This paper proposes an acoustic source localization algorithm for a large area that uses low-cost sensor nodes. The proposed mechanism efficiently handles multiple acoustic sources by removing false-positive errors that arise from the different propagation ranges of radio and sound. Extensive outdoor experiments with real hardware validated that the proposed mechanism could localize four acoustic sources within a 3 m error in a 60 m by 60 m area, where conventional systems could hardly achieve similar performance. PMID:22303155
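
    Acoustic source localization from arrival-time differences can be sketched with the classic linearized (spherical-interpolation style) least squares. The 60 m by 60 m sensor layout below mirrors the experiment's scale but is otherwise invented.

```python
import numpy as np

def tdoa_locate(sensors, delays, c=343.0):
    """Least-squares source position from time differences of arrival.
    sensors: (N, 2) positions in m; delays: (N-1,) arrival delays vs sensor 0 in s.
    Unknowns are (x, y, r0), r0 being the source range to sensor 0."""
    s0, si = sensors[0], sensors[1:]
    d = c * np.asarray(delays)                 # range differences (m)
    A = np.column_stack([2.0 * (si - s0), 2.0 * d])
    b = np.sum(si**2, axis=1) - np.sum(s0**2) - d**2
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m[:2]

sensors = np.array([[0., 0.], [60., 0.], [0., 60.], [60., 60.], [30., 10.]])
true_src = np.array([20., 30.])
ranges = np.linalg.norm(sensors - true_src, axis=1)
delays = (ranges[1:] - ranges[0]) / 343.0
est = tdoa_locate(sensors, delays)
```

    With exact delays the linear system reproduces the source position; the paper's contribution is making this robust on low-cost nodes and rejecting the false positives caused by the different propagation ranges of radio and sound.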

  7. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.

  8. Location, Location, Location: Where Do Location-Based Services Fit into Your Institution's Social Media Mix?

    ERIC Educational Resources Information Center

    Nekritz, Tim

    2011-01-01

    Foursquare is a location-based social networking service that allows users to share their location with friends. Some college administrators have been thinking about whether and how to take the leap into location-based services, which are also known as geosocial networking services. These platforms, which often incorporate gaming elements like…

  9. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters away from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. Large mislocations of seismic events are particularly critical in the context of CTBT verification, where they affect the decision to trigger a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot exceed 1000 km2 and that its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that epicentral mislocations of the order of 10-20 km, as well as larger mislocations in hypocentral depth, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
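
    The core idea can be sketched in Python under simple assumptions: hypothetical travel-time picks, and a plain mean over residuals rather than the paper's full procedure. Well-located reference events reveal a systematic per-station residual that can be subtracted before relocating new events:

```python
import numpy as np

def station_corrections(observed, predicted):
    """Per-station travel-time corrections from a set of well-located
    reference events: the mean residual (observed minus predicted) at
    each station.  Rows are events, columns are stations; NaN marks a
    phase that was not picked."""
    residuals = observed - predicted
    return np.nanmean(residuals, axis=0)

# Hypothetical picks for 3 reference events at 2 stations (seconds).
obs = np.array([[61.2, 75.9],
                [58.7, 74.1],
                [np.nan, 76.3]])
pred = np.array([[60.0, 75.0],
                 [57.5, 73.0],
                 [59.0, 75.2]])

corr = station_corrections(obs, pred)
# Subtracting corr from future residuals removes the systematic
# path effect before relocating new events.
```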

  10. Seismic Monitoring of Ice Generated Events at the Bering Glacier

    NASA Astrophysics Data System (ADS)

    Fitzgerald, K.; Richardson, J.; Pennington, W.

    2008-12-01

    The Bering Glacier, located in southeast Alaska, is the largest glacier in North America with a surface area of approximately 5,175 square kilometers. It extends from its source in the Bagley Icefield to its terminus in tidal Vitus Lake, which drains into the Gulf of Alaska. It is known that the glacier progresses downhill through the mechanisms of plastic crystal deformation and basal sliding. However, the basal processes which take place tens to hundreds of meters below the surface are not well understood, except through the study of sub-glacial landforms and passive seismology. Additionally, the sub-glacial processes enabling the surges, which occur approximately every two decades, are poorly understood. Two summer field campaigns in 2007 and 2008 were designed to investigate this process near the terminus of the glacier. During the summer of 2007, a field experiment at the Bering Glacier was conducted using a sparse array of L-22 short period sensors to monitor ice-related events. The array was in place for slightly over a week in August and consisted of five stations centered about the final turn of the glacier west of the Grindle Hills. Many events were observed, but due to the large distance between stations and the highly attenuating surface ice, few events were large enough to be recorded on sufficient stations to be accurately located and described. During August 2008, six stations were deployed for a similar length of time, but with a closer spacing. With this improved array, events were located and described more accurately, leading to additional conclusions about the surface, interior, and sub-glacial ice processes producing seismic signals. While the glacier was not surging during the experiment, this study may provide information on the non-surging, sub-glacial base level activity. It is generally expected that another surge will take place within a few years, and baseline studies such as this may assist in understanding the nature of surges.

  11. Accurately measuring MPI broadcasts in a computational grid

    SciTech Connect

    Karonis N T; de Supinski, B R

    1999-05-06

    timing of events and, thus, eliminate concurrency between the collective communications that they measure. However, accurate event timing predictions are often impossible since network delays and local processing overheads are stochastic. Further, reasonable predictions are not possible if source code of the implementation is unavailable to the benchmark. We focus on measuring the performance of broadcast communication.

  12. Impact location estimation in anisotropic structures

    NASA Astrophysics Data System (ADS)

    Zhou, Jingru; Mathews, V. John; Adams, Daniel O.

    2015-03-01

    Impacts are major causes of in-service damage in aerospace structures. Therefore, impact location estimation techniques are necessary components of Structural Health Monitoring (SHM). In this paper, we consider impact location estimation in anisotropic composite structures using acoustic emission signals arriving at a passive sensor array attached to the structure. Unlike many published location estimation algorithms, the algorithm presented in this paper does not require the waveform velocity profile for the structure. Rather, the method employs time-of-arrival information to jointly estimate the impact location and the average signal transmission velocities from the impact to each sensor on the structure. The impact location and velocities are estimated as the solution of a nonlinear optimization problem with multiple quadratic constraints. The optimization problem is solved by using first-order optimality conditions. Numerical simulations as well as experimental results demonstrate the ability of the algorithm to accurately estimate the impact location using acoustic emission signals.
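
    A minimal numerical sketch of time-of-arrival location, simplified from the paper's formulation: a single known wave speed replaces the jointly estimated per-sensor velocities, and a plain Gauss-Newton iteration replaces the constrained optimization. All sensor positions and values are hypothetical:

```python
import numpy as np

def locate_impact(sensors, arrivals, v, iters=50):
    """Estimate impact position (x, y) and origin time t0 from
    arrival times at a sensor array, assuming one known wave speed v.
    Plain Gauss-Newton on the residuals r_i = t_i - (t0 + d_i / v)."""
    p = np.append(sensors.mean(axis=0), 0.0)  # start at the centroid
    for _ in range(iters):
        diff = sensors - p[:2]
        dist = np.linalg.norm(diff, axis=1)
        r = arrivals - (p[2] + dist / v)
        # Jacobian of the predicted arrivals w.r.t. (x, y, t0).
        J = np.column_stack([-diff[:, 0] / (dist * v),
                             -diff[:, 1] / (dist * v),
                             np.ones_like(dist)])
        p += np.linalg.lstsq(J, r, rcond=None)[0]
    return p

# Hypothetical 4-sensor layout (meters) with noise-free synthetic
# arrivals; v is a plausible average plate-wave speed.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_xy = np.array([0.3, 0.7])
v, t0 = 1500.0, 0.01  # m/s, seconds
arrivals = t0 + np.linalg.norm(sensors - true_xy, axis=1) / v
x, y, t_est = locate_impact(sensors, arrivals, v)
```

With noise-free data the iteration recovers the simulated impact point; real acoustic emission picks would of course carry noise, which is where the paper's joint velocity estimation matters.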

  13. Sensor-Generated Time Series Events: A Definition Language

    PubMed Central

    Anguera, Aurea; Lara, Juan A.; Lizcano, David; Martínez, Maria Aurora; Pazos, Juan

    2012-01-01

    There are now a great many domains where information is recorded by sensors over a limited time period or on a permanent basis. This data flow leads to sequences of data known as time series. In many domains, like seismography or medicine, time series analysis focuses on particular regions of interest, known as events, whereas the remainder of the time series contains hardly any useful information. In these domains, there is a need for mechanisms to identify and locate such events. In this paper, we propose an events definition language that is general enough to be used to easily and naturally define events in time series recorded by sensors in any domain. The proposed language has been applied to the definition of time series events generated within the branch of medicine dealing with balance-related functions in human beings. A device, called posturograph, is used to study balance-related functions. The platform has four sensors that record the pressure intensity being exerted on the platform, generating four interrelated time series. As opposed to the existing ad hoc proposals, the results confirm that the proposed language is valid, that is, generally applicable and accurate, for identifying the events contained in the time series.

  14. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, indeed have made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be visualized as a comprehensive “model” of the life of cells, tissues, and organisms, without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptomes can be carried out using high-throughput cDNA microarray analysis,15-17 however the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  15. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to

  16. Transformational Events

    ERIC Educational Resources Information Center

    Denning, Peter J.; Hiles, John E.

    2006-01-01

    Transformational Events is a new pedagogic pattern that explains how innovations (and other transformations) happened. The pattern is three temporal stages: an interval of increasingly unsatisfactory ad hoc solutions to a persistent problem (the "mess"), an offer of an invention or of a new way of thinking, and a period of widespread adoption and…

  17. Maintenance Event

    Atmospheric Science Data Center

    2014-07-22

    Time:  08:00 am - 08:30 am EDT Event Impact:  Science Directorate websites will ... outage Thursday morning, 7/24, to perform upgrades to the web environment and are expected to be down for about 30 minutes. ...

  18. Reversible micromachining locator

    DOEpatents

    Salzer, Leander J.; Foreman, Larry R.

    1999-01-01

    This invention provides a device which includes a locator, a kinematic mount positioned on a conventional tooling machine, a part carrier disposed on the locator and a retainer ring. The locator has disposed therein a plurality of steel balls, placed in an equidistant position circumferentially around the locator. The kinematic mount includes a plurality of magnets which are in registry with the steel balls on the locator. In operation, a blank part to be machined is placed between a surface of a locator and the retainer ring (fitting within the part carrier). When the locator (with a blank part to be machined) is coupled to the kinematic mount, the part is thus exposed for the desired machining process. Because the locator is removably attachable to the kinematic mount, it can easily be removed from the mount, reversed, and reinserted onto the mount for additional machining. Further, the locator can likewise be removed from the mount and placed onto another tooling machine having a properly aligned kinematic mount. Because of the unique design and use of magnetic forces of the present invention, positioning errors of less than 0.25 micrometer for each machining process can be achieved.

  19. Reversible micromachining locator

    DOEpatents

    Salzer, L.J.; Foreman, L.R.

    1999-08-31

    This invention provides a device which includes a locator, a kinematic mount positioned on a conventional tooling machine, a part carrier disposed on the locator and a retainer ring. The locator has disposed therein a plurality of steel balls, placed in an equidistant position circumferentially around the locator. The kinematic mount includes a plurality of magnets which are in registry with the steel balls on the locator. In operation, a blank part to be machined is placed between a surface of a locator and the retainer ring (fitting within the part carrier). When the locator (with a blank part to be machined) is coupled to the kinematic mount, the part is thus exposed for the desired machining process. Because the locator is removably attachable to the kinematic mount, it can easily be removed from the mount, reversed, and reinserted onto the mount for additional machining. Further, the locator can likewise be removed from the mount and placed onto another tooling machine having a properly aligned kinematic mount. Because of the unique design and use of magnetic forces of the present invention, positioning errors of less than 0.25 micrometer for each machining process can be achieved. 7 figs.

  20. Variant of more accurate determination of the locations of shortwave radio emission sources

    NASA Astrophysics Data System (ADS)

    Ivanov, V. F.; Myslivtsev, T. O.; Troitskii, B. V.

    2013-04-01

    The paper discusses how the trajectory calculation method can be used to determine the locations of shortwave (SW) emission sources. The dependence of the electron concentration on the coordinates is specified using the SPIM model; it is corrected using the ionospheric solar activity index, which is specified with the help of maps of total electron content. We suggest a variant of how a regional map of the total electron content can be plotted from measurements of signals from GLONASS/GPS navigation systems. It is shown that the trajectory calculation method, coupled with an adjustable ionospheric model, allows for a more exact determination of the locations of SW radio emission sources.

  1. Semantic Location Extraction from Crowdsourced Data

    NASA Astrophysics Data System (ADS)

    Koswatte, S.; Mcdougall, K.; Liu, X.

    2016-06-01

    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for this high interest. At the same time, quality issues like incompleteness, credibility and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. Also, this recorded location is mostly related to the mobile device or user location and often does not represent the event location. In CSD, event location is discussed descriptively in the comments in addition to the recorded location (which is generated by means of the mobile device's GPS or the mobile communication network). This study attempts to semantically extract the CSD location information with the help of an ontological gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract the location information with the support of the Queensland Gazetteer, which was converted to an ontological gazetteer, and a global gazetteer. Some preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification of CSD and the process of location information extraction.
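
    As a much simpler stand-in for the ontology-based matching described above, plain gazetteer lookup over token n-grams can be sketched as follows; the place names and tweet text are invented for illustration:

```python
def extract_locations(text, gazetteer):
    """Naive place-name spotting against a gazetteer (a simple
    stand-in for the paper's ontological semantic matching)."""
    tokens = text.lower().split()
    found = set()
    for n in (2, 1):  # check two-word names before one-word names
        for i in range(len(tokens) - n + 1):
            name = " ".join(tokens[i:i + n]).strip(".,!?")
            if name in gazetteer:
                found.add(name)
    return found

# Hypothetical gazetteer entries and tweet text.
gaz = {"brisbane", "toowoomba", "lockyer valley"}
tweet = "Flooding reported in Lockyer Valley, roads cut near Toowoomba"
places = extract_locations(tweet, gaz)
```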

  2. System and Method of Locating Lightning Strikes

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Starr, Stanley O. (Inventor)

    2002-01-01

    A system and method of determining locations of lightning strikes has been described. The system includes multiple receivers located around an area of interest, such as a space center or airport. Each receiver monitors both sound and electric fields. The detection of an electric field pulse and a sound wave are used to calculate an area around each receiver in which the lightning is detected. A processor is coupled to the receivers to accurately determine the location of the lightning strike. The processor can manipulate the receiver data to compensate for environmental variables such as wind, temperature, and humidity. Further, each receiver processor can discriminate between distant and local lightning strikes.
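
    The ranging step described above, in which the electric field pulse arrives effectively instantaneously while the thunder travels at the speed of sound, can be sketched as follows. The temperature correction here is the standard dry-air formula, not necessarily the patent's exact compensation scheme:

```python
def strike_range(dt_seconds, temp_c=20.0):
    """Range to a lightning strike from the lag between the electric
    field pulse (effectively instantaneous) and the thunder arrival,
    using a temperature-corrected dry-air speed of sound."""
    v_sound = 331.3 * (1 + temp_c / 273.15) ** 0.5  # m/s
    return v_sound * dt_seconds

# A 3 s lag at 20 degrees C puts the strike about 1.03 km away.
r = strike_range(3.0)
```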

  3. Assessment of User Home Location Geoinference Methods

    SciTech Connect

    Harrison, Joshua J.; Bell, Eric B.; Corley, Courtney D.; Dowling, Chase P.; Cowell, Andrew J.

    2015-05-29

    This study presents an assessment of multiple approaches to determine the home and/or other important locations to a Twitter user. In this study, we present a unique approach to the problem of geotagged data sparsity in social media when performing geoinferencing tasks. Given the sparsity of explicitly geotagged Twitter data, the ability to perform accurate and reliable user geolocation from a limited number of geotagged posts has proven to be quite useful. In our survey, we have achieved accuracy rates of over 86% in matching Twitter user profile locations with their inferred home locations derived from geotagged posts.

  4. Object locating system

    DOEpatents

    Novak, James L.; Petterson, Ben

    1998-06-09

    A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions.

  5. Reversible micromachining locator

    DOEpatents

    Salzer, Leander J.; Foreman, Larry R.

    2002-01-01

    A locator with a part support is used to hold a part onto the kinematic mount of a tooling machine so that the part can be held in or replaced in exactly the same position relative to the cutting tool for machining different surfaces of the part or for performing different machining operations on the same or different surfaces of the part. The locator has disposed therein a plurality of steel balls placed at equidistant positions around the planar surface of the locator and the kinematic mount has a plurality of magnets which alternate with grooves which accommodate the portions of the steel balls projecting from the locator. The part support holds the part to be machined securely in place in the locator. The locator can be easily detached from the kinematic mount, turned over, and replaced onto the same kinematic mount or onto another kinematic mount on another tooling machine without removing the part to be machined from the locator. There is therefore no need to touch or reposition the part within the locator, which assures exact replication of the position of the part in relation to the cutting tool on the tooling machine for each machining operation on the part.

  6. Reconstructing Spatial Distributions from Anonymized Locations

    SciTech Connect

    Horey, James L; Forrest, Stephanie; Groat, Michael

    2012-01-01

    Devices such as mobile phones, tablets, and sensors are often equipped with GPS that accurately report a person's location. Combined with wireless communication, these devices enable a wide range of new social tools and applications. These same qualities, however, leave location-aware applications vulnerable to privacy violations. This paper introduces the Negative Quad Tree, a privacy protection method for location aware applications. The method is broadly applicable to applications that use spatial density information, such as social applications that measure the popularity of social venues. The method employs a simple anonymization algorithm running on mobile devices, and a more complex reconstruction algorithm on a central server. This strategy is well suited to low-powered mobile devices. The paper analyzes the accuracy of the reconstruction method in a variety of simulated and real-world settings and demonstrates that the method is accurate enough to be used in many real-world scenarios.
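
    The abstract does not give the Negative Quad Tree's internal details, but the kind of spatial density summary such a method reconstructs can be sketched generically; the cell size and point coordinates below are arbitrary:

```python
from collections import Counter

def density_grid(points, cell):
    """Aggregate point locations into coarse grid cells, a generic
    density summary (not the paper's Negative Quad Tree, whose
    anonymization algorithm is not described in the abstract)."""
    return Counter((int(x // cell), int(y // cell)) for x, y in points)

# Three hypothetical device locations; two fall in the same cell.
pts = [(1.0, 1.0), (1.5, 1.2), (9.0, 9.0)]
grid = density_grid(pts, cell=2.0)
```

A server-side reconstruction would work from many such coarse summaries; the privacy protection comes from never transmitting exact coordinates.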

  7. Lightning Location Using Electric Field Change Meters

    NASA Astrophysics Data System (ADS)

    Bitzer, P. M.; Christian, H.; Burchfield, J.

    2010-12-01

    Briefly introduced last year, the Huntsville Alabama Field Change Array (HAFCA) is a collection of electric field change meters deployed in and around Huntsville. Armed with accurate GPS timing, the array is able to sample electric field changes due to lightning strokes simultaneously at several locations. For the first time, different components of the lightning flash can be located in three dimensions using only electric field change records. In particular, this research will show spacetime locations throughout entire lightning strokes, from preliminary breakdown pulses to the return stroke and later processes that may be related to charge neutralization. To find the spacetime locations, standard time of arrival methods will be used: finding the parameters that best fit the model using the Marquardt method. However, we will also discuss using Markov Chain Monte Carlo methods which yield a better estimation of errors. With this information, we will discuss selected cases from the array to date. In particular, we will discuss the inter-comparison of HAFCA with two other well known lightning location arrays, NLDN and NALMA. Specifically, we will explore the relationship between the first LMA pulse in a lightning stroke and the locations of preliminary breakdown pulses and the implications on lightning initiation. Further, the return stroke locations will be shown to agree reasonably well with NLDN locations. We will also locate compact intracloud discharges (CIDs) and compare with NLDN locations.

  8. Sensors Locate Radio Interference

    NASA Technical Reports Server (NTRS)

    2009-01-01

    After receiving a NASA Small Business Innovation Research (SBIR) contract from Kennedy Space Center, Soneticom Inc., based in West Melbourne, Florida, created algorithms for time difference of arrival and radio interferometry, which it used in its Lynx Location System (LLS) to locate electromagnetic interference that can disrupt radio communications. Soneticom is collaborating with the Federal Aviation Administration (FAA) to install and test the LLS at its field test center in New Jersey in preparation for deploying the LLS at commercial airports. The software collects data from each sensor in order to compute the location of the interfering emitter.

  9. Tectonic events in Greenland

    NASA Astrophysics Data System (ADS)

    Dahl-Jensen, T.; Voss, P.; Larsen, T.; Pinna, L.

    2012-12-01

    In Greenland, a station separation of around 400 km means that many earthquakes are detected on only one or two stations. Seismic monitoring has grown from only three seismic stations in Greenland up to the late 1990s to 18 permanent stations today. All stations are equipped with broadband sensors, and all of the permanent stations transmit data in real time. The most recent major improvement to seismic monitoring comes from the Greenland ice sheet monitoring network (GLISN, http://glisn.info). The primary goal of GLISN is to provide broadband seismic data for the detection of glacial earthquakes. GLISN is now fully implemented, with Iridium real-time data transfer in operation at five stations. In the Ammassalik region in Southeast Greenland, where small earthquakes are often felt, data from a temporary additional station have been utilized for a study covering 9 months in 2008/9. In this period, 62 local earthquakes were analyzed and re-located. Some of the events had formerly been located from distant stations using a universal earth model, which produced a scattered distribution of events in the region. The locations have now been improved by using a local earth model along with phase readings from two local stations not previously included: ANG in Tasiilaq and ISOG in Isortoq. Relocating the events reveals two zones with a higher degree of seismicity than the rest of the region. The first zone is located near felsic intrusions. The second zone is at the boundary between the Archaean Craton and the Ammassalik region, where reworked Archaean gneisses dominate the geology. During the analysis it was observed that the additional information from the local stations is of great importance for the result.

  10. Video Event Detection: From Subvolume Localization To Spatio-Temporal Path Search.

    PubMed

    Tran, Du; Yuan, Junsong; Forsyth, David

    2013-07-23

    Although sliding window-based approaches have been quite successful in detecting objects in images, it is not a trivial problem to extend them to detecting events in videos. We propose to search for spatio-temporal paths for video event detection. This new formulation can accurately detect and locate video events in cluttered and crowded scenes, and is robust to camera motions. It can also handle well the scale, shape, and intra-class variations of the event. Compared to event detection using spatio-temporal sliding windows, the spatio-temporal paths correspond to the event trajectories in the video space, and thus can better handle events composed of moving objects. We prove that the proposed search algorithm can achieve the global optimal solution with the lowest complexity. Experiments are conducted on realistic video datasets with different event detection tasks, such as anomaly event detection, walking person detection, and running detection. Our proposed method is compatible with different types of video features or object detectors and robust to false and missed local detections. It significantly improves the overall detection and localization accuracy over the state-of-the-art methods. PMID:23898011

  11. Object locating system

    DOEpatents

    Novak, J.L.; Petterson, B.

    1998-06-09

    A sensing system locates an object by sensing the object`s effect on electric fields. The object`s effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions. 12 figs.

  12. Lunar Impact Flash Locations

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Suggs, R. M.; Kupferschmidt, L.; Feldman, J.

    2015-01-01

    A bright impact flash detected by the NASA Lunar Impact Monitoring Program in March 2013 brought into focus the importance of determining the impact flash location. A process for locating the impact flash, and presumably its associated crater, was developed using commercially available software tools. The process was successfully applied to the March 2013 impact flash and put into production on an additional 300 impact flashes. The goal today: provide a description of the geolocation technique developed.

  13. Automatic Event Bulletin Built by Waveform Cross Correlation Using the Global Grid of Master Events with Adjustable Templates

    NASA Astrophysics Data System (ADS)

    Rozhkov, M.; Bobrov, D.; Kitov, I.

    2015-12-01

    We built an automatic seismic event bulletin for the whole globe using waveform cross correlation at array stations of the International Monitoring System (IMS). To detect signals and associate them into robust event hypotheses in an automatic pipeline we created a global grid (GG) of master events with a diversity of waveform templates. For the Comprehensive Nuclear-Test-Ban Treaty (CTBT), the GG provides an almost uniform distribution of monitoring capabilities and adjustable templates. For seismic areas, we select high quality signals at IMS stations from earthquakes. For test sites, signals from underground nuclear explosions (UNEs) are the best templates. Global detection and association with the cross-correlation technique for research and monitoring purposes demand templates from master events outside the regions of natural seismicity and test sites. We populate aseismic areas with masters having synthetic templates calculated for predefined sets of IMS array stations. We applied various technologies to synthesize most representative signals for cross correlation and tested them using the Reviewed Event Bulletin (REB) issued by the International Data Centre (IDC). At first, we tested these global sets of master events and synthetic templates using IMS seismic data for February 13, 2013 and demonstrated excellent detection and location capability. Then, using the REB and cross correlation bulletins (XSELs), experienced analysts from the IDC compared the relative performance of various templates and built reliable sets of events and detections for machine learning. In this study, we carefully compile global training sets for machine learning in order to establish statistical decision lines between reliable and unreliable event hypotheses, then apply classification procedures to the intermediate automatic cross correlation bulletin based on the GG, and compile the final XSEL, which is more accurate and has a lower detection threshold than the REB.
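
    A minimal sketch of the detection step underlying such a pipeline: sliding normalized cross-correlation of a master-event template against a continuous trace. This is generic matched filtering with synthetic data, not the IDC implementation:

```python
import numpy as np

def xcorr_detect(trace, template, threshold=0.8):
    """Slide a master-event template along a continuous trace and
    return (offset, cc) pairs where the normalized cross-correlation
    (Pearson correlation of the window with the template) exceeds
    the threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        s = w.std()
        if s == 0:
            continue  # flat window, correlation undefined
        cc = np.dot(t, (w - w.mean()) / s)
        if cc >= threshold:
            hits.append((i, cc))
    return hits

# Synthetic trace: noise with the template buried at offset 200.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 100))
trace = 0.1 * rng.standard_normal(1000)
trace[200:300] += template
detections = xcorr_detect(trace, template, threshold=0.8)
```

A production pipeline would additionally associate detections across stations into event hypotheses; this sketch covers only single-channel detection.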

  14. Automatic Event Bulletin Built By Waveform Cross Correlation Using The Global Grid Of Master Events With Adjustable Templates

    NASA Astrophysics Data System (ADS)

    Kitov, Ivan; Bobrov, Dmitry; Rozhkov, Mikhail

    2016-04-01

    We built an automatic seismic event bulletin for the whole globe using waveform cross correlation at array stations of the International Monitoring System (IMS). To detect signals and associate them into robust event hypotheses in an automatic pipeline we created a global grid (GG) of master events with a diversity of waveform templates. For the Comprehensive Nuclear-Test-Ban Treaty (CTBT), the GG provides an almost uniform distribution of monitoring capabilities and adjustable templates. For seismic areas, we select high quality signals at IMS stations from earthquakes. For test sites, signals from underground nuclear explosions (UNEs) are the best templates. Global detection and association with the cross-correlation technique for research and monitoring purposes demand templates from master events outside the regions of natural seismicity and test sites. We populate aseismic areas with masters having synthetic templates calculated for predefined sets of IMS array stations. We applied various technologies to synthesize most representative signals for cross correlation and tested them using the Reviewed Event Bulletin (REB) issued by the International Data Centre (IDC). At first, we tested these global sets of master events and synthetic templates using IMS seismic data for February 13, 2013 and demonstrated excellent detection and location capability. Then, using the REB and cross correlation bulletins (XSELs), experienced analysts from the IDC compared the relative performance of various templates and built reliable sets of events and detections for machine learning. In this study, we carefully compile global training sets for machine learning in order to establish statistical decision lines between reliable and unreliable event hypotheses, then apply classification procedures to the intermediate automatic cross correlation bulletin based on the GG, and compile the final XSEL, which is more accurate and has a lower detection threshold than the REB.

  15. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, São Paulo, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was chosen so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparison of events between the cameras and the LLS. Each RAMMER sensor consists of a computer, a Phantom version 9.1 high-speed camera, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result of the visual triangulation method. Lightning return-stroke positions estimated with the visual triangulation method were then compared with LLS locations. Differences between solutions were not greater than 1.8 km.
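The geometry behind such visual triangulation is the intersection of two bearing lines from cameras at known positions. A minimal 2-D sketch, under the assumption (for illustration only) that azimuths are measured counterclockwise from the +x axis in a common plane frame:

```python
import numpy as np

def triangulate(p1, az1, p2, az2):
    """Intersect two bearing lines: camera at p1 sees the target at
    angle az1 (radians, CCW from +x), camera at p2 at angle az2.
    Returns the intersection point."""
    d1 = np.array([np.cos(az1), np.sin(az1)])
    d2 = np.array([np.cos(az2), np.sin(az2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the line parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

With more than two cameras, each pairwise intersection gives an estimate and the spread indicates the location uncertainty.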

  16. Geostar - Navigation location system

    NASA Astrophysics Data System (ADS)

    Keyser, Donald A.

    The author describes the Radiodetermination Satellite Service (RDSS). The initial phase of the RDSS provides for a unique service enabling central offices and headquarters to obtain position-location information and receive short digital messages from mobile user terminals throughout the contiguous United States, southern Canada, and northern Mexico. The system employs a spread-spectrum, CDMA modulation technique allowing multiple customers to use the system simultaneously, without preassigned coordination with fellow users. Position location is currently determined by employing an existing radio determination receiver, such as Loran-C, GPS, or Transit, in the mobile user terminal. In the early 1990s position location will be determined at a central earth station by time-differential ranging of the user terminals via two or more geostationary satellites. A brief overview of the RDSS system architecture is presented with emphasis on the user terminal and its diverse applications.

  17. Accurate and Reliable Gait Cycle Detection in Parkinson's Disease.

    PubMed

    Hundza, Sandra R; Hook, William R; Harris, Christopher R; Mahajan, Sunny V; Leslie, Paul A; Spani, Carl A; Spalteholz, Leonhard G; Birch, Benjamin J; Commandeur, Drew T; Livingston, Nigel J

    2014-01-01

There is a growing interest in the use of Inertial Measurement Unit (IMU)-based systems that employ gyroscopes for gait analysis. We describe an improved IMU-based gait analysis processing method that uses gyroscope angular-rate reversal to identify the start of each gait cycle during walking. In validation tests with six subjects with Parkinson's disease (PD), including those with severe shuffling gait patterns, and seven controls, the probabilities of true-positive and false-positive event detection were 100% and 0%, respectively. Stride-time validation tests using high-speed cameras yielded a standard deviation of 6.6 ms for controls and 11.8 ms for those with PD. These data demonstrate that the use of our angular-rate reversal algorithm leads to improvements over previous gyroscope-based gait analysis systems. Highly accurate and reliable stride-time measurements enabled us to detect subtle changes in stride-time variability following a Parkinson's exercise class. We found unacceptable measurement accuracy for stride length when using the Aminian et al. gyroscope-based biomechanical algorithm, with errors as high as 30% in PD subjects. An alternative method, using synchronized infrared timing gates to measure velocity, combined with the accurate mean stride time from our angular-rate reversal algorithm, calculates mean stride length more accurately.
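In its simplest form, detecting the start of each gait cycle at an angular-rate reversal reduces to finding negative-to-positive sign changes in the shank angular-rate signal. A deliberately minimal sketch; the published algorithm includes filtering and validity checks not shown here:

```python
def rate_reversals(omega):
    """Return indices where the angular-rate signal crosses from
    negative to non-negative, taken here as gait-cycle start events.
    omega: sequence of angular-rate samples (e.g. rad/s)."""
    return [i for i in range(1, len(omega))
            if omega[i - 1] < 0 <= omega[i]]
```

Stride time is then the sample spacing between consecutive reversal indices divided by the sampling rate.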

  18. Acoustic Location of Lightning Using Interferometric Techniques

    NASA Astrophysics Data System (ADS)

    Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.

    2013-12-01

Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations which compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize three outer microphones surrounding a fourth microphone located at the center. The location is computed by band-passing the signal to remove noise and then cross-correlating the outer three microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio-frequency interferometers. Both acoustic location systems and radio-frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas, and both then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio-frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic-frequency and audio-frequency acoustic emissions from lightning. Preliminary results show that
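In both the acoustic and radio-frequency cases, the basic measurement is the arrival-time difference between a receiver pair, which maps to a direction of arrival. A minimal time-domain sketch (the 343 m/s sound speed and the function names are illustrative; the frequency-domain interferometric estimators the abstract refers to are not shown):

```python
import numpy as np

def arrival_delay(ref, sig, fs):
    """Lag (seconds) of sig relative to ref, from the peak of the
    full cross-correlation. fs: sampling rate in Hz."""
    cc = np.correlate(sig, ref, mode="full")
    lag = int(np.argmax(cc)) - (len(ref) - 1)
    return lag / fs

def bearing(delay_s, spacing_m, c=343.0):
    """Angle from broadside of a two-microphone pair (radians),
    from sin(theta) = c * delay / spacing."""
    return np.arcsin(np.clip(c * delay_s / spacing_m, -1.0, 1.0))
```

Combining bearings from two or more receiver pairs with different orientations yields the full direction to the thunder source.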

  19. Marine cable location system

    SciTech Connect

    Zachariadis, R.G.

    1984-05-01

    An acoustic positioning system locates a marine cable at an exploration site, such cable employing a plurality of hydrophones at spaced-apart positions along the cable. A marine vessel measures water depth to the cable as the vessel passes over the cable and interrogates the hydrophones with sonar pulses along a slant range as the vessel travels in a parallel and horizontally offset path to the cable. The location of the hydrophones is determined from the recordings of water depth and slant range.
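With the vessel on a track horizontally offset from the cable, the water-depth and slant-range measurements fix each hydrophone by right-triangle geometry: the horizontal offset is the square root of (slant range squared minus depth squared). A minimal sketch of that relation (units are whatever the survey uses, assumed consistent):

```python
import math

def horizontal_offset(slant_range, depth):
    """Horizontal distance from the vessel track to a hydrophone,
    given the measured slant range and water depth to the cable."""
    if slant_range < depth:
        raise ValueError("slant range cannot be shorter than depth")
    return math.sqrt(slant_range ** 2 - depth ** 2)
```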

  20. The Challenges of On-Campus Recruitment Events

    ERIC Educational Resources Information Center

    McCoy, Amy

    2012-01-01

    On-campus admissions events are the secret weapon that colleges and universities use to convince students to apply and enroll. On-campus events vary depending on the size, location, and type of institution; they include campus visitations, open houses, preview days, scholarship events, admitted student events, and summer yield events. These events…

  1. Events diary

    NASA Astrophysics Data System (ADS)

    2000-01-01

    as Imperial College, the Royal Albert Hall, the Royal College of Art, the Natural History and Science Museums and the Royal Geographical Society. Under the heading `Shaping the future together' BA2000 will explore science, engineering and technology in their wider cultural context. Further information about this event on 6 - 12 September may be obtained from Sandra Koura, BA2000 Festival Manager, British Association for the Advancement of Science, 23 Savile Row, London W1X 2NB (tel: 0171 973 3075, e-mail: sandra.koura@britassoc.org.uk ). Details of the creating SPARKS events may be obtained from creating.sparks@britassoc.org.uk or from the website www.britassoc.org.uk . Other events 3 - 7 July, Porto Alegre, Brazil VII Interamerican conference on physics education: The preparation of physicists and physics teachers in contemporary society. Info: IACPE7@if.ufrgs.br or cabbat1.cnea.gov.ar/iacpe/iacpei.htm 27 August - 1 September, Barcelona, Spain GIREP conference: Physics teacher education beyond 2000. Info: www.blues.uab.es/phyteb/index.html

  2. Location of Geothermal Resources

    SciTech Connect

    2004-07-01

Geothermal resources, which utilize the heat of the earth, are located throughout the planet's crust. Those closer to the surface are most commonly used because geothermal drilling costs are currently prohibitive below depths of between 10,000 and 15,000 feet.

  3. Birefringent Stress Location Sensor

    NASA Astrophysics Data System (ADS)

    Franks, R. B.; Torruellas, W.; Youngquist, R. C.

    1986-08-01

    A new type of stress location sensor is discussed in which the FMCW technique is used to detect the difference in propagation time between two optical paths in an optical fiber due to stress induced modal coupling. Two versions of the system are included, and experimental results are presented for each system.

  4. LOCATING AREAS OF CONCERN

    EPA Science Inventory

    A simple method to locate changes in vegetation cover, which can be used to identify areas under stress. The method only requires inexpensive NDVI data. The use of remotely sensed data is far more cost-effective than field studies and can be performed more quickly. Local knowledg...

  5. Particle impact location detector

    NASA Technical Reports Server (NTRS)

    Auer, S. O.

    1974-01-01

    Detector includes delay lines connected to each detector surface strip. When several particles strike different strips simultaneously, pulses generated by each strip are time delayed by certain intervals. Delay time for each strip is known. By observing time delay in pulse, it is possible to locate strip that is struck by particle.
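Because each strip adds a known, fixed delay to its pulse, the struck strip can be recovered directly from the observed delay. A minimal sketch (the nanosecond units and uniform per-strip delay are assumptions for illustration):

```python
def strip_from_delay(measured_delay_ns, per_strip_delay_ns):
    """Recover the index of the struck strip from the observed pulse
    delay, given a known fixed delay contributed per strip."""
    return round(measured_delay_ns / per_strip_delay_ns)
```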

  6. Human Rights Event Detection from Heterogeneous Social Media Graphs.

    PubMed

    Chen, Feng; Neill, Daniel B

    2015-03-01

    Human rights organizations are increasingly monitoring social media for identification, verification, and documentation of human rights violations. Since manual extraction of events from the massive amount of online social network data is difficult and time-consuming, we propose an approach for automated, large-scale discovery and analysis of human rights-related events. We apply our recently developed Non-Parametric Heterogeneous Graph Scan (NPHGS), which models social media data such as Twitter as a heterogeneous network (with multiple different node types, features, and relationships) and detects emerging patterns in the network, to identify and characterize human rights events. NPHGS efficiently maximizes a nonparametric scan statistic (an aggregate measure of anomalousness) over connected subgraphs of the heterogeneous network to identify the most anomalous network clusters. It summarizes each event with information such as type of event, geographical locations, time, and participants, and provides documentation such as links to videos and news reports. Building on our previous work that demonstrates the utility of NPHGS for civil unrest prediction and rare disease outbreak detection, we present an analysis of human rights events detected by NPHGS using two years of Twitter data from Mexico. NPHGS was able to accurately detect relevant clusters of human rights-related tweets prior to international news sources, and in some cases, prior to local news reports. Analysis of social media using NPHGS could enhance the information-gathering missions of human rights organizations by pinpointing specific abuses, revealing events and details that may be blocked from traditional media sources, and providing evidence of emerging patterns of human rights violations. This could lead to more timely, targeted, and effective advocacy, as well as other potential interventions. PMID:27442843

  8. Leak Locating Experiment for Actual Underground Water Supply Pipelines with a Novel Locating System

    NASA Astrophysics Data System (ADS)

    Lee, Young-Sup; Yoon, Dong-Jin; Kang, Seokhoon; Jun, Kyungkoo; Choi, Byoungjo

This paper presents a novel leak-locating system that identifies the precise positions of leaks in underground water supply pipelines. The system builds on modern mobile communication technology and the internet. To detect the precise leak location, the locating algorithm requires the exact acoustic wave speed inside the water-filled pipeline and an accurate arrival-time difference between sensors; the time difference is computed with an optimal maximum-likelihood method. In a demonstration of the new system, an intensive experiment on 315 m of actual underground water supply pipeline showed excellent detection capability.
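With acoustic sensors at both ends of a pipe section, the leak position follows from the wave speed and the arrival-time difference: for a leak at distance x from sensor A on a pipe of length L, the difference dt = t_A - t_B equals (2x - L)/v, so x = (L + v*dt)/2. A minimal sketch of that two-sensor relation (the maximum-likelihood delay estimation itself is not shown, and the 1200 m/s wave speed in the test is purely illustrative):

```python
def leak_position(pipe_length, wave_speed, dt):
    """Distance of the leak from sensor A, for sensors at both ends
    of a pipe, given wave speed v and arrival-time difference
    dt = t_A - t_B (seconds)."""
    x = (pipe_length + wave_speed * dt) / 2.0
    if not 0.0 <= x <= pipe_length:
        raise ValueError("inconsistent measurements")
    return x
```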

  9. Interferometric locating system

    NASA Technical Reports Server (NTRS)

    Macdoran, P. F. (Inventor)

    1980-01-01

    A system is described for determining the position of a vehicle or other target that emits radio waves and which is of the type that senses the difference in time of arrival at spaced ground stations of signals from the vehicle to locate the vehicle on a set of intersecting hyperbolas. A network of four ground stations detects the radio emissions from the vehicle and by means of cross correlation derives the relative signal delay at the ground stations from which the vehicle position is deduced. Because the signal detection is by cross correlation, no knowledge of the emission is needed, which makes even unintentional radio noise emissions usable as a locator beacon. By positioning one of the four ground stations at an elevation significantly above the plane of the other three stations, a three dimensional fix on the vehicle is possible.
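Each cross-correlation delay between a station pair constrains the emitter to a hyperbola, and the fix is where the hyperbolas intersect. A minimal 2-D sketch that finds the best-fitting point by brute-force grid search (a real system would solve the hyperbolic equations directly or by least squares; the names and grid are illustrative):

```python
import numpy as np

def tdoa_locate(stations, tdoas, c, grid, steps=101):
    """Search the rectangle grid=(xmin, xmax, ymin, ymax) for the
    point whose predicted arrival-time differences relative to
    station 0 best match the measured tdoas (t_i - t_0, i >= 1).
    stations: (N, 2) array; c: propagation speed."""
    xs = np.linspace(grid[0], grid[1], steps)
    ys = np.linspace(grid[2], grid[3], steps)
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            err = np.sum(((d[1:] - d[0]) / c - tdoas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

Elevating one station out of the plane of the others, as the abstract notes, extends the same idea to a three-dimensional fix.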

  10. Dipole Well Location

    1998-08-03

The problem here is to model the three-dimensional response of an electromagnetic logging tool in a practical situation often encountered in oil and gas exploration. The DWELL code provides the electromagnetic fields on the axis of a borehole due to either an electric or a magnetic dipole located on the same axis. The borehole is cylindrical and is located within a stratified formation in which the bedding planes are not horizontal. The angle between the normal to the bedding planes and the axis of the borehole may assume any value; in other words, the borehole axis may be tilted with respect to the bedding planes. Additionally, all of the formation layers may have invasive zones of drilling mud. The operating frequency of the source dipole(s) extends from a few hertz to hundreds of megahertz.

  11. Electric current locator

    DOEpatents

    King, Paul E.; Woodside, Charles Rigel

    2012-02-07

    The disclosure herein provides an apparatus for location of a quantity of current vectors in an electrical device, where the current vector has a known direction and a known relative magnitude to an input current supplied to the electrical device. Mathematical constants used in Biot-Savart superposition equations are determined for the electrical device, the orientation of the apparatus, and relative magnitude of the current vector and the input current, and the apparatus utilizes magnetic field sensors oriented to a sensing plane to provide current vector location based on the solution of the Biot-Savart superposition equations. Description of required orientations between the apparatus and the electrical device are disclosed and various methods of determining the mathematical constants are presented.
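For the simplest geometry, a single long straight conductor, the Biot-Savart law reduces to |B| = mu0*I/(2*pi*r), and a field measurement at known input current inverts directly to the conductor's distance. A minimal sketch of that one-term inversion (the apparatus in the disclosure superposes many such terms and accounts for sensor orientation; that machinery is not shown):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def field_from_wire(current_a, distance_m):
    """Field magnitude of an infinite straight wire (Biot-Savart)."""
    return MU0 * current_a / (2 * math.pi * distance_m)

def distance_from_field(current_a, b_tesla):
    """Invert the relation above to locate the conductor."""
    return MU0 * current_a / (2 * math.pi * b_tesla)
```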

  12. Optimal Facility-Location

    PubMed Central

    Goldman, A. J.

    2006-01-01

    Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall’s research at NBS/NIST. PMID:27274920

  13. Ammonia Leak Locator Study

    NASA Technical Reports Server (NTRS)

    Dodge, Franklin T.; Wuest, Martin P.; Deffenbaugh, Danny M.

    1995-01-01

    The thermal control system of International Space Station Alpha will use liquid ammonia as the heat exchange fluid. It is expected that small leaks (of the order perhaps of one pound of ammonia per day) may develop in the lines transporting the ammonia to the various facilities as well as in the heat exchange equipment. Such leaks must be detected and located before the supply of ammonia becomes critically low. For that reason, NASA-JSC has a program underway to evaluate instruments that can detect and locate ultra-small concentrations of ammonia in a high vacuum environment. To be useful, the instrument must be portable and small enough that an astronaut can easily handle it during extravehicular activity. An additional complication in the design of the instrument is that the environment immediately surrounding ISSA will contain small concentrations of many other gases from venting of onboard experiments as well as from other kinds of leaks. These other vapors include water, cabin air, CO2, CO, argon, N2, and ethylene glycol. Altogether, this local environment might have a pressure of the order of 10(exp -7) to 10(exp -6) torr. Southwest Research Institute (SwRI) was contracted by NASA-JSC to provide support to NASA-JSC and its prime contractors in evaluating ammonia-location instruments and to make a preliminary trade study of the advantages and limitations of potential instruments. The present effort builds upon an earlier SwRI study to evaluate ammonia leak detection instruments [Jolly and Deffenbaugh]. The objectives of the present effort include: (1) Estimate the characteristics of representative ammonia leaks; (2) Evaluate the baseline instrument in the light of the estimated ammonia leak characteristics; (3) Propose alternative instrument concepts; and (4) Conduct a trade study of the proposed alternative concepts and recommend promising instruments. The baseline leak-location instrument selected by NASA-JSC was an ion gauge.

  14. Magnetic Location Indicator

    NASA Technical Reports Server (NTRS)

    Stegman, Thomas W.

    1992-01-01

    Ferrofluidic device indicates point of highest magnetic-flux density in workspace. Consists of bubble of ferrofluid in immiscible liquid carrier in clear plastic case. Used in flat block or tube. Axes of centering circle on flat-block version used to mark location of maximum flux density when bubble in circle. Device used to find point on wall corresponding to known point on opposite side of wall.

  15. Coso MT Site Locations

    SciTech Connect

    Doug Blankenship

    2011-05-04

    This data includes the locations of the MT data collected in and around the Coso Geothermal field that covered the West Flank area. These are the data that the 3D MT models were created from that were discussed in Phase 1 of the West Flank FORGE project. The projected coordinate system is NAD 1927 State Plane California IV FIPS 0404 and the Projection is Lambert Conformal Conic. Units are in feet.

  16. Event reconstruction for line source releases

    SciTech Connect

    Zajic, Dragan; Brown, Michael J; Williams, Michael D

    2010-01-01

The goal of source inversion, also called event reconstruction, is the calculation of source parameters from information obtained by a network of concentration (or dosage) and meteorological sensors. Source parameters include source location and strength, but in certain cases there may be more than one source, so the inversion procedure may also have to determine the number of sources. For emission events of limited duration, as for example during accidents or intentional releases, it is very useful to estimate the starting and ending times of the event. This kind of research is valuable for estimating the source parameters of industrial pollutants, since it provides important information for regulatory purposes. It also provides information to first responders in the case of accidental pollutant releases, or for homeland security needs when a chemical, biological, or radiological agent is deliberately released. Development of faster and more accurate algorithms is important because it could help reduce the populace's exposure to dangerous airborne contaminants, plan evacuation routes, and assess the magnitude of cleanup. During the last decade, a large number of research papers on source inversion have been published using many different approaches. Most of the source inversion work published to date applies to point-source releases. The forward dispersion models used range from fast Gaussian plume and puff codes, which enable almost instantaneous calculation of concentrations and dosages, to computational fluid dynamics (CFD) codes, which provide more detailed and precise calculations but are expensive in time and computer resources. Optimization methods are often used; examples include simulated annealing and genetic algorithms.
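One common building block of such inversions: for a fixed candidate source location, a linear forward model c_i = q * a_i gives the source strength q in closed form by least squares, so the outer optimizer (simulated annealing, a genetic algorithm, or a grid search) only has to search over locations and times. A minimal sketch of that inner step (the coupling values in the test are illustrative, not from any dispersion model):

```python
import numpy as np

def infer_strength(coupling, readings):
    """Least-squares source strength q for sensor readings modeled as
    c_i = q * a_i, where a_i is the forward-model coupling between a
    unit-strength source and sensor i."""
    a = np.asarray(coupling, dtype=float)
    c = np.asarray(readings, dtype=float)
    return float(a @ c / (a @ a))
```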

  17. Machine tool locator

    DOEpatents

    Hanlon, John A.; Gill, Timothy J.

    2001-01-01

Machine tools can be accurately measured and positioned on manufacturing machines within very small tolerances by use of an autocollimator on a 3-axis mount on the manufacturing machine, positioned so as to focus on a reference tooling ball or a machine tool; a digital camera connected to the viewing end of the autocollimator; and a marker and measure generator for receiving digital images from the camera, then displaying or measuring distances between the projection reticle and the reference reticle on the monitoring screen, and relating the distances to the actual position of the autocollimator relative to the reference tooling ball. The images and measurements are used to set the position of the machine tool, to measure the size and shape of the machine tool tip, and to examine cutting-edge wear.

  18. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    NASA Astrophysics Data System (ADS)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not always be sufficient, since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
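The asynchronous, event-driven idea can be illustrated with a toy: each cell evolves on its own local timestep and sits in a priority queue keyed by its next event time, so fast cells are updated often and slow cells rarely, instead of everything advancing on one global CFL-limited step. A deliberately simple sketch (independent exponential decay per cell; the paper's flux-conservative events and causality constraints are not modeled):

```python
import heapq
import math

def run_des(rates, t_end):
    """Advance a set of independently decaying cells to t_end,
    updating each cell only when its own scheduled event fires.
    rates: {cell_id: decay rate}. Returns (final states, event count)."""
    state = {i: 1.0 for i in rates}   # every cell starts at 1.0
    last = {i: 0.0 for i in rates}    # time of each cell's last update
    # local timestep chosen inversely proportional to the local rate
    queue = [(1.0 / r, i) for i, r in rates.items()]
    heapq.heapify(queue)
    n_events = 0
    while queue and queue[0][0] <= t_end:
        t, i = heapq.heappop(queue)
        state[i] *= math.exp(-rates[i] * (t - last[i]))
        last[i] = t
        n_events += 1
        heapq.heappush(queue, (t + 1.0 / rates[i], i))
    return state, n_events
```

A cell with ten times the rate fires ten times as many events over the same interval, which is exactly the self-adaptive allocation of work the abstract describes.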

  19. Three-dimensional Probabilistic Earthquake Location Applied to 2002-2003 Mt. Etna Eruption

    NASA Astrophysics Data System (ADS)

    Mostaccio, A.; Tuve', T.; Zuccarello, L.; Patane', D.; Saccorotti, G.; D'Agostino, M.

    2005-12-01

Recorded seismicity of the Mt. Etna volcano during the 2002-2003 eruption has been relocated using a probabilistic, non-linear earthquake location approach. We used the software package NonLinLoc (Lomax et al., 2000), adopting the 3D velocity model obtained by Cocina et al., 2005. We processed our data with three different algorithms: (1) a grid search; (2) a Metropolis-Gibbs sampler; and (3) an Oct-tree search. The Oct-tree algorithm gives an efficient, fast, and accurate mapping of the PDF (probability density function) of the earthquake location problem. More than 300 seismic events were analyzed in order to compare the non-linear location results with those obtained using a traditional linearized earthquake location algorithm, Hypoellipse, and a 3D linearized inversion (Thurber, 1983). Moreover, we compare 38 focal mechanisms, chosen following strict selection criteria, with those obtained from the 3D and 1D results. Although the presented approach is more of a traditional relocation application, probabilistic earthquake location could also be used in routine surveys.

  20. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  1. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  2. A rapid, economical, and accurate method to determining the physical risk of storm marine inundations using sedimentary evidence

    NASA Astrophysics Data System (ADS)

    Nott, Jonathan F.

    2015-04-01

    The majority of physical risk assessments from storm surge inundations are derived from synthetic time series generated from short climate records, which can often result in inaccuracies and are time-consuming and expensive to develop. A new method is presented here for the wet tropics region of northeast Australia. It uses lidar-generated topographic cross sections of beach ridge plains, which have been demonstrated to be deposited by marine inundations generated by tropical cyclones. Extreme value theory statistics are applied to data derived from the cross sections to generate return period plots for a given location. The results suggest that previous methods to estimate return periods using synthetic data sets have underestimated the magnitude/frequency relationship by at least an order of magnitude. The new method promises to be a more rapid, economical, and accurate assessment of the physical risk of these events.
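A standard way to turn such a series of event magnitudes into return periods is to fit an extreme-value distribution and read off return levels. A minimal sketch using a method-of-moments Gumbel fit (the paper does not specify its estimator, and the data in the test are illustrative):

```python
import math

def gumbel_return_level(annual_maxima, T):
    """Return level for return period T (years) from a series of
    annual maxima, via a method-of-moments Gumbel fit:
    beta = sqrt(6*var)/pi, mu = mean - gamma*beta, and
    z_T = mu - beta * ln(-ln(1 - 1/T))."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta   # Euler-Mascheroni constant
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

The return level grows with T, so rarer events map to larger inundation magnitudes, which is what a return-period plot encodes.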

  3. Developmental aspects of memory for spatial location.

    PubMed

    Ellis, N R; Katz, E; Williams, J E

    1987-12-01

The purpose was to show whether or not the encoding of location met criteria defining an automatic process (L. Hasher & R. T. Zacks, 1979, Journal of Experimental Psychology: General, 108, 356-388; 1984, American Psychologist, 39, 1372-1388). Among other criteria, automatic processes are expected to show no developmental changes beyond an early age, to be unrelated to intelligence level, and to be unaffected by instructions. In the first experiment, preschool through sixth-grade children were compared on a 40-picture book task following incidental (remember the names of pictures) or intentional (remember location) instructions. Subjects viewed and named pictures in sets of four, arranged in quadrants in the opened book, and then attempted to recall the names of the objects pictured and to relocate the pictures on blank pages. In the second experiment, second and sixth graders, college students, elderly persons, and mentally retarded persons were compared on a 60-picture book task following either incidental or semantic incidental instructions (give the function of the objects pictured). Memory for location was invariant across age groups and intelligence levels. The only exception was that 3- and 4-year-olds were more accurate following intentional instructions. Otherwise there were no differences between intentional and incidental instructions. Semantic instructions resulted in slightly more accurate locations. The results were interpreted as supporting Hasher and Zacks's automaticity hypothesis. PMID:3694123

  4. Sonar Locator Systems

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An underwater locator device called a Pinger is attached to an airplane's flight recorder for recovery in case of a crash. Burnett Electronics Pinger Model 512 resulted from a Burnett Electronics Laboratory, Inc./Langley Research Center contract for development of a search system for underwater mines. The Pinger's battery-powered transmitter is activated when immersed in water, and sends multidirectional signals for up to 500 hours. When a surface receiver picks up the signal, a diver can retrieve the pinger and the attached airplane flight recorder. Other pingers are used to track whales, mark underwater discoveries and assist oil drilling vessels.

  5. Location of Planet X

    SciTech Connect

    Harrington, R.S.

    1988-10-01

    Observed positions of Uranus and Neptune along with residuals in right ascension and declination are used to constrain the location of a postulated tenth planet. The residuals are converted into residuals in ecliptic longitude and latitude. The results are then combined into seasonal normal points, producing average geocentric residuals spaced slightly more than a year apart that are assumed to represent the equivalent heliocentric average residuals for the observed oppositions. Such a planet is found to most likely reside in the region of Scorpius, with considerably less likelihood that it is in Taurus.

  6. Huntington's disease gene located.

    PubMed

    Kolata, G

    1983-11-25

    Investigators have found a restriction enzyme marker, a piece of DNA that can be located with recombinant DNA techniques, that is so close to the Huntington's disease gene that its presence can be used as an indicator for that gene. If this marker is used as a diagnostic test for Huntington's disease, people at risk for getting the disease will be able to learn whether or not they will in fact develop the disease. The ability to predict the inevitable onset of this progressive, degenerative disease raises ethical questions about counseling, screening, and disclosure of risk status to patients and family members.

  7. Time-reversal imaging techniques applied to tremor waveforms near Cholame, California to locate tectonic tremor

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2012-12-01

    Thurber et al. (2006) interpolated to a grid spacing of 50 m. Such grid spacing corresponds to frequencies of up to 8 Hz, which is suitable for calculating the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of signal-to-noise ratio, number of stations used, and the uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show accurate location results to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model, and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those by Sumy and others.

  8. METHOD OF LOCATING GROUNDS

    DOEpatents

    Macleish, K.G.

    1958-02-11

    This patent presents a method for locating a ground in a d-c circuit having a number of parallel branches connected across a d-c source or generator. The complete method comprises the steps of locating the ground with reference to the midpoint of the parallel branches by connecting a potentiometer across the terminals of the circuit and connecting the slider of the potentiometer to ground through a current-indicating instrument, adjusting the slider to right or left of the midpoint so as to cause the instrument to indicate zero, connecting the terminal of the network which is farthest from the ground as thus indicated by the potentiometer to ground through a condenser, impressing a ripple voltage on the circuit, then measuring the ripple voltage at the midpoint of each parallel branch to find the branch with the lowest ripple voltage, and then measuring the distribution of the ripple voltage along this branch to determine the point at which the ripple voltage drops off to zero or substantially zero due to the existence of a ground. The invention has particular application where a circuit ground is present which will disappear if the normal circuit voltage is removed.

  9. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    PubMed

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  10. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  12. Lunar Impact Flash Locations from NASA's Lunar Impact Monitoring Program

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Suggs, R. M.; Kupferschmidt, L.; Feldman, J.

    2015-01-01

    dependent upon LRO finding a fresh impact crater associated with one of the impact flashes recorded by Earth-based instruments, either the bright event of March 2013 or any other in the database of impact observations. To find the crater, LRO needed an accurate area to search. This Technical Memorandum (TM) describes the geolocation technique developed to accurately determine the impact flash location, and by association, the location of the crater, thought to lie directly beneath the brightest portion of the flash. The workflow and software tools used to geolocate the impact flashes are described in detail, along with sources of error and uncertainty and a case study applying the workflow to the bright impact flash in March 2013. Following the successful geolocation of the March 2013 flash, the technique was applied to all impact flashes detected by the MEO between November 7, 2005, and January 3, 2014.

  13. Acoustic wave-equation-based earthquake location

    NASA Astrophysics Data System (ADS)

    Tong, Ping; Yang, Dinghui; Liu, Qinya; Yang, Xu; Harris, Jerry

    2016-04-01

    We present a novel earthquake location method using acoustic wave-equation-based traveltime inversion. The linear relationship between the location perturbation (δt0, δxs) and the resulting traveltime residual δt of a particular seismic phase, represented by the traveltime sensitivity kernel K(t0, xs) with respect to the earthquake location (t0, xs), is theoretically derived based on the adjoint method. The traveltime sensitivity kernel K(t0, xs) is formulated as a convolution between the forward and adjoint wavefields, which are calculated by numerically solving two acoustic wave equations. The advantage of this newly derived traveltime kernel is that it not only takes into account the earthquake-receiver geometry but also accurately honours the complexity of the velocity model. The earthquake location is obtained by solving a regularized least-squares problem. In 3-D realistic applications, it is computationally expensive to conduct full wave simulations. Therefore, we propose a 2.5-D approach which confines the forward and adjoint wave simulations to a 2-D vertical plane passing through the earthquake and receiver. Various synthetic examples show the accuracy of this acoustic wave-equation-based earthquake location method. The accuracy and efficiency of the 2.5-D approach for 3-D earthquake location are further verified by its application to the 2004 Big Bear earthquake in Southern California.
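    The regularized least-squares step can be illustrated with a toy linearized location update. Computing the true sensitivity kernels requires the wave-equation solves the abstract describes; the sketch below substitutes straight-ray traveltimes in a homogeneous medium (receiver geometry, velocity, and damping are all hypothetical), but the damped normal-equations solve has the same structure:

```python
import numpy as np

def location_update(G, d, lam=1e-2):
    """One damped least-squares step for (dt0, dx, dy, dz).

    G : (n_obs, 4) sensitivity matrix -- row i holds dT_i/dt0 (= 1)
        and dT_i/dxs for observation i.
    d : (n_obs,)   traveltime residuals (observed - predicted).
    """
    A = G.T @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d)

# toy setup: homogeneous medium, v = 3 km/s, five receivers (hypothetical geometry)
v = 3.0
rec = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0], [5, -8, 0]], float)
true_src, guess = np.array([4.0, 3.0, 6.0]), np.array([5.0, 5.0, 5.0])

def tt(src):                          # straight-ray traveltime to each receiver
    return np.linalg.norm(rec - src, axis=1) / v

for _ in range(20):                   # Gauss-Newton iterations
    r = np.linalg.norm(rec - guess, axis=1)
    G = np.hstack([np.ones((len(rec), 1)),             # dT/dt0
                   (guess - rec) / (v * r)[:, None]])  # dT/dxs
    d = tt(true_src) - tt(guess)      # residuals (true origin-time error is zero here)
    dm = location_update(G, d)
    guess = guess + dm[1:]            # update spatial coordinates only
print(np.round(guess, 2))
```

With noise-free residuals the iteration converges to the true hypocenter; the damping term mainly stabilizes early steps when the linearization is poor.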

  14. Record-breaking events during the compressive failure of porous materials

    NASA Astrophysics Data System (ADS)

    Pál, Gergő; Raischel, Frank; Lennartz-Sassinek, Sabine; Kun, Ferenc; Main, Ian G.

    2016-03-01

    An accurate understanding of the interplay between random and deterministic processes in generating extreme events is of critical importance in many fields, from forecasting extreme meteorological events to the catastrophic failure of materials and in the Earth. Here we investigate the statistics of record-breaking events in the time series of crackling noise generated by local rupture events during the compressive failure of porous materials. The events are generated by computer simulations of the uniaxial compression of cylindrical samples in a discrete element model of sedimentary rocks that closely resemble those of real experiments. The number of records grows initially as a decelerating power law of the number of events, followed by an acceleration immediately prior to failure. The distributions of record size and lifetime are power laws with relatively low exponents. We demonstrate the existence of a characteristic record rank k*, which separates the two regimes of the time evolution. Up to this rank, deceleration occurs due to the effect of random disorder. Record breaking then accelerates towards macroscopic failure, when physical interactions leading to spatial and temporal correlations dominate the location and timing of local ruptures. The size distribution of records of different ranks has a universal form independent of the record rank. Subsequences of events that occur between consecutive records are characterized by a power-law size distribution, with an exponent which decreases as failure is approached. High-rank records are preceded by smaller events of increasing size and waiting time between consecutive events and they are followed by a relaxation process. As a reference, surrogate time series are generated by reshuffling the event times. The record statistics of the uncorrelated surrogates agree very well with the corresponding predictions of independent identically distributed random variables, which confirms that temporal and spatial
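    The comparison with reshuffled surrogates relies on a classical result: for independent, identically distributed draws, the expected number of records among n events is the harmonic number H_n = 1 + 1/2 + ... + 1/n ≈ ln n + γ, independent of the underlying distribution. A minimal simulation (sample sizes chosen arbitrarily) illustrates the check:

```python
import random, math

def count_records(series):
    """Number of record-breaking entries (each strictly exceeds all before it)."""
    best, n_rec = float("-inf"), 0
    for x in series:
        if x > best:
            best, n_rec = x, n_rec + 1
    return n_rec

random.seed(0)
n, trials = 2000, 500
mean_records = sum(count_records([random.random() for _ in range(n)])
                   for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))   # exact iid expectation H_n
print(round(mean_records, 2), round(harmonic, 2))  # the two agree closely
```

A departure of the measured record counts from H_n, as found near failure, therefore signals genuine temporal correlation rather than a distributional effect.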

  15. Record-breaking events during the compressive failure of porous materials.

    PubMed

    Pál, Gergő; Raischel, Frank; Lennartz-Sassinek, Sabine; Kun, Ferenc; Main, Ian G

    2016-03-01

    An accurate understanding of the interplay between random and deterministic processes in generating extreme events is of critical importance in many fields, from forecasting extreme meteorological events to the catastrophic failure of materials and in the Earth. Here we investigate the statistics of record-breaking events in the time series of crackling noise generated by local rupture events during the compressive failure of porous materials. The events are generated by computer simulations of the uniaxial compression of cylindrical samples in a discrete element model of sedimentary rocks that closely resemble those of real experiments. The number of records grows initially as a decelerating power law of the number of events, followed by an acceleration immediately prior to failure. The distributions of record size and lifetime are power laws with relatively low exponents. We demonstrate the existence of a characteristic record rank k*, which separates the two regimes of the time evolution. Up to this rank, deceleration occurs due to the effect of random disorder. Record breaking then accelerates towards macroscopic failure, when physical interactions leading to spatial and temporal correlations dominate the location and timing of local ruptures. The size distribution of records of different ranks has a universal form independent of the record rank. Subsequences of events that occur between consecutive records are characterized by a power-law size distribution, with an exponent which decreases as failure is approached. High-rank records are preceded by smaller events of increasing size and waiting time between consecutive events and they are followed by a relaxation process. As a reference, surrogate time series are generated by reshuffling the event times. The record statistics of the uncorrelated surrogates agree very well with the corresponding predictions of independent identically distributed random variables, which confirms that temporal and spatial

  16. Empirical law for fault-creep events

    USGS Publications Warehouse

    Crough, S.T.; Burford, R.O.

    1977-01-01

    Fault-creep events measured on the San Andreas and related faults near Hollister, California, can be described by a rheological model consisting of a spring, a power-law dashpot, and a sliding block connected in series. An empirical creep-event law, derived from many creep-event records analyzed within the constraints of the model, provides a remarkably simple and accurate representation of creep-event behavior. The empirical creep law is expressed by the equation D(t) = Df [1 - (C t (n-1) Df^(n-1) + 1)^(-1/(n-1))], where D is the value of displacement at time t following the onset of an event, Df is the final equilibrium value of the event displacement, and C is a proportionality constant. This discovery should help determine whether the time-displacement character of creep events is controlled by the material properties of fault gouge, or by other parameters. © 1977.
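    The creep law is straightforward to evaluate numerically. The sketch below implements D(t) = Df [1 - (C t (n-1) Df^(n-1) + 1)^(-1/(n-1))] with illustrative parameter values (Df, C, n here are not taken from the paper); it exhibits the required limits D(0) = 0 and D(t) -> Df as t -> infinity:

```python
def creep_displacement(t, Df, C, n):
    """Empirical creep-event law: displacement D at time t after event onset.

    Df : final equilibrium displacement
    C  : proportionality constant
    n  : power-law exponent of the dashpot (n > 1)
    """
    return Df * (1.0 - (C * t * (n - 1) * Df ** (n - 1) + 1.0) ** (-1.0 / (n - 1)))

# hypothetical parameter values for illustration only
Df, C, n = 5.0, 0.3, 2.5
for t in (0.0, 1.0, 10.0, 100.0):
    print(t, round(creep_displacement(t, Df, C, n), 3))
```

The displacement rises steeply at onset and relaxes toward Df, which is the qualitative shape of the recorded creep events the law was fit to.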

  17. Surface Properties Associated With Dust Storm Plume's Point-Source Locations In The Border Region Of The US And Mexico

    NASA Astrophysics Data System (ADS)

    Bleiweiss, M. P.; DuBois, D. W.; Flores, M. I.

    2013-12-01

    Dust storms in the border region of the Southwest US and Northern Mexico are a serious problem for air quality (PM10 exceedances), health (Valley Fever is endemic in the region) and transportation (road closures and deadly traffic accidents). In order to better understand the phenomena, we are attempting to identify critical characteristics of dust storm sources so that, possibly, one can perform more accurate predictions of events and, thus, mitigate some of the deleterious effects. Besides the emission mechanisms for dust storm production that are tied to atmospheric dynamics, one must know those locations whose source characteristics can be tied to dust production and, therefore, identify locations where a dust storm is imminent under favorable atmospheric dynamics. During the past 13 years, we have observed, on satellite imagery, more than 500 dust events in the region and are in the process of identifying the source regions for the dust plumes that make up an event. Where satellite imagery exists with high spatial resolution (less than or equal to 250 m), dust 'plumes' appear to be made up of individual and merged plumes that are emitted from a 'point source' (smaller than the resolution of the imagery). In particular, we have observed events from the ASTER sensor, whose spatial resolution is 15 m, as well as Landsat, whose spatial resolution is 30 m. Tying these source locations to surface properties such as NDVI, albedo, and soil properties (percent sand, silt, clay, and gravel; soil moisture; etc.) will identify regions with enhanced capability to produce a dust storm. This, along with atmospheric dynamics, will allow the forecast of dust events. The analysis of 10 events from the period 2004-2013, for which we have identified 1124 individual plumes, will be presented.

  18. Quantum Image Location

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Dang, Yijie; Zhao, Na

    2016-10-01

    Quantum image processing has been a hot topic as a consequence of the development of quantum computation. Many quantum image processing algorithms have been proposed, whose efficiency is theoretically higher than that of their corresponding classical algorithms. However, most of the quantum schemes do not consider the problem of measurement. If users want to get the results, they must measure the final state many times to get all the pixels' values. Moreover, each execution of the algorithm allows the final state to be measured only once; in order to measure it many times, users must execute the algorithm many times. If the measurement process is taken into account, whether or not the algorithms are really efficient needs to be reconsidered. In this paper, we try to solve the problem of measurement and give a quantum image location algorithm. This scheme modifies the probability amplitudes of pixels so that the target pixel is measured with higher probability. Furthermore, it has only linear complexity.

  19. On the Accurate Prediction of CME Arrival At the Earth

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Hess, Phillip

    2016-07-01

    We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out the 3-D measurements by superimposing geometries onto both the ejecta and sheath separately. These measurements are then used to constrain a Drag-Based Model, which is improved through a modification of including height dependence of the drag coefficient into the model. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours with an RMS error within an hour. Through using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitation and implications of our accurate prediction method will be discussed.

  20. A method for accurate temperature measurement using infrared thermal camera.

    PubMed

    Tokunaga, Tomoharu; Narushima, Takashi; Yonezawa, Tetsu; Sudo, Takayuki; Okubo, Shuichi; Komatsubara, Shigeyuki; Sasaki, Katsuhiro; Yamamoto, Takahisa

    2012-08-01

    The temperature distribution on a centre-holed thin foil of molybdenum, used as a sample and heated using a sample-heating holder for electron microscopy, was measured using an infrared thermal camera. The temperature on the heated foil area located near the heating stage of the heating holder is almost equal to the temperature on the heating stage. However, when measuring the temperature at the edge of the hole in the foil, located farthest from the heating stage, a drop in temperature should be taken into consideration; so far, however, no method has been developed to locally measure the temperature distribution on a heated sample. In this study, a method for the accurate measurement of temperature distribution on heated samples for electron microscopy is discussed.

  1. Creating Special Events

    ERIC Educational Resources Information Center

    deLisle, Lee

    2009-01-01

    "Creating Special Events" is organized as a systematic approach to festivals and events for students who seek a career in event management. This book looks at the evolution and history of festivals and events and proceeds to the nuts and bolts of event management. The book presents event management as the means of planning, organizing, directing,…

  2. Relative elastic interferometric imaging for microseismic source location

    NASA Astrophysics Data System (ADS)

    Li, Lei; Chen, Hao; Wang, Xiuming

    2016-10-01

    Combining a relative location method with seismic interferometric imaging, a relative elastic interferometric imaging method for microseismic source location is proposed. In the method, the information of a known event (the main event) is fully used to improve the location precision of the unknown events (the target events). First, the principles of both the conventional and the relative interferometric imaging methods are analyzed. Traveltime differences from the position of the same potential event to different receivers are used in direct interferometric imaging, while relative interferometric imaging utilizes those of different events to the same receiver. Second, 2D and 3D numerical experiments demonstrate the feasibility of this newly proposed method in locating a single microseismic event. Envelopes of cross-correlation traces are utilized to eliminate the effects of changing polarities resulting from the source mechanism and receiver configuration. Finally, the location precision of the relative and conventional interferometric imaging methods is compared, indicating that the former combines the advantages of the relative method and of interferometric imaging. Namely, it can adapt to comparatively high velocity error and low signal-to-noise ratio (SNR) microseismic data. Moreover, since no arrival time picking is required and fewer cross-correlograms are imaged, the method also significantly reduces computational expense.
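    The traveltime differences that drive interferometric imaging of this kind are typically measured by cross-correlating waveforms recorded at a common receiver. A minimal sketch of lag estimation by cross-correlation (the synthetic wavelet, delay, and noise level are arbitrary choices, not values from the paper):

```python
import numpy as np

def cc_lag(a, b):
    """Lag (in samples) at which b best aligns with a, via full cross-correlation."""
    xc = np.correlate(a, b, mode="full")
    return int(np.argmax(xc)) - (len(b) - 1)

# synthetic traces: same wavelet at two events, the second delayed by 7 samples
rng = np.random.default_rng(1)
n, shift = 256, 7
t = np.arange(n)
wavelet = np.exp(-0.5 * ((t - 60) / 4.0) ** 2) * np.sin(0.8 * (t - 60))
a = wavelet + 0.05 * rng.standard_normal(n)            # "main event" trace
b = np.roll(wavelet, shift) + 0.05 * rng.standard_normal(n)  # "target event" trace
print(cc_lag(b, a))
```

Taking the envelope of the cross-correlation trace, as the abstract suggests, would make the peak insensitive to polarity flips between the two events.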

  3. Improving Accuracy in Locating Magnetic Sources: Combining Homogeneous Locator and Extreme Values for Inversion

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-12-01

    In this study, we present an algorithm to improve the accuracy in locating magnetic sources, especially dipole magnetic sources, using a combination of a homogeneous locator and the positions of extreme values. The homogeneous locator is a homogeneous-function-based, point-by-point algorithm that uses the magnetic field intensity and its first- and second-order tensors. If all magnetic data points are used to invert for the locations of magnetic dipole sources, more solutions than actual targets will be produced. Interference from neighboring magnetic sources and noise further deteriorates the situation and makes the locating task even more difficult. To improve such a situation, we investigated and compared various cases with different levels of interference and noise, using a combination of the homogeneous locator and the positions of extreme values for inversion. Results show that (1) if the interference and noise levels are low, using the magnetic field and its first and second tensor values at the positions where the first vertical derivative of the magnetic field has extreme values - either maxima or minima - as inputs to the homogeneous locator gives the best results: accurate horizontal and vertical locations and structure indices; (2) if the interference and noise levels are high, using the values at the positions where the magnetic field intensity has extreme values gives the best horizontal locations, while the values at the positions where the first vertical derivative has extreme values still produce the best vertical locations and structure indices. We applied the verified scheme to field data for UXO detection and to aeromagnetic data of Minnesota for geological structure study, and the results are compared with those of Euler deconvolution and of the homogeneous locator alone.

  4. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In test, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  5. Close binding of identity and location in visual feature perception

    NASA Technical Reports Server (NTRS)

    Johnston, J. C.; Pashler, H.

    1990-01-01

    The binding of identity and location information in disjunctive feature search was studied. Ss searched a heterogeneous display for a color or a form target, and reported both target identity and location. To avoid better than chance guessing of target identity (by choosing the target less likely to have been seen), the difficulty of the two targets was equalized adaptively; a mathematical model was used to quantify residual effects. A spatial layout was used that minimized postperceptual errors in reporting location. Results showed strong binding of identity and location perception. After correction for guessing, no perception of identity without location was found. A weak trend was found for accurate perception of target location without identity. We propose that activated features generate attention-calling "interrupt" signals, specifying only location; attention then retrieves the properties at that location.
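    The abstract mentions a mathematical model used to quantify residual guessing effects but does not give it; the standard high-threshold correction below is therefore only an assumed stand-in, shown for concreteness:

```python
def corrected_accuracy(p_obs, p_guess=0.5):
    """High-threshold guessing correction (assumed model, not necessarily the
    authors' exact one): fraction of trials reflecting true perception
    rather than lucky guesses, given observed accuracy and the chance rate."""
    return (p_obs - p_guess) / (1.0 - p_guess)

# with two equally likely targets, 80% observed accuracy implies
# 60% true perception plus 20% correct guesses
print(corrected_accuracy(0.8))
```

Equalizing the difficulty of the two targets, as the study did, is what keeps the chance rate at a known value so this kind of correction is meaningful.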

  6. Location of Maximum Credible Beam Losses in LCLS Injector

    SciTech Connect

    Mao, Stan

    2010-12-13

    The memo describes the maximum credible beam the LCLS injector can produce and lose at various locations along the beamline. The estimation procedure is based upon three previous reports [1, 2, 3]. While specific numbers have been updated to accurately reflect the present design parameters, the conclusions are very similar to those given in Ref 1. The source of the maximum credible beam results from the explosive electron emission from the photocathode if the drive laser intensity exceeds the threshold for plasma production. In this event, the gun's RF field can extract a large number of electrons from this plasma which are accelerated out of the gun and into the beamline. This electron emission persists until it has depleted the gun of all its energy. Hence the number of electrons emitted per pulse is limited by the amount of stored RF energy in the gun. It needs to be emphasized that this type of emission is highly undesirable, as it causes permanent damage to the cathode.

  7. Object Locating System

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)

    2000-01-01

    A portable system is provided that is operational for determining, with three dimensional resolution, the position of a buried object or an approximately positioned object that may move in space or air or gas. The system has a plurality of receivers for detecting the signal from a target antenna and measuring the phase thereof with respect to a reference signal. The relative permittivity and conductivity of the medium in which the object is located is used along with the measured phase signal to determine a distance between the object and each of the plurality of receivers. Knowing these distances, an iteration technique is provided for solving equations simultaneously to provide position coordinates. The system may also be used for tracking movement of an object within close range of the system by sampling and recording subsequent positions of the object. A dipole target antenna, when positioned adjacent to a buried object, may be energized using a separate transmitter which couples energy to the target antenna through the medium. The target antenna then preferably resonates at a different frequency, such as a second harmonic of the transmitter frequency.
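    The distance-based iteration described in this patent abstract is, in essence, a nonlinear trilateration solve. Below is a minimal Gauss-Newton sketch; the receiver coordinates and target position are hypothetical, and the phase-to-distance conversion is assumed to have been done already:

```python
import numpy as np

def locate(receivers, distances, guess, iters=30):
    """Gauss-Newton solve for the 3-D point whose predicted ranges
    to each receiver best match the measured distances."""
    x = np.asarray(guess, float)
    for _ in range(iters):
        diff = x - receivers                  # (n, 3) receiver-to-point vectors
        r = np.linalg.norm(diff, axis=1)      # predicted ranges
        J = diff / r[:, None]                 # Jacobian d(range)/d(x)
        dx, *_ = np.linalg.lstsq(J, distances - r, rcond=None)
        x = x + dx
    return x

# hypothetical geometry: four receivers at varied heights, buried target below
rx = np.array([[0, 0, 0], [5, 0, 1], [0, 5, 2], [5, 5, -1]], float)
target = np.array([2.0, 3.0, -1.5])
ranges = np.linalg.norm(rx - target, axis=1)  # noise-free "measurements"
print(np.round(locate(rx, ranges, guess=[1, 1, -1]), 2))
```

With four or more well-spread receivers the system is overdetermined, so the same least-squares step also averages down range noise in real measurements.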

  8. AOTV bow shock location

    NASA Technical Reports Server (NTRS)

    Desautel, D.

    1985-01-01

    Hypersonic bow-shock location and geometry are of central importance to the aerodynamics and aerothermodynamics of aeroassisted orbital transfer vehicles (AOTVs), but they are difficult to predict for a given vehicle configuration. This paper reports experimental measurements of shock standoff distance for the 70 deg cone AOTV configuration in shock-tunnel-test flows at Mach numbers of 3.8 to 7.9 and for angles of attack from 0 deg to 20 deg. The controlling parameter for hypersonic bow-shock standoff distance (for a given forebody shape) is the mean normal-shock density ratio. Values for this parameter in the tests reported are in the same range as those of the drag-brake AOTV perigee regime. Results for standoff distance are compared with those previously reported in the literature for this AOTV configuration. It is concluded that the AOTV shock standoff distance for the conical configuration, based on frustum (base) radius, is equivalent to that of a sphere with a radius about 35 percent greater than that of the cone; the distance is, therefore, much less than reported in previous studies. Some reasons for the discrepancies between the present and previous results are advanced. The smaller standoff distance determined here implies there will be less radiative heat transfer than was previously expected.

  9. Inferences on active faults at the Southern Alps-Liguria basin junction from accurate analysis of low energy seismicity

    NASA Astrophysics Data System (ADS)

    Turino, Chiara; Scafidi, Davide; Eva, Elena; Solarino, Stefano

    2009-10-01

    Seismotectonic studies concern themselves with understanding the distribution of earthquakes in space, time, size and style. Therefore, the better these parameters are known, the more correctly any seismic event can be associated with the faulting structure that caused it. The use of accurate location methods is especially required when dealing with very complex areas, where several faulting systems or relatively small seismogenic structures exist. In fact, even though routinely determined epicentres are capable of revealing the rough picture of the seismicity, they are not suitable for studies of the fine structure of the causative fault, as their location uncertainties are often larger than the source dimension itself. In this work the probabilistic approach of the "Non Linear Localization" has been used to compute precise locations for earthquakes that occurred in the last twenty years near the Saorge-Taggia line, a complex fault system situated in Western Liguria, close to the border between Italy and France. Together with the Breil-Sospel-Monaco and the Peille-Laghet faults, this line is responsible for the seismic activity of the area. The seismotectonic study is completed through a local tomographic study and the analysis of the focal mechanisms computed for an enlarged area. The results show that the seismicity associated with this fault system is confined within the first 10 km depth. Many clusters of seismic events are identified along the Saorge-Taggia line. The existence of a previously unmapped branch perpendicular to the Saorge-Taggia line is also recognized. Although its position may suggest it to be the continuation of the Breil-Sospel-Monaco fault system towards NE, our findings rather suggest no association with that fault. The overall results confirm the complexity of the area; in particular the hypothesis that the Saorge-Taggia system may represent the eastward limit of a subalpine crustal block comprised within the Nice Arc, the

  10. Atom location by electron channeling analysis

    SciTech Connect

    Pennycook, S.J.

    1984-07-01

    For many years the orientation dependence of the characteristic x-ray emission close to a Bragg reflection has been regarded as a hindrance to accurate microanalysis, and a random incident beam direction has always been recommended for accurate composition analysis. However, this orientation dependence can be put to use to extract information on the lattice location of foreign atoms within the crystalline matrix. Here a generalization of the technique is described which is applicable to any crystal structure including monatomic crystals, and can quantitatively determine substitutional fractions of impurities. The technique was referred to as electron channeling analysis, by analogy with the closely related and widely used bulk technique of ion channeling analysis, and was developed for lattice location studies of dopants in semiconductors at high spatial resolution. Only two spectra are required for each channeling analysis, one in each of the channeling conditions described above. If the matrix and dopant x-ray yields vary identically between the two orientations then the dopant necessarily lies within the reflecting matrix planes. If the dopant x-ray yield does not vary the dopant atoms are randomly located with respect to the matrix planes. 10 references, 2 figures.
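The two-spectrum decision rule can be reduced to a toy linear-mixture estimate (an illustrative sketch with made-up yield numbers, not the paper's formalism): a fully substitutional dopant's x-ray yield changes by the same factor as the matrix yield between the two channeling orientations, while a randomly sited dopant's yield is unchanged, so the measured fractional change interpolates between the two limits.

```python
# Toy linear-mixture reading of the two-orientation channeling measurement
# (illustrative sketch only): a substitutional dopant's yield tracks the
# matrix yield change; a randomly sited dopant's yield does not change.

def substitutional_fraction(matrix_y1, matrix_y2, dopant_y1, dopant_y2):
    r_matrix = matrix_y2 / matrix_y1   # matrix yield ratio between orientations
    r_dopant = dopant_y2 / dopant_y1   # dopant yield ratio between orientations
    return (r_dopant - 1.0) / (r_matrix - 1.0)

# The two limiting cases described in the abstract (hypothetical counts):
f_sub = substitutional_fraction(100, 140, 20, 28)   # dopant tracks matrix
f_rand = substitutional_fraction(100, 140, 20, 20)  # dopant yield unchanged
```

In this reading, `f_sub` comes out as 1 (fully substitutional) and `f_rand` as 0 (randomly located); intermediate yield changes give fractional occupancies.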

  11. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  12. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  13. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  14. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  15. 14 CFR 91.207 - Emergency locator transmitters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) There is attached to the airplane an approved automatic type emergency locator transmitter that is in... approved automatic type emergency locator transmitter that is in operable condition, except that after June... the event of crash impact is minimized. Fixed and deployable automatic type transmitters must...

  16. Assessing Special Events.

    ERIC Educational Resources Information Center

    Neff, Bonita Dostal

    Special events defined as being "newsworthy events" are becoming a way of American life. They are also a means for making a lot of money. Examples of special events that are cited most frequently are often the most minor of events; e.g., the open house, the new business opening day gala, or a celebration of some event in an organization. Little…

  17. Does the Nature of the Experience Influence Suggestibility? A Study of Children's Event Memory.

    ERIC Educational Resources Information Center

    Gobbo, Camilla; Mega, Carolina; Pipe, Margaret-Ellen

    2002-01-01

    Two experiments examined effects of event modality on young children's memory and suggestibility. Findings indicated that 5-year-olds were more accurate than 3-year-olds and those participating in the event were more accurate than those either observing or listening to a narrative. Assessment method, level of event learning, delay to testing, and…

  18. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  19. Event Segmentation Ability Uniquely Predicts Event Memory

    PubMed Central

    Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.

    2013-01-01

    Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350

  20. Time reversal processing for source location in an urban environment

    NASA Astrophysics Data System (ADS)

    Albert, Donald G.; Liu, Lanbo; Moran, Mark L.

    2005-08-01

    A simulation study is conducted to demonstrate in principle that time reversal processing can be used to locate sound sources in an outdoor urban area with many buildings. Acoustic pulse propagation in this environment is simulated using a two-dimensional finite difference time domain (FDTD) computation. Using the simulated time traces from only a few sensors and back propagating them with the FDTD model, the sound energy refocuses in the vicinity of the true source location. This time reversal numerical experiment confirms that using information acquired only at non-line-of-sight locations is sufficient to obtain accurate source locations in a complex urban terrain.
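A minimal one-dimensional, free-space sketch of the idea (our construction; the paper uses a 2-D FDTD model of an urban street grid): a few sensors record a delayed pulse from an unknown source, and back-propagating the time-reversed records onto candidate points refocuses the energy at the true location.

```python
import numpy as np

c, fs = 343.0, 8000.0            # sound speed (m/s) and sample rate (Hz)
t = np.arange(0, 0.25, 1 / fs)
pulse = lambda tt: np.exp(-0.5 * ((tt - 0.02) / 0.002) ** 2)  # Gaussian pulse

src = 12.0                        # "unknown" source position on a line (m)
sensors = np.array([0.0, 30.0, 45.0])

# Each sensor records the pulse delayed by its travel time (free space here,
# standing in for the paper's FDTD propagation through buildings).
recs = [pulse(t - abs(s - src) / c) for s in sensors]

# Back-propagation: delay each time-reversed record by the travel time to a
# candidate point and sum; the energy refocuses at the true source.
grid = np.arange(0.0, 50.0, 0.25)
energy = []
for x in grid:
    field = sum(np.interp(t - abs(s - x) / c, t, r[::-1], left=0, right=0)
                for s, r in zip(sensors, recs))
    energy.append(np.max(np.abs(field)))
x_est = grid[int(np.argmax(energy))]
```

At the true source all time-reversed arrivals align in time and the summed field peaks there; everywhere else the arrivals are mutually misaligned, which is the refocusing behavior the paper demonstrates with full-wave simulation.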

  1. Researchermap: a tool for visualizing author locations using Google maps.

    PubMed

    Rastegar-Mojarad, Majid; Bales, Michael E; Yu, Hong

    2013-01-01

We hereby present ResearcherMap, a tool to visualize locations of authors of scholarly papers. In response to a query, the system returns a map of author locations. To develop the system we first populated a database of author locations, geocoding institution locations for all available institutional affiliation data in our database. The database includes all authors of Medline papers from 1990 to 2012. We conducted a formative heuristic usability evaluation of the system and measured the system's accuracy and performance. The system identifies the correct address with 97.5% accuracy.

  2. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  3. Bayesian analysis of rare events

    NASA Astrophysics Data System (ADS)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
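The rejection-sampling reinterpretation at the heart of BUS can be sketched on a hypothetical Gaussian toy problem (our construction, not one of the paper's applications): draw from the prior, accept with probability proportional to the likelihood, and estimate the rare-event probability from the accepted (posterior) samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: prior X ~ N(0, 1); one noisy observation y of X with Gaussian
# noise (sd = 0.5); rare event F = {X > 2}. All values are hypothetical.
y_obs, sigma = 1.0, 0.5

def likelihood(x):
    return np.exp(-0.5 * ((y_obs - x) / sigma) ** 2)

c = 1.0  # upper bound on the likelihood (its maximum here is 1 at x = y_obs)

n = 200_000
x = rng.standard_normal(n)          # samples from the prior
u = rng.uniform(size=n)
accepted = u < likelihood(x) / c    # rejection step: accepted x follow the posterior

p_posterior = np.mean(x[accepted] > 2.0)   # P(F | data), crude Monte Carlo
p_prior = np.mean(x > 2.0)                 # P(F) under the prior alone
```

The observation pulls the posterior toward y_obs = 1, so the updated rare-event probability drops well below the prior one. In BUS this same accept/reject structure is what lets FORM, importance sampling, or Subset Simulation replace the crude Monte Carlo loop when the event is much rarer.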

  4. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

Among profile, helix and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of the CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
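A minimal numerical sketch of the closure idea (our construction, with made-up error signatures): remeasuring the artifact in every rotated position lets averaging cancel whichever zero-mean error is being rotated out, separating the device signature from the artifact signature.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12  # hypothetical number of teeth / angular positions

# Hidden, zero-mean error signatures (unknown to the "measurement"):
e_dev = rng.normal(0, 1.0, N); e_dev -= e_dev.mean()   # systematic device error
e_art = rng.normal(0, 1.0, N); e_art -= e_art.mean()   # artifact (gear) error

# Closure measurement: remeasure with the artifact rotated by k positions.
# m[k, i] = device error at slot i + error of the tooth now sitting in slot i.
m = np.array([[e_dev[i] + e_art[(i + k) % N] for i in range(N)]
              for k in range(N)])

# Averaging over rotations cancels whichever zero-mean error is rotated out:
dev_est = m.mean(axis=0)                                   # recovers e_dev
art_est = np.array([np.mean([m[k, (j - k) % N] for k in range(N)])
                    for j in range(N)])                    # recovers e_art
```

Averaging over all rotations leaves the device signature plus the (zero) mean of the artifact errors, and vice versa, which is exactly the separation the closure technique exploits.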

  5. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
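The beat-listening strategy can be illustrated with a short synthetic sketch (our construction; frequencies and sample rate are arbitrary): summing a reference tone and a slightly mistuned string produces an amplitude envelope that oscillates at the difference frequency, which can be detected without fine pitch discrimination.

```python
import numpy as np

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
f_ref, f_string = 110.0, 112.5          # reference tone vs slightly sharp string
x = np.sin(2 * np.pi * f_ref * t) + np.sin(2 * np.pi * f_string * t)

# Amplitude envelope via an FFT-based Hilbert transform (analytic signal)
X = np.fft.fft(x)
h = np.zeros(len(x)); h[0] = 1; h[1:len(x) // 2] = 2; h[len(x) // 2] = 1
env = np.abs(np.fft.ifft(X * h))

# The envelope oscillates at the beat frequency |f_ref - f_string| = 2.5 Hz
spec = np.abs(np.fft.rfft(env - env.mean()))
beat_hz = np.fft.rfftfreq(len(env), 1 / fs)[int(np.argmax(spec))]
```

The 2.5 Hz beat is a temporal cue, well below the pitch range, which is consistent with the paper's point that accurate tuning turns a spectral task into a temporal discrimination task.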

  6. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  7. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  9. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  10. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. Accurate analysis requires that the composition of the sampled gas be representative of the whole and related to flow. When it is, measurement and sampling techniques complement each other, gas volumes are accurately accounted for, and adjustments to composition can be made.

  11. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  12. Fault Location Methods for Ungrounded Distribution Systems Using Local Measurements

    NASA Astrophysics Data System (ADS)

    Xiu, Wanjing; Liao, Yuan

    2013-08-01

This article presents novel fault location algorithms for ungrounded distribution systems. The proposed methods are capable of locating faults using voltage and current measurements obtained at the local substation. Two types of fault location algorithms, using line-to-neutral and line-to-line measurements, are presented. The network structure and parameters are assumed to be known; the network structure is updated based on information obtained from the utility telemetry system. With the help of the bus impedance matrix, local voltage changes due to the fault can be expressed as a function of the fault currents. Since the bus impedance matrix contains information about the fault location, superimposed voltages at the local substation can be expressed as a function of fault location, from which the fault location can be solved. Simulation studies have been carried out on a sample distribution power system. The evaluation shows that both types of methods yield very accurate fault location estimates.
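A much-simplified sketch of the superimposed-voltage idea (a toy two-source network of our construction, not the paper's ungrounded-system formulation): the transfer impedance between the substation and the fault point depends on where along the line the fault sits, so scanning candidate locations for the best match to the measured voltage change recovers the fault position.

```python
import numpy as np

def zbus_with_fault(a, Zs_a=2j, Zs_b=3j, ZL=10j):
    """Bus impedance matrix of a toy 2-source network with an extra node at
    the fault point, a fraction `a` along the line (hypothetical ohm values).
    Nodes: 0 = local substation A, 1 = remote source B, 2 = fault point F."""
    Y = np.zeros((3, 3), complex)
    for i, j, z in [(0, 2, a * ZL), (1, 2, (1 - a) * ZL)]:  # line split at F
        y = 1 / z
        Y[i, i] += y; Y[j, j] += y
        Y[i, j] -= y; Y[j, i] -= y
    Y[0, 0] += 1 / Zs_a   # source impedances to ground
    Y[1, 1] += 1 / Zs_b
    return np.linalg.inv(Y)

# "Measured" superimposed substation voltage for a fault at a_true, driven by
# the fault current I_f injected at the fault node (both hypothetical).
a_true, I_f = 0.37, -1.5 + 0.2j
dV_meas = zbus_with_fault(a_true)[0, 2] * I_f

# Scan candidate locations; pick the one whose predicted dV matches best.
grid = np.linspace(0.01, 0.99, 99)
mismatch = [abs(zbus_with_fault(a)[0, 2] * I_f - dV_meas) for a in grid]
a_est = grid[int(np.argmin(mismatch))]
```

The transfer impedance Z[0, 2] varies monotonically with the fault position in this toy network, so the scan has a unique minimum at the true location, mirroring how the paper solves for fault location from the bus impedance matrix.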

  13. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  14. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-04-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  15. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  16. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide valuable information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
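The voxel-asymmetry mechanism can be illustrated with a small ensemble simulation (our construction; the wavenumber and displacement scales are arbitrary): the phase of the ensemble signal recovers the mean displacement exactly for a symmetric distribution, but is systematically biased for a skewed one.

```python
import numpy as np

rng = np.random.default_rng(3)
q = 200.0   # effective gradient wavenumber (rad/m); value is arbitrary

# Two voxel-scale displacement ensembles (m) with the SAME mean displacement:
# one symmetric (Gaussian), one skewed (shifted exponential).
mean_disp = 2e-3
sym = rng.normal(mean_disp, 1e-3, 100_000)
skew = (mean_disp - 1.5e-3) + rng.exponential(1.5e-3, 100_000)

def phase_estimate(x):
    """Mean displacement inferred from the phase of the ensemble signal."""
    return np.angle(np.mean(np.exp(1j * q * x))) / q

err_sym = abs(phase_estimate(sym) - mean_disp)    # symmetric: unbiased
err_skew = abs(phase_estimate(skew) - mean_disp)  # skewed: systematic bias
```

For the symmetric ensemble the phase of the summed signal equals q times the mean displacement, while the skewed ensemble produces a phase offset that survives averaging, which is the error source the paper identifies within each voxel.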

  17. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  19. Event-Based Science.

    ERIC Educational Resources Information Center

    Wright, Russell G.

    1992-01-01

    Suggests that an event-based science curriculum can provide the framework for deciding what to retain in an overloaded science curriculum. Provides examples of current events and the science concepts explored related to the event. (MDH)

  20. Binding targets' responses to distractors' locations: distractor response bindings in a location-priming task.

    PubMed

    Frings, Christian; Möller, Birte

    2010-11-01

    Responses to target stimuli can be encoded together with distracting objects accompanying these targets into a single stimulus-response episode or a single event file. Repeating any object of such an episode can trigger the response encoded in this episode. Hence, repeating a distractor may retrieve the response given to the target that was accompanied by this distractor. In the present experiments, we analyzed whether the binding of target responses to the distractor can be generalized even to the location of a distractor. In two experiments, we used a location-based prime-probe task and found that repeating the location of a distractor triggered the response to the target that had previously been accompanied by a distractor in the repeated location, even if the identity of the distractor changed from the prime to the probe.

  1. Accurate determination of segmented X-ray detector geometry

    PubMed Central

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
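The refinement idea, comparing observed Bragg-peak positions with the positions predicted by indexing, reduces to a small least-squares problem per module. The sketch below handles only the translational part on synthetic data; the published method also refines rotation and detector distance.

```python
import numpy as np

def refine_module_offset(predicted, observed):
    """Least-squares translational correction for one detector module.

    predicted, observed: (N, 2) Bragg-peak positions (pixels) on that
    module, from crystal indexing and from peak finding respectively.
    For a pure shift the optimum is simply the mean residual."""
    return (observed - predicted).mean(axis=0)

# Synthetic module displaced by (1.5, -0.7) pixels, noisy peak centres
rng = np.random.default_rng(0)
pred = rng.uniform(0.0, 100.0, size=(50, 2))
obs = pred + np.array([1.5, -0.7]) + rng.normal(0.0, 0.05, size=(50, 2))
shift = refine_module_offset(pred, obs)
```

Averaging over many indexed patterns, as done in serial crystallography, drives the noise on this estimate well below a pixel.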

  2. Accurate determination of segmented X-ray detector geometry.

    PubMed

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A; Chapman, Henry N; Barty, Anton

    2015-11-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments.

  4. Seismicity patterns along the Ecuadorian subduction zone: new constraints from earthquake location in a 3-D a priori velocity model

    NASA Astrophysics Data System (ADS)

    Font, Yvonne; Segovia, Monica; Vaca, Sandro; Theunissen, Thomas

    2013-04-01

    To improve earthquake location, we create a 3-D a priori P-wave velocity model (3-DVM) that approximates the large velocity variations of the Ecuadorian subduction system. The 3-DVM is constructed from the integration of geophysical and geological data that depend on the structural geometry and velocity properties of the crust and the upper mantle. In addition, specific station selection is carried out to compensate for the high station density on the Andean Chain. 3-D synthetic experiments are then designed to evaluate the network capacity to recover the event position using only P arrivals and the MAXI technique. Three synthetic earthquake location experiments are proposed: (1) noise-free and (2) noisy arrivals used in the 3-DVM, and (3) noise-free arrivals used in a 1-DVM. Synthetic results indicate that, under the best conditions (exact arrival data set and 3-DVM), the spatiotemporal configuration of the Ecuadorian network can accurately locate 70 per cent of events in the frontal part of the subduction zone (average azimuthal gap is 289° ± 44°). With noisy P arrivals (up to ±0.3 s), 50 per cent of earthquakes can still be accurately located. Processing earthquake location within a 1-DVM almost never yields accurate hypocentre positions for offshore earthquakes (15 per cent), which highlights the importance of using a 3-DVM in subduction zones. For the application to real data, the seismicity distribution from the 3-D-MAXI catalogue is also compared to the determinations obtained in a 1-D-layered VM. In addition to good-quality location uncertainties, the clustering and the depth distribution confirm the 3-D-MAXI catalogue reliability. The pattern of the seismicity distribution (a 13 yr record during the inter-seismic period of the seismic cycle) is compared to the pattern of rupture zone and asperity of the Mw = 7.9 1942 and the Mw = 7.7 1958 events (the Mw = 8.8 1906 asperity patch is not defined). We observe that the nucleation of 1942, 1958 and 1906 events coincides with
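The location step itself can be illustrated with a minimal grid search over trial hypocentres using P arrivals only. A constant velocity stands in for the 3-DVM and the station geometry is invented, so this is a toy analogue of a MAXI-style search, not the authors' code; the unknown origin time is absorbed by demeaning the residuals.

```python
import numpy as np

V_P = 6.0  # assumed constant P-wave speed, km/s (the study uses a full 3-D model)

def locate(stations, t_obs, grid):
    """Grid-search hypocentre: minimise the RMS of P arrival-time
    residuals over trial source positions."""
    best, best_rms = None, np.inf
    for xyz in grid:
        t_pred = np.linalg.norm(stations - xyz, axis=1) / V_P
        r = t_obs - t_pred
        r = r - r.mean()               # absorb the unknown origin time
        rms = np.sqrt((r ** 2).mean())
        if rms < best_rms:
            best, best_rms = xyz, rms
    return best, best_rms

# Synthetic network: 5 surface stations, true source at 15 km depth
stations = np.array([[0., 0., 0.], [50., 0., 0.], [0., 50., 0.],
                     [50., 50., 0.], [25., -20., 0.]])
truth = np.array([10.0, 10.0, 15.0])
t_obs = np.linalg.norm(stations - truth, axis=1) / V_P + 3.0  # +3 s origin time
xs = np.arange(0.0, 25.0, 5.0)
zs = np.arange(5.0, 30.0, 5.0)
grid = np.array([[x, y, z] for x in xs for y in xs for z in zs])
best, best_rms = locate(stations, t_obs, grid)
```

Replacing the straight-ray travel times with times traced through a 3-D velocity model is exactly the step that the abstract shows to be decisive for offshore events.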

  5. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
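The RMS voltage parameter mentioned above is straightforward to compute for one lead. The sketch below isolates the 150-250 Hz band with a brick-wall FFT filter, which is a simplification standing in for the study's actual bandpass, and the sampling rate is an assumed value.

```python
import numpy as np

FS = 1000.0  # assumed sampling rate, Hz

def hf_qrs_rms(lead, lo=150.0, hi=250.0):
    """RMS voltage of the 150-250 Hz content of one ECG lead sampled
    over the QRS interval, using a simple FFT brick-wall bandpass."""
    spec = np.fft.rfft(lead)
    freqs = np.fft.rfftfreq(len(lead), d=1.0 / FS)
    spec[(freqs < lo) | (freqs > hi)] = 0.0   # zero out-of-band bins
    hf = np.fft.irfft(spec, n=len(lead))
    return np.sqrt(np.mean(hf ** 2))

# Synthetic lead: large 50 Hz component plus a small 200 Hz component
t = np.arange(1000) / FS
lead = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 200 * t)
rms = hf_qrs_rms(lead)
```

Summing this quantity over all 12 leads gives a summed RMSV of the kind compared between patients and controls in the abstract.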

  6. Skylab short-lived event alert program

    NASA Technical Reports Server (NTRS)

    Citron, R. A.

    1974-01-01

    During the three manned Skylab missions, the Center for Short-Lived Phenomena (CSLP) reported a total of 39 significant events to the Johnson Space Center (JSC) as part of the Skylab Short-Lived Event Alert Program. The telegraphed daily status reports included the names and locations of the events, the track number and revolution number during which the event could be observed, the time (GMT) to within plus or minus 2 sec when Skylab was closest to the event area, and the light condition (daylight or darkness) at that time and place. The messages sent to JSC during the Skylab 4 mission also included information pertaining to ground-truth studies and observations being conducted on the events. Photographic priorities were assigned for each event.

  7. FFTF Asbestos Location Tracking Program

    SciTech Connect

    Reynolds, J.A.

    1994-09-15

    An Asbestos Location Tracking Program was prepared to list, locate, and determine asbestos content, and to provide a "good faith" baseline for yearly condition inspections of the FFTF plant, buildings, and grounds.

  8. Gaze location prediction for broadcast football video.

    PubMed

    Cheng, Qin; Agrafiotis, Dimitris; Achim, Alin M; Bull, David R

    2013-12-01

    The sensitivity of the human visual system decreases dramatically with increasing distance from the fixation location in a video frame. Accurate prediction of a viewer's gaze location has the potential to improve bit allocation, rate control, error resilience, and quality evaluation in video compression. Commercially, delivery of football video content is of great interest because of the very high number of consumers. In this paper, we propose a gaze location prediction system for high definition broadcast football video. The proposed system uses knowledge about the context, extracted through analysis of a gaze tracking study that we performed, to build a suitable prior map. We further classify the complex context into different categories through shot classification thus allowing our model to prelearn the task pertinence of each object category and build the prior map automatically. We thus avoid the limitation of assigning the viewers a specific task, allowing our gaze prediction system to work under free-viewing conditions. Bayesian integration of bottom-up features and top-down priors is finally applied to predict the gaze locations. Results show that the prediction performance of the proposed model is better than that of other top-down models that we adapted to this context. PMID:23996558
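The Bayesian integration of bottom-up features and top-down priors reduces, in its simplest form, to a pointwise product of normalised maps. The function below is a hypothetical minimal stand-in for illustration, not the paper's model, and the maps used are synthetic.

```python
import numpy as np

def predict_gaze(saliency, prior, eps=1e-12):
    """Pointwise product of a bottom-up saliency map and a top-down
    prior map, normalised to a posterior; returns the posterior map
    and the predicted fixation as (row, col)."""
    post = saliency * prior + eps
    post = post / post.sum()
    return post, np.unravel_index(np.argmax(post), post.shape)

# Two salient spots; the prior (e.g. a task-relevant region) favours one
saliency = np.zeros((10, 10))
saliency[2, 2] = 1.0
saliency[7, 7] = 0.9
prior = np.ones((10, 10))
prior[7, 7] = 5.0
post, fix = predict_gaze(saliency, prior)
```

The shot-classification step in the paper effectively selects which prior map enters this product for a given frame.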

  9. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation.
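The "Minimum Difference" lookup at the core of delta T mapping can be sketched directly: each training grid point stores a vector of inter-sensor arrival-time differences, and a new AE event is assigned to the grid point whose stored vector is closest to the measured one. The sensor layout, wavespeed, and grid below are invented for illustration.

```python
import numpy as np

def locate_delta_t(training_dt, grid_points, measured_dt):
    """Return the training grid point whose stored delta-T vector is
    closest (summed absolute difference) to the measured one."""
    diff = np.abs(training_dt - measured_dt).sum(axis=1)
    return grid_points[np.argmin(diff)]

# Invented layout: 4 sensors on a 100 x 100 plate, constant wavespeed
sensors = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.]])
SPEED = 5.0  # assumed wavespeed, arbitrary units

def dt_vector(p):
    d = np.linalg.norm(sensors - p, axis=1) / SPEED
    return d[1:] - d[0]   # arrival-time differences relative to sensor 0

grid_points = np.array([[x, y] for x in range(10, 100, 20)
                        for y in range(10, 100, 20)], dtype=float)
training_dt = np.array([dt_vector(p) for p in grid_points])
measured_dt = dt_vector(np.array([50.0, 50.0])) + 0.01  # small timing error
loc = locate_delta_t(training_dt, grid_points, measured_dt)
```

The point of delta T mapping is that the training vectors come from artificial sources on the real structure, so geometric features and anisotropy are captured without assuming a constant wavespeed as TOA does; here the constant-speed geometry merely generates the synthetic training map.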

  10. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
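For the simplest possible network, a one-species birth-death process, the truncated dCME can be solved directly as a linear system, which is the kind of computation the ACME method scales up with its multi-finite buffers. The rates and buffer size below are illustrative.

```python
import numpy as np

def steady_state_birth_death(k, gamma, n_max):
    """Steady state of a birth-death dCME on the finite buffer 0..n_max.

    Build the generator A (columns sum to zero, dp/dt = A p) and solve
    A p = 0 with one balance row replaced by the normalisation sum(p)=1."""
    n = n_max + 1
    A = np.zeros((n, n))
    for i in range(n):
        if i < n_max:
            A[i + 1, i] += k          # birth: i -> i+1 at rate k
            A[i, i] -= k
        if i > 0:
            A[i - 1, i] += gamma * i  # death: i -> i-1 at rate gamma*i
            A[i, i] -= gamma * i
    M = np.vstack([A[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p

p = steady_state_birth_death(2.0, 1.0, 30)  # known answer: Poisson(2)
```

This toy system has the known Poisson steady state, which is exactly the kind of closed-form check that quantified truncation-error bounds generalise to networks with no known solution.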

  11. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  12. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  13. DIORAMA Location Type User's Guide

    SciTech Connect

    Terry, James Russell

    2015-01-29

    The purpose of this report is to present the current design and implementation of the DIORAMA location type object (LocationType) and to provide examples and use cases. The LocationType object is included in the diorama-app package in the diorama::types namespace. Abstractly, the object is intended to capture the full time history of the location of an object or reference point. For example, a location may be specified as a near-Earth orbit in terms of a two-line element set, in which case the location type is capable of propagating the orbit both forward and backward in time to provide a location for any given time. Alternatively, the location may be specified as a fixed set of geodetic coordinates (latitude, longitude, and altitude), in which case the geodetic location of the object is expected to remain constant for all time. From an implementation perspective, the location type is defined as a union of multiple independent objects defined in the DIORAMA tle library. Types presently included in the union are listed and described in subsections below, and all conversions or transformations between these location types are handled by utilities provided by the tle library with the exception of the "special-values" location type.

  14. Spring loaded locator pin assembly

    DOEpatents

    Groll, Todd A.; White, James P.

    1998-01-01

    This invention deals with spring-loaded locator pins, sometimes referred to as captured pins: a mechanism that locks two items together with a spring-loaded pin that drops into a locator hole on the work piece.

  15. Spring loaded locator pin assembly

    DOEpatents

    Groll, T.A.; White, J.P.

    1998-03-03

    This invention deals with spring-loaded locator pins, sometimes referred to as captured pins: a mechanism that locks two items together with a spring-loaded pin that drops into a locator hole on the work piece. 5 figs.

  16. Impact-Locator Sensor Panels

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Byers, Terry; Gibbons, Frank

    2008-01-01

    Electronic sensor systems for detecting and locating impacts of rapidly moving particles on spacecraft have been invented. Systems of this type could also be useful on Earth in settings in which the occurrence of impacts and/or the locations of impacts are not immediately obvious and there are requirements to detect and quickly locate impacts to prevent or minimize damage.

  17. Tools for Accurate and Efficient Analysis of Complex Evolutionary Mechanisms in Microbial Genomes. Final Report

    SciTech Connect

    Nakhleh, Luay

    2014-03-12

    I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic as well as biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.

  18. The use of waveform cross correlation for creation of an accurate catalogue of mining explosions within the Russian platform using joint capabilities of seismic array Miknevo and IMS arrays

    NASA Astrophysics Data System (ADS)

    Rozhkov, M.; Kitov, I.; Sanina, I.

    2014-12-01

    For seismic monitoring, the task of finding and identifying the sources of various seismic events becomes more and more difficult as the size (magnitude, yield, energy) of these events decreases. Firstly, the number of seismic events dramatically increases with falling magnitude - approximately by an order of magnitude per unit of seismic magnitude. Secondly, mining explosions become detectable and represent one of the biggest challenges for monitoring at magnitudes below 3.5 to 4.0. In the current study of mining activity within the Russian platform, we take advantage of the location capabilities and historical bulletins/catalogues of mining explosions recorded by the small-aperture seismic array Mikhnevo (MHVAR) and extensive data from several IMS arrays at regional and far-regional distances from the studied area. The Institute of Geosphere Dynamics (IDG) of the Russian Academy of Sciences has run seismic array MHVAR (54.950 N; 37.767 E) since 2004. Approximately 50 areas with different levels of mining activity have been identified by MHVAR and reported in the IDG catalogue as mining events. Signals from select mining events detected by MHVAR are sought at IMS arrays. Continuous data from MHVAR and IMS arrays (e.g. AKASG) are processed jointly using the waveform cross correlation technique. This technique allows the detection threshold for repeated events to be reduced by an order of magnitude, as well as accurate location and identification of mining explosions. To achieve the highest performance of cross correlation, we have selected the best sets of waveform templates recorded from a carefully tested set of master events for each of the studied mines. We also test the possibility of using Principal and Independent Component Analysis to produce sets of synthetic templates which best fit the whole set of master events for a given mine.
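The core detection step, sliding a master-event template along continuous data and thresholding the normalised cross-correlation, can be sketched as follows. The data are synthetic; real processing would run per channel, with beamforming across the array and network-level association.

```python
import numpy as np

def cc_detect(data, template, threshold=0.8):
    """Sliding normalised cross-correlation of a master-event template
    against a continuous record; returns sample offsets where the
    correlation coefficient exceeds the threshold."""
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    hits = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        w = w - w.mean()
        denom = np.linalg.norm(w) * t_norm
        if denom > 0 and np.dot(w, t) / denom > threshold:
            hits.append(i)
    return hits

rng = np.random.default_rng(1)
template = rng.normal(size=50)          # stands in for a master-event waveform
data = 0.01 * rng.normal(size=500)      # continuous noise record
data[100:150] += 3.0 * template         # embedded repeat of the master event
hits = cc_detect(data, template)
```

Because the statistic is amplitude-normalised, repeats far weaker than the master event remain detectable, which is the origin of the order-of-magnitude threshold reduction quoted above.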

  19. Cross comparison of four DPRK events

    NASA Astrophysics Data System (ADS)

    Bobrov, Dmitry; Kitov, Ivan; Rozhkov, Mikhail

    2016-04-01

    Seismic signals were detected by the IMS seismic network from four announced underground tests conducted by the DPRK in 2006, 2009, 2013, and 2016. These data allow thorough comparison of relative locations, including depth estimates, and magnitudes using several techniques based on waveform cross correlation. Seismic signals from these events also provide waveform templates for detection of possible aftershocks with magnitudes two to three units lower than the events themselves. We have processed one month of continuous data after each of the four events and detected no aftershocks. Independent Component Analysis based Blind Source Separation was conducted for all events at different stations to compare the robustness of the source function recovery.

  20. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
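The reconstruction step with a median-based slope constraint can be illustrated on linear advection, the model problem used above to define monotonicity preservation. Note that median(0, a, b) equals minmod(a, b), which is the limiter used below; this is a generic MUSCL-type sketch, not the paper's full Euler scheme.

```python
import numpy as np

def minmod(x, y):
    """minmod(a, b) = median(0, a, b): zero at extrema, otherwise the
    smaller-magnitude one-sided difference."""
    return np.where(x * y > 0.0,
                    np.sign(x) * np.minimum(np.abs(x), np.abs(y)), 0.0)

def advect_muscl(u, nu, steps):
    """Second-order upwind (MUSCL-type) update for u_t + a u_x = 0 on a
    periodic grid, with CFL number nu = a*dt/dx in (0, 1]."""
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
        face = u + 0.5 * (1.0 - nu) * s   # reconstructed value at i+1/2
        u = u - nu * (face - np.roll(face, 1))
    return u

u0 = np.zeros(100)
u0[20:40] = 1.0                          # square wave
u1 = advect_muscl(u0.copy(), 0.5, 200)   # travels one full period
```

With the limiter active the solution stays within the initial bounds and is exactly conservative, at the cost of some smearing at the discontinuities; the paper's steepening technique addresses precisely that smearing.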

  1. Locating influential nodes via dynamics-sensitive centrality

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Lin, Jian-Hong; Guo, Qiang; Zhou, Tao

    2016-02-01

    Locating influential nodes in complex networks is a problem of great theoretical and practical significance. In this paper, we present a dynamics-sensitive (DS) centrality by integrating topological features and dynamical properties. The DS centrality can be directly applied in locating influential spreaders. According to the empirical results on four real networks for both susceptible-infected-recovered (SIR) and susceptible-infected (SI) spreading models, the DS centrality is more accurate than degree, k-shell index and eigenvector centrality.
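A sketch of how a dynamics-sensitive score might be computed: accumulate the expected reach of a linearised SI-type process over t steps. The sum-of-matrix-powers form below is an assumption for illustration; the exact functional form is defined in the paper.

```python
import numpy as np

def ds_centrality(A, beta, t):
    """Assumed dynamics-sensitive score: for each node, sum the entries
    of (beta*A)^m applied to the all-ones vector over m = 1..t, i.e. the
    expected influence accumulated over t spreading steps."""
    score = np.zeros(A.shape[0])
    v = np.ones(A.shape[0])
    for _ in range(t):
        v = beta * (A @ v)
        score += v
    return score

A = np.zeros((5, 5))
A[0, 1:] = 1.0
A[1:, 0] = 1.0          # star graph: node 0 is the hub
score = ds_centrality(A, 0.1, 3)
```

Unlike purely topological measures, the score depends on the spreading rate beta and horizon t, which is what makes the centrality "dynamics-sensitive".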

  2. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  3. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS to be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  4. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf°(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).

  6. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  7. Challenges in Forecasting SEP Events

    NASA Astrophysics Data System (ADS)

    Luhmann, Janet; Mays, M. Leila; Odstrcil, Dusan; Bain, Hazel; Li, Yan; Leske, Richard; Cohen, Christina

    2015-04-01

    A long-standing desire of space weather prediction providers has been the ability to forecast SEP (Solar Energetic Particle) events as a part of their offerings. SEPs can have deleterious effects on the space environment and space hardware that also impact human exploration missions. Developments of observationally driven, physics-based models in the last solar cycle have made it possible to use solar magnetograms and coronagraph images to simulate time series of upstream parameters similar in content to those obtained by L1 spacecraft, up to a month in advance for solar wind structure and up to days in advance for interplanetary Coronal Mass Ejection (ICME) driven shocks. However, SEPs have been missing from these predictions. Because SEP event modeling requires different physical considerations, it has typically been approached with cosmic ray transport concepts and treatments. However, many extra complications arise because of the moving, evolving nature of the ICME shock source of the largest events. In general, a realistic SEP event model for these so-called 'gradual' events requires an accurate description of the time-dependent 3D heliosphere as an underlying framework. We describe some applications of an approach to SEP event simulations that uses the widely applied ENLIL heliospheric model to describe both the underlying solar wind and ICME shock characteristics. Experimentation with this set-up illustrates the importance of knowing the shock connectivity to the observer, and of the need to include even non-observer-impacting CMEs in the heliospheric model. It also provides a possible path forward toward the goal of having routine SEP forecasts together with the other heliospheric predictions.

  8. Efficient and Accurate Indoor Localization Using Landmark Graphs

    NASA Astrophysics Data System (ADS)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error less than 2.5 meters, which outperforms the other two DR-based methods.
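A minimal sketch of the landmark-graph idea: dead reckoning accumulates drift between landmarks, and a detected landmark event snaps the position to a graph node whose edge heading matches the walked heading. Landmark names, coordinates, and the snap rule below are all hypothetical; the paper's actual fusion algorithm is more involved:

```python
import math

# Hypothetical landmark graph: nodes are landmarks with known coordinates,
# edges are accessible paths annotated with a heading in degrees
# (0 deg = +x direction, 90 deg = +y direction).
landmarks = {"door_A": (0.0, 0.0), "turn_B": (10.0, 0.0), "stairs_C": (10.0, 8.0)}
graph = {"door_A": [("turn_B", 0.0)],
         "turn_B": [("stairs_C", 90.0)],
         "stairs_C": []}

def dead_reckon(pos, heading_deg, step_len):
    """One pedestrian dead-reckoning update from a detected step."""
    h = math.radians(heading_deg)
    return (pos[0] + step_len * math.cos(h), pos[1] + step_len * math.sin(h))

def snap_to_landmark(current_node, heading_deg, tol=45.0):
    """On a landmark event (e.g. a turn is sensed), pick the graph successor
    whose edge heading matches the walked heading within a tolerance;
    returns (node, known_position) or keeps the current node."""
    for nxt, edge_heading in graph[current_node]:
        if abs((heading_deg - edge_heading + 180.0) % 360.0 - 180.0) < tol:
            return nxt, landmarks[nxt]
    return current_node, None
```

Snapping to a matched landmark resets the accumulated dead-reckoning error to the landmark's known coordinates, which is why the fusion needs no extra hardware.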

  9. Identifying structures in clouds of induced microseismic events

    SciTech Connect

    Fehler, M.; House, L.; Phillips, W.S.

    1997-07-01

    A method for finding improved relative locations of microearthquakes accompanying fluid production and injection is presented. The method is based on the assumption that the microearthquake locations are more clustered than found when events are located using conventional techniques. By allowing the rms misfit between measured and predicted arrival times to increase if events move closer together, the authors find that there is more structure in the pattern of seismic locations. The method is demonstrated using a dataset of microearthquakes induced by hydraulic fracturing. The authors find that structures previously recovered by using relative arrival times of similar-waveform events to improve relative locations can also be recovered with the new inversion method, without the laborious repicking procedure. The method provides improved relative locations and hence an improved image of the structure within the seismic zone, which may allow a better relation to be established between microearthquake locations and zones of increased fluid permeability.

  10. The Chelyabinsk event

    NASA Astrophysics Data System (ADS)

    Borovička, Jiri

    2015-08-01

    On February 15, 2013, 3:20 UT, an asteroid about 19 meters in size, with a mass of about 12,000 metric tons, entered the Earth's atmosphere unexpectedly near the border of Kazakhstan and Russia. It was the largest confirmed Earth impactor since the Tunguska event in 1908. The body moved approximately westwards with a speed of 19 km/s, on a trajectory inclined 18 degrees to the surface, creating a fireball of steadily increasing brightness. Eleven seconds after the first sightings, the fireball reached its maximum brightness. At that point, it was located less than 40 km south from Chelyabinsk, a Russian city of population more than one million, at an altitude of 30 km. For people directly underneath, the fireball was 30 times brighter than the Sun. The cosmic body disrupted into fragments; the largest of them was visible for another five seconds before it disappeared at an altitude of 12.5 km, when it had decelerated to 3 km/s. Fifty-six seconds later, that ~600 kg fragment landed in Lake Chebarkul and created an 8 m wide hole in the ice. More material, however, remained in the atmosphere, forming a dust trail up to 2 km wide and extending along the fireball trajectory from altitude 18 to 70 km. People observing the dust trail from Chelyabinsk and other places were surprised by the arrival of a very strong blast wave 90 - 150 s after the fireball passage (depending on location). The wave, produced by the supersonic flight of the body, broke ~10% of windows in Chelyabinsk (~40% of buildings were affected). More than 1600 people were injured, mostly from broken glass. Small meteorites landed in an area 60 km long and several km wide and caused no damage. The meteorites were classified as LL ordinary chondrites and were notable for the presence of two phases, light and dark. The dust left in the atmosphere circled the Earth within a few days and formed a ring around the northern hemisphere. The whole event was well documented by video cameras, seismic and infrasonic

  11. A highly accurate heuristic algorithm for the haplotype assembly problem

    PubMed Central

    2013-01-01

    Background Single nucleotide polymorphisms (SNPs) are the most common form of genetic variation in human DNA. The sequence of SNPs in each of the two copies of a given chromosome in a diploid organism is referred to as a haplotype. Haplotype information has many applications such as disease gene diagnosis, drug design, etc. The haplotype assembly problem is defined as follows: Given a set of fragments sequenced from the two copies of a chromosome of a single individual, and their locations in the chromosome, which can be pre-determined by aligning the fragments to a reference DNA sequence, the goal here is to reconstruct two haplotypes (h1, h2) from the input fragments. Existing algorithms do not work well when the error rate of fragments is high. Here we design an algorithm that can give accurate solutions, even if the error rate of fragments is high. Results We first give a dynamic programming algorithm that can give exact solutions to the haplotype assembly problem. The time complexity of the algorithm is O(n × 2^t × t), where n is the number of SNPs, and t is the maximum coverage of a SNP site. The algorithm is slow when t is large. To solve the problem when t is large, we further propose a heuristic algorithm on the basis of the dynamic programming algorithm. Experiments show that our heuristic algorithm can give very accurate solutions. Conclusions We have tested our algorithm on a set of benchmark datasets. Experiments show that our algorithm can give very accurate solutions. It outperforms most of the existing programs when the error rate of the input fragments is high. PMID:23445458
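For illustration only, the minimum-error-correction (MEC) style objective behind haplotype assembly can be brute-forced on toy instances. This enumerates all 2^n haplotype pairs, which is exactly what the paper's dynamic program and heuristic are designed to avoid:

```python
from itertools import product

def assemble_haplotypes(fragments, n_snps):
    """Illustrative brute force: try every haplotype pair (h, complement of h),
    assign each fragment to the closer haplotype, and keep the pair needing
    the fewest error corrections. Exponential in n_snps -- a sketch of the
    objective, not the paper's algorithm."""
    def mismatches(frag, hap):
        return sum(1 for i, allele in frag.items() if hap[i] != allele)

    best = None
    for h1 in product((0, 1), repeat=n_snps):
        h2 = tuple(1 - a for a in h1)
        cost = sum(min(mismatches(f, h1), mismatches(f, h2)) for f in fragments)
        if best is None or cost < best[0]:
            best = (cost, h1, h2)
    return best

# Each fragment maps SNP index -> observed allele; this small instance is
# error-free, so the minimum correction cost is zero.
frags = [{0: 0, 1: 0}, {1: 0, 2: 1}, {0: 1, 1: 1}, {1: 1, 2: 0}, {2: 1}]
cost, h1, h2 = assemble_haplotypes(frags, 3)
```

With noisy reads the minimum cost becomes positive, and the exponential blow-up in both n and the per-site coverage t motivates the O(n × 2^t × t) dynamic program and its heuristic relaxation.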

  12. The importance of accurate convergence in addressing stereoscopic visual fatigue

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.

    2015-03-01

    Visual fatigue (asthenopia) continues to be a problem in extended viewing of stereoscopic imagery. Poorly converged imagery may contribute to this problem. In 2013, the Author reported that in a study sample a surprisingly high number of 3D feature films released as stereoscopic Blu-rays contained obvious convergence errors [1]. The placement of stereoscopic image convergence can be an "artistic" call, but upon close examination, the sampled films seemed to have simply missed their intended convergence location. This failure may be because some stereoscopic editing tools do not have the necessary fidelity to enable a 3D editor to obtain a high degree of image alignment or set an exact point of convergence. Compounding this matter further is the fact that a large number of stereoscopic editors may not believe that pixel-accurate alignment and convergence is necessary. The Author asserts that setting a pixel-accurate point of convergence on an object at the start of any given stereoscopic scene will improve the viewer's ability to fuse the left and right images quickly. The premise is that stereoscopic performance (acuity) increases when an accurately converged object is available in the image for the viewer to fuse immediately. Furthermore, this increased viewer stereoscopic performance should reduce the amount of visual fatigue associated with longer-term viewing because less mental effort will be required to perceive the imagery. To test this concept, we developed special stereoscopic imagery to measure viewer visual performance with and without specific objects for convergence. The Company Team conducted a series of visual tests with 24 participants between 25 and 60 years of age. This paper reports the results of these tests.

  13. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    SciTech Connect

    Helmberger, D; Tromp, J; Rodgers, A

    2007-07-16

    Comprehensive test ban monitoring in terms of location and discrimination has progressed significantly in recent years. However, the characterization of sources and the estimation of low yields remains a particular challenge. As the recent Korean shot demonstrated, we can probably expect to have a small set of teleseismic, far-regional and high-frequency regional data to analyze in estimating the yield of an event. Since stacking helps to bring signals out of the noise, it becomes useful to conduct comparable analyses on neighboring events, earthquakes in this case. If these auxiliary events have accurate moments and source descriptions, we have a means of directly comparing effective source strengths. Although we will rely on modeling codes, 1D, 2D, and 3D, we will also apply a broadband calibration procedure to use longer-period (P>5s) waveform data to calibrate short-period (P between .5 to 2 Hz) and high-frequency (P between 2 to 10 Hz) data as path-specific station corrections from well-known regional sources. We have expanded our basic Cut-and-Paste (CAP) methodology to include not only timing shifts but also amplitude (f) corrections at recording sites. The name of this method was derived from source inversions that allow timing shifts between 'waveform segments' (or cutting the seismogram up and re-assembling) to correct for crustal variation. For convenience, we will refer to these f-dependent refinements as CAP+ for (SP) and CAP++ for still higher frequency. These methods allow the retrieval of source parameters using only P-waveforms where radiation patterns are obvious, as demonstrated in this report, and are well suited for explosion P-wave data. The method is easily extended to all distances because it uses Green's functions, although there may be some changes required in t* to adjust for offsets between local vs. teleseismic distances. In short, we use a mixture of model-dependent and empirical corrections to tackle the path effects.

  14. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior-accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply-supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  15. Epicenter location by abnormal ULF electromagnetic emissions

    NASA Astrophysics Data System (ADS)

    Du, Aimin; Huang, Qinghua; Yang, Shaofeng

    2002-05-01

    Locating the epicenter before an earthquake occurs is very important. We investigated the characteristics of the ULF electromagnetic emissions observed by an east-west chain of stations at Kashi, Anxi, and Beijing before the ML = 7.1 Hetian earthquake, Xinjiang, China, on November 19, 1996. We define the polarization angle as the angle between the east direction of the geomagnetic field and the major axis of the polarization ellipse, obtained from the filtered data of the two horizontal components. The source of earthquake-related ULF electromagnetic emissions probably lies in the direction perpendicular to the major axis of polarization. We located the source at the intersection of the two source directions obtained at the Kashi and Anxi stations on November 18, 1996. The estimated source is consistent with the epicenter of the Hetian event.
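The final triangulation step, intersecting the two source directions, is simple plane geometry. A sketch with assumed map-style azimuths (clockwise from north); the station coordinates and angles below are synthetic, not the paper's data:

```python
import math

def bearing_intersection(p1, az1_deg, p2, az2_deg):
    """Locate a source as the intersection of two bearing lines, one per
    station. Azimuths are clockwise-from-north headings converted to
    (east, north) unit vectors; solves p1 + t1*d1 = p2 + t2*d2."""
    def unit(az_deg):
        a = math.radians(az_deg)
        return (math.sin(a), math.cos(a))  # east, north components

    d1, d2 = unit(az1_deg), unit(az2_deg)
    # 2x2 linear system [d1, -d2] [t1, t2]^T = p2 - p1, via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Station at the origin sighting the source to the north-east (45 deg),
# second station 10 km east sighting north-west (315 deg): source at (5, 5).
src = bearing_intersection((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
```

In the paper's scheme each bearing would be the direction perpendicular to the polarization major axis at that station.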

  16. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of the source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  17. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  18. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  19. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  20. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  1. Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks

    NASA Astrophysics Data System (ADS)

    Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.

    Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is important to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high-resolution location of a vehicle by refining low-resolution location input. We also investigate simulation results and evaluate the quality of the intrusion-tolerant location service.

  2. Time forecast of a break-off event from a hanging glacier

    NASA Astrophysics Data System (ADS)

    Faillettaz, Jérome; Funk, Martin; Vagliasindi, Marco

    2016-06-01

    A cold hanging glacier located on the south face of the Grandes Jorasses (Mont Blanc, Italy) broke off on 23 and 29 September 2014, with a total estimated ice volume of 105,000 m3. Thanks to accurate surface displacement measurements taken up to the final break-off, this event was successfully predicted 10 days in advance, enabling local authorities to take the necessary safety measures. The break-off event also confirmed that surface displacements experienced a power-law acceleration with superimposed log-periodic oscillations prior to the final rupture. This paper describes the methods used to achieve a satisfactory time forecast in real time and demonstrates, through a retrospective analysis, their potential for the development of real-time early-warning systems.
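The power-law part of such a forecast can be sketched as a grid search over the rupture time t_c, fitting a line in log-log space for each trial value. The data below are synthetic, and the paper's method additionally fits the log-periodic component:

```python
import numpy as np

def forecast_failure_time(t, rate, tc_grid):
    """Fit rate(t) = a * (t_c - t)**(-p) by grid-searching t_c and doing a
    least-squares line fit in log-log space; returns the t_c with the
    smallest residual."""
    best = (np.inf, None)
    for tc in tc_grid:
        if tc <= t.max():
            continue  # t_c must lie beyond the last observation
        x = np.log(tc - t)
        y = np.log(rate)
        p, a = np.polyfit(x, y, 1)              # y = p*x + a
        resid = np.sum((y - (p * x + a)) ** 2)
        if resid < best[0]:
            best = (resid, tc)
    return best[1]

# Synthetic displacement-rate data accelerating toward rupture at t_c = 100.
t = np.linspace(0.0, 90.0, 50)
rate = 2.0 * (100.0 - t) ** -0.8
tc_est = forecast_failure_time(t, rate, np.arange(91.0, 120.0, 0.5))
```

On noiseless synthetic data the search recovers the true rupture time; with real, noisy displacement series the forecast sharpens as the final acceleration develops, which is what made the 10-day advance warning possible.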

  3. Global Seismic Event Detection Using Surface Waves: 15 Possible Antarctic Glacial Sliding Events

    NASA Astrophysics Data System (ADS)

    Chen, X.; Shearer, P. M.; Walker, K. T.; Fricker, H. A.

    2008-12-01

    To identify overlooked or anomalous seismic events not listed in standard catalogs, we have developed an algorithm to detect and locate global seismic events using intermediate-period (35-70 s) surface waves. We apply our method to continuous vertical-component seismograms from the global seismic networks as archived in the IRIS UV FARM database from 1997 to 2007. We first bandpass filter the seismograms, apply automatic gain control, and compute envelope functions. We then examine 1654 target event locations defined at 5 degree intervals and stack the seismogram envelopes along the predicted Rayleigh-wave travel times. The resulting function has spatial and temporal peaks that indicate possible seismic events. We visually check these peaks using a graphical user interface to eliminate artifacts and assign an overall reliability grade (A, B or C) to the new events. We detect 78% of events in the Global Centroid Moment Tensor (CMT) catalog. However, we also find 840 new events not listed in the PDE, ISC and REB catalogs. Many of these new events were previously identified by Ekstrom (2006) using a different Rayleigh-wave detection scheme. Most of these new events are located along oceanic ridges and transform faults. Some new events can be associated with volcanic eruptions such as the 2000 Miyakejima sequence near Japan and others with apparent glacial sliding events in Greenland (Ekstrom et al., 2003). We focus our attention on 15 events detected from near the Antarctic coastline and relocate them using a cross-correlation approach. The events occur in three groups, which are well separated from areas of cataloged earthquake activity. We speculate that these are iceberg calving and/or glacial sliding events, and hope to test this by inverting for their source mechanisms and examining remote sensing data from their source regions.
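The envelope-stacking detector can be sketched as follows, using an FFT-based Hilbert transform and synthetic single-component data. This is a simplification of the paper's pipeline (one trial location instead of the 1654-point grid, no gain control or grading):

```python
import numpy as np

def envelope(trace):
    """Signal envelope via the analytic signal, built with an FFT-based
    Hilbert transform (stdlib-free equivalent of scipy.signal.hilbert)."""
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def stack_detector(traces, travel_times, dt):
    """Shift each station's envelope back by its predicted surface-wave
    travel time to a trial source location, then stack: a coherent event
    produces a peak at its origin-time sample."""
    n = len(traces[0])
    stack = np.zeros(n)
    for trace, tt in zip(traces, travel_times):
        shift = int(round(tt / dt))
        stack[:n - shift] += envelope(trace)[shift:]
    return stack

# Synthetic test: three stations record the same burst, each delayed by its
# predicted travel time from the trial source; origin time is sample 200.
dt = 1.0
travel_times = [5.0, 10.0, 15.0]
pulse = np.sin(2 * np.pi * 0.1 * np.arange(30))
traces = []
for tt in travel_times:
    tr = np.zeros(400)
    onset = 200 + int(tt / dt)
    tr[onset:onset + 30] = pulse
    traces.append(tr)
stack = stack_detector(traces, travel_times, dt)
peak = int(np.argmax(stack))  # near the origin-time sample
```

Scanning this stack over a grid of trial locations and origin times yields the spatial and temporal peaks the abstract describes.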

  4. The Chelyabinsk Airburst Event

    NASA Astrophysics Data System (ADS)

    Boslough, Mark

    2013-10-01

    On Feb. 15, 2013, an asteroid exploded about 40 km from the Russian city of Chelyabinsk. Its proximity led to many injuries and widespread blast damage, but also yielded a plethora of data, providing means to determine the projectile size and entry parameters, and develop a self-consistent model. We will present results of the first physics simulations to be initialized with accurate energy deposition derived from observations. The best estimate of the explosive yield is 400-500 kilotons, making Chelyabinsk the most powerful such event observed since Tunguska (3-5 megatons). Analysis of video combined with subsequent on-site stellar calibrations enables precise estimates of entry velocity (19 km/s), angle (17° elevation) and altitude of peak brightness (29 km). This implies a pre-entry diameter of ~20 m and mass of ~12,000 tonnes. Satellite sensors recorded the emission peak at 03:20:33 UT, with a total radiated energy of 3.75×10^14 J (~90 kilotons). A typical bolide luminous efficiency of 20% implies a total energy of ~450 kilotons, consistent with infrasound and other observations. The maximum radiant intensity was 2.7×10^13 W/sr, corresponding to a magnitude of -28. The shallow entry angle led to a long bolide duration (16.5 s), and energy was deposited over hundreds of km, leading to an extended, near-horizontal, linear explosion. The blast was distributed over a large area, and was much weaker than for a steep entry and a more concentrated explosion closer to the surface. The orientation also led to different phenomena than expected for a more vertical entry. There was no ballistic plume as observed from SL9 impacts (45°) or calculated for Tunguska (35°). Instead, buoyant instabilities grew into mushroom clouds and bifurcated the trail into two contra-rotating vortices. Chelyabinsk and Tunguska are “once-per-century” and “once-per-millennium” events, respectively. These outliers imply that the frequency of large airbursts is underestimated. Models also
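The yield figures quoted in this abstract can be reproduced from its own numbers, assuming the standard TNT equivalence of 4.184×10^12 J per kiloton:

```python
# Back-of-envelope check of the quoted Chelyabinsk energetics:
# radiated energy 3.75e14 J, assumed 20% bolide luminous efficiency.
J_PER_KILOTON = 4.184e12          # 1 kiloton TNT equivalent in joules

radiated_kt = 3.75e14 / J_PER_KILOTON   # radiated energy in kilotons (~90)
total_kt = radiated_kt / 0.20           # total energy at 20% efficiency (~450)
```

The result lands squarely in the 400-500 kiloton best-estimate range given above.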

  5. Isotopic signature of extreme precipitation events in the western U.S. and associated phases of Arctic and tropical climate modes

    NASA Astrophysics Data System (ADS)

    McCabe-Glynn, Staryl; Johnson, Kathleen R.; Strong, Courtenay; Zou, Yuhao; Yu, Jin-Yi; Sellars, Scott; Welker, Jeffrey M.

    2016-08-01

    Extreme precipitation events, commonly associated with "Atmospheric Rivers," are projected to increase in frequency and severity in western North America; however, the intensity and landfall position are difficult to forecast accurately. As the isotopic signature of precipitation has been widely utilized as a tracer of the hydrologic cycle and could potentially provide information about key physical processes, we utilize both climate and precipitation isotope data to investigate these events in California from 2001 to 2011. Although individual events have extreme isotopic signatures linked to associated circulation anomalies, the composite across all events unexpectedly resembles the weighted mean for the entire study period, reflecting diverse moisture trajectories and associated teleconnection phases. We document that 90% of events reaching this location occurred during the negative Arctic Oscillation, suggesting a possible link with higher-latitude warming. We also utilize precipitation data of extreme precipitation events across the entire western U.S. to investigate the relationships between key tropical and Arctic climate modes known to influence precipitation in this region. Results indicate that the wettest conditions occur when the negative Arctic Oscillation, negative Pacific/North American pattern, and positive Southern Oscillation are in sync and that precipitation has increased in the southwestern U.S. and decreased in the northwestern U.S. relative to this phase combination's 1979-2011 climatology. Furthermore, the type of El Niño-Southern Oscillation event, Central Pacific or Eastern Pacific, influences the occurrence, landfall location, and isotopic composition of precipitation.

  6. Seismicity in Pennsylvania: Evidence for Anthropogenic Events?

    NASA Astrophysics Data System (ADS)

    Homman, K.; Nyblade, A.

    2015-12-01

    The deployment and operation of the USArray Transportable Array (TA) and the PASEIS (XY) seismic networks in Pennsylvania during 2013 and 2014 provide a unique opportunity for investigating the seismicity of Pennsylvania. These networks, along with several permanent stations in Pennsylvania, resulted in a total of 104 seismometers in and around Pennsylvania that have been used in this study. Event locations were first obtained with Antelope Environmental Monitoring Software using P-wave arrival times. Arrival times were hand picked using a 1-5 Hz bandpass filter to within 0.1 seconds. Events were then relocated using a velocity model developed for Pennsylvania and the HYPOELLIPSE location code. In this study, 1593 seismic events occurred between February 2013 and December 2014 in Pennsylvania. These events ranged between magnitude (ML) 1.04 and 2.89 with an average ML of 1.90. Locations of the events occur across the state in many areas where no seismicity has been previously reported. Preliminary results indicate that most of these events are related to mining activity. Additional work using cross-correlation techniques is underway to examine a number of event clusters for evidence of hydraulic fracturing or wastewater injection sources.

  7. High-precision differential earthquake location in 3-D models: evidence for a rheological barrier controlling the microseismicity at the Irpinia fault zone in southern Apennines

    NASA Astrophysics Data System (ADS)

    De Landro, Grazia; Amoroso, Ortensia; Stabile, Tony Alfredo; Matrullo, Emanuela; Lomax, Antony; Zollo, Aldo

    2015-12-01

    A non-linear, global-search, probabilistic, double-difference earthquake location technique is illustrated. The main advantages of this method are the determination of comprehensive and complete solutions through the probability density function (PDF), the use of differential arrival times as data and the possibility to use a 3-D velocity model both for absolute and double-difference locations, all of which help to obtain accurate differential locations in structurally complex geological media. The joint use of this methodology and an accurate differential time data set allowed us to carry out a high-resolution, earthquake location analysis, which helps to characterize the active fault geometries in the studied region. We investigated the recent microseismicity occurring at the Campanian-Lucanian Apennines in the crustal volume embedding the fault system that generated the 1980 MS 6.9 earthquake in Irpinia. In order to obtain highly accurate seismicity locations, we applied the method to the P and S arrival time data set from 1312 events (ML < 3.1) that occurred from August 2005 to April 2011 and used the 3-D P- and S-wave velocity models optimized for the area under study. Both manually refined and cross-correlation refined absolute arrival times have been used. The refined seismicity locations show that the events occur in a volume delimited by the faults activated during the 1980 MS 6.9 Irpinia earthquake on subparallel, predominantly normal faults. We find an abrupt interruption of the seismicity across an SW-NE oriented structural discontinuity corresponding to a contact zone between different rheology rock formations (carbonate platform and basin residuals). This 'barrier' appears to be located in the area bounded by the fault segments activated during the first (0 s) and the second (18 s) rupture episodes of the 1980 Irpinia earthquake. We hypothesize that this geometrical barrier could have played a key role during the 1980 Irpinia event, and possibly

  8. Metamemory appraisals in autobiographical event recall.

    PubMed

    Scoboria, Alan; Talarico, Jennifer M; Pascal, Lisa

    2015-03-01

    Two studies examined whether belief in the occurrence of events, recollecting events, and belief in the accuracy of recollections are distinct aspects of autobiographical remembering. In Study 1, 299 student participants received a cue to recall five childhood events, after which they rated each event on these constructs and other characteristics associated with remembering. Structural equation modelling revealed that variance in ratings was best explained by the three anticipated latent variables. In Study 2, an online sample of 1026 adults recalled and rated a childhood event and an event about which they were somehow uncertain. Confirmatory modelling replicated the three latent variables. The relationship of key predictors (perceptual detail, spatial detail, re-experiencing, and event plausibility) to the latent variables confirmed the distinction. These studies demonstrate that belief in occurrence and belief in accuracy appraisals are distinct, the former indexing the truth status of the event and the latter the degree to which the event representation accurately reflects prior experience. Further, they suggest that belief in accuracy indexes the monitoring of the quality of recollections.

  9. Grid-Search Location Methods for Ground-Truth Collection from Local and Regional Seismic Networks

    SciTech Connect

    Schultz, C A; Rodi, W; Myers, S C

    2003-07-24

    The objective of this project is to develop improved seismic event location techniques that can be used to generate more and better quality reference events using data from local and regional seismic networks. Their approach is to extend existing methods of multiple-event location with more general models of the errors affecting seismic arrival time data, including picking errors and errors in model-based travel-times (path corrections). Toward this end, they are integrating a grid-search based algorithm for multiple-event location (GMEL) with a new parameterization of travel-time corrections and a new kriging method for estimating the correction parameters from observed travel-time residuals. Like several other multiple-event location algorithms, GMEL currently assumes event-independent path corrections and is thus restricted to small event clusters. The new parameterization assumes that travel-time corrections are a function of both the event and station location, and builds in source-receiver reciprocity and correlation between the corrections from proximate paths as constraints. The new kriging method simultaneously interpolates travel-time residuals from multiple stations and events to estimate the correction parameters as functions of position. They are currently developing the algorithmic extensions to GMEL needed to combine the new parameterization and kriging method with the simultaneous location of events. The result will be a multiple-event location method which is applicable to non-clustered, spatially well-distributed events. They are applying the existing components of the new multiple-event location method to a data set of regional and local arrival times from Nevada Test Site (NTS) explosions with known origin parameters. Preliminary results show the feasibility and potential benefits of combining the location and kriging techniques. They also show some preliminary work on generalizing the error model used in GMEL with the use of mixture
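As a rough illustration of the grid-search idea behind methods like GMEL (not the actual algorithm, which handles multiple events, path corrections, and kriged correction surfaces), a minimal single-event epicenter search over a uniform-velocity model might look like the following sketch. All station geometry and the velocity are invented for the example.

```python
import numpy as np

def grid_search_locate(stations, arrivals, v=6.0, extent=50.0, step=0.5):
    """Scan candidate epicenters on a grid and return the node minimizing
    the RMS arrival-time residual.

    stations: (n, 2) array of station coordinates in km
    arrivals: (n,) observed arrival times in s (include unknown origin time)
    """
    xs = np.arange(-extent, extent + step, step)
    best = (None, np.inf)
    for x in xs:
        for y in xs:
            d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            r = arrivals - d / v
            r -= r.mean()          # origin time is a nuisance parameter
            rms = np.sqrt(np.mean(r ** 2))
            if rms < best[1]:
                best = ((x, y), rms)
    return best

# synthetic check: event at (10, -5) km, origin time 12.3 s, v = 6 km/s
sta = np.array([[30.0, 0.0], [-20.0, 25.0], [0.0, -30.0], [-25.0, -20.0]])
true = np.array([10.0, -5.0])
t_obs = np.hypot(sta[:, 0] - true[0], sta[:, 1] - true[1]) / 6.0 + 12.3
loc, rms = grid_search_locate(sta, t_obs)
print(loc, rms)
```

The synthetic event is recovered exactly because it sits on a grid node; with noisy picks, the residual surface (or, in GMEL's probabilistic framing, a likelihood over the grid) would be examined rather than just its minimum.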

  10. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
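The final clause describes a straightforward conversion from sensor output to cylindrical coordinates. As a hedged sketch (the function name, encoder resolution, and dimensions are invented, not taken from the patent), the mapping might look like:

```python
import math

def probe_tip_cylindrical(encoder_counts, counts_per_rev, arm_length, z_offset):
    """Convert revolute-joint encoder output to the probe tip's position
    in cylindrical coordinates (r, theta, z): r is fixed by the probe-arm
    length, theta by the encoder angle, z by the joint height."""
    theta = 2.0 * math.pi * encoder_counts / counts_per_rev
    return (arm_length, theta, z_offset)

# quarter of a revolution on a 10,000-count encoder with a 0.35 m arm
r, theta, z = probe_tip_cylindrical(2500, 10_000, 0.35, 0.10)
print(r, theta, z)
```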

  11. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  12. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  13. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  14. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  15. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomena of Fresnel diffraction to micron accurate measurement. This report discusses past research on the phenomena, and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of the Fresnel diffraction systems are also described.

  16. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  17. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  18. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  19. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  20. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be a part of a telescope control system.
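The basic measurement behind accelerometer-based mount positioning is recovering the static tilt of the structure from the sensed gravity vector. A minimal sketch of that step follows; the axis conventions and sign choices are an assumption for illustration, not taken from the paper, and a real system would also calibrate sensor bias and scale.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Return (pitch, roll) in degrees from a static 3-axis accelerometer
    reading given in units of g. Assumes z points up when level."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# sensor level: gravity entirely on the z axis -> both angles near zero
print(tilt_from_accel(0.0, 0.0, 1.0))
# sensor pitched: ax = -sin(30 deg), az = cos(30 deg) -> pitch near 30 deg
print(tilt_from_accel(-0.5, 0.0, math.sqrt(3) / 2))
```

Subarcminute accuracy, as the paper targets, requires averaging many such readings and careful calibration, since a single cheap MEMS sample is far noisier than an arcminute.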

  1. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  2. [Rare locations of hydatid disease].

    PubMed

    Tocchi, A; Mazzoni, G; Lepre, L; Liotta, G; Costa, G; Maggiolini, F; Miccini, M

    1999-04-01

    The authors report their experience with uncommon hydatid cyst locations. Between 1970 and 1995 a total of 16 patients suffering from hydatid cysts located in various organs other than liver and lungs were observed. There were 7 women and 9 men with a mean age of 53.3 years. In 10 cases uncommon locations were found to be isolated, and in 6 associated with concurrent or previously treated hepatic cystic disease. Pathogenesis of these uncommon locations, whether primary or secondary, as well as specific items of diagnosis and surgery are discussed.

  3. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.

  4. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
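The link between forecast opacity and Tsys in this abstract follows the standard single-slab radiative-transfer relation, sketched below. The 260 K effective atmospheric temperature is my illustrative assumption, not a quoted NRAO value.

```python
import math

def sky_brightness(tau, t_atm=260.0):
    """Atmospheric contribution to Tsys (K) for a zenith opacity tau
    (nepers), using T_atm * (1 - exp(-tau)) for a single uniform slab."""
    return t_atm * (1.0 - math.exp(-tau))

# with these assumptions, a 0.01 Neper forecast error shifts the
# atmospheric Tsys contribution by only a couple of kelvin
delta = sky_brightness(0.06) - sky_brightness(0.05)
print(sky_brightness(0.05), delta)
```

This is why the quoted ~0.01 Neper forecast accuracy is described as sufficient for scheduling and calibration: the resulting Tsys uncertainty is small compared with typical receiver temperatures.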

  5. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  6. Evaluation of workplace air monitoring locations

    SciTech Connect

    Stoetzel, G.A.; Cicotte, G.R.; Lynch, T.P. ); Aldrich, L.K. )

    1991-10-01

    Current federal guidance on occupational radiation protection recognizes the importance of conducting air flow studies to assist in the placement of air sampling and monitoring equipment. In support of this, Pacific Northwest Laboratory has provided technical assistance to Westinghouse Hanford Company for the purpose of evaluating the adequacy of air sampling and monitoring locations at selected Hanford facilities. Qualitative air flow studies were performed using smoke aerosols to visually determine air movement. Three examples are provided of how air flow study results, along with information on the purpose of the air sample being collected, were used as a guide in placing the air samplers and monitors. Preparatory steps in conducting an air flow study should include: (1) identifying the type of work performed in the work area, including any actual or potential release points; (2) determining the amounts of radioactive material available for release and its chemical and physical form; (3) obtaining accurate work area descriptions and diagrams; (4) identifying the location of existing air samplers and monitors; (5) documenting physical and ventilation configurations; (6) notifying appropriate staff of the test; and (7) obtaining necessary equipment and supplies. The primary steps in conducting an air flow study are measurements of air velocities in the work area, release of the smoke aerosol at selected locations in the work area and the observation of air flow patterns, and finally evaluation and documentation of the results. 2 refs., 3 figs.

  7. Cobalt processing - flask positioner location sensing system

    SciTech Connect

    Braun, P.F.

    1986-01-01

    Canada deuterium uranium (CANDU) reactors offer unique opportunities for economical production of ⁶⁰Co in the adjuster rods used for xenon override and maximization of core output. Cobalt is effectively a by-product in CANDU reactors with the standard stainless steel adjuster rods replaced with cobalt adjuster rods. The Flask Positioner unit is a part of the cobalt adjuster element processing system (CAEPS) equipment which is used for removing irradiated cobalt adjuster elements from the reactor and safely transporting them to the irradiated fuel bay, where they are dismantled and prepared for shipment. The flask positioner equipment is similar to a crane; it carries the CAEPS flask and locates it in an accurate position concentric with any adjuster site centerline. This enables the required operations for safe transfer of the irradiated adjuster element into the flask. The positioner is located above the reactivity mechanism deck. The CAEPS system has been made operational on several CANDU reactors. The location sensing system has been demonstrated to work very satisfactorily on all installations.

  8. Episodes, events, and models

    PubMed Central

    Khemlani, Sangeet S.; Harrison, Anthony M.; Trafton, J. Gregory

    2015-01-01

    We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models; second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning. PMID:26578934
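
    The episodic-marker principle above can be reduced to a toy sketch: emit a marker whenever the perceptual feature vector changes by more than a threshold, then reconstruct event boundaries from the retrieved markers. This is an illustrative simplification under assumed features and an assumed threshold, not the authors' implementation.

```python
def segment_events(features, threshold=1.0):
    """Toy event segmentation: store an episodic marker (an index into
    the perceptual stream) whenever the feature vector changes by more
    than `threshold` (L1 distance). The first frame always starts an
    event."""
    markers = [0]
    for i in range(1, len(features)):
        delta = sum(abs(a - b) for a, b in zip(features[i], features[i - 1]))
        if delta > threshold:
            markers.append(i)
    return markers
```

Running it on a stream with one abrupt perceptual change yields two markers: the stream onset and the change point.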

  9. Identification of rupture locations in patient-specific abdominal aortic aneurysms using experimental and computational techniques.

    PubMed

    Doyle, Barry J; Cloonan, Aidan J; Walsh, Michael T; Vorp, David A; McGloughlin, Timothy M

    2010-05-01

    In the event of abdominal aortic aneurysm (AAA) rupture, the outcome is often death. This paper aims to experimentally identify the rupture locations of in vitro AAA models and validate these rupture sites using finite element analysis (FEA). Silicone rubber AAA models were manufactured using two different materials (Sylgard 160 and Sylgard 170, Dow Corning) and imaged using computed tomography (CT). Experimental models were inflated until rupture, with high speed photography used to capture the site of rupture. 3D reconstructions from CT scans and subsequent FEA of these models enabled the wall stress and wall thickness to be determined for each of the geometries. Experimental models ruptured at regions of inflection, not at regions of maximum diameter. Rupture pressures (mean ± SD) for the Sylgard 160 and Sylgard 170 models were 650.6 ± 195.1 mmHg and 410.7 ± 159.9 mmHg, respectively. Computational models accurately predicted the locations of rupture. Peak wall stress for the Sylgard 160 and Sylgard 170 models was 2.15 ± 0.26 MPa at an internal pressure of 650 mmHg and 1.69 ± 0.38 MPa at an internal pressure of 410 mmHg, respectively. Mean wall thickness of all models was 2.19 ± 0.40 mm, with a mean wall thickness at the location of rupture of 1.85 ± 0.33 mm and 1.71 ± 0.29 mm for the Sylgard 160 and Sylgard 170 materials, respectively. Rupture occurred at the location of peak stress in 80% (16/20) of cases, and at high-stress regions but not peak stress in 10% (2/20) of cases. The remaining 10% (2/20) of models had defects in the AAA wall which moved the rupture location away from regions of elevated stress. The results presented may further contribute to the understanding of AAA biomechanics and ultimately AAA rupture prediction.

  10. Accurate Focal Depth Determination of Oceanic Earthquakes Using Water-column Reverberation and Some Implications for the Shrinking Plate Hypothesis

    NASA Astrophysics Data System (ADS)

    Niu, F.; Huang, J.; Gordon, R. G.

    2015-12-01

    Investigation of oceanic earthquakes can play an important role in constraining the lateral and depth variations of the stress and strain-rate fields in oceanic lithosphere and of the thickness of the seismogenic layer as a function of lithosphere age, thereby providing us with critical insight into thermal and dynamic processes associated with the cooling and evolution of oceanic lithosphere. With the goal of estimating hypocentral depths more accurately, we observe clear water reverberations after the direct P wave on teleseismic records of oceanic earthquakes and develop a technique to estimate earthquake depths by using these reverberations. The Z-H grid search method allows the simultaneous determination of the sea floor depth (H) and earthquake depth (Z) with an uncertainty less than 1 km, which compares favorably with alternative approaches. We apply this method to two closely located earthquakes beneath the eastern Pacific. These earthquakes occurred in ≈25 Ma-old lithosphere and were previously estimated to have very similar depths of ≈10-12 km. We find that the two events actually occurred at dissimilar depths of 2.5 km and 16.8 km beneath the seafloor, respectively, within the oceanic crust and lithospheric mantle. The shallow and deep events are determined to be a thrust and normal earthquake, respectively, indicating that the stress field within the oceanic lithosphere changes from horizontal compression to horizontal extension as depth increases, which is consistent with the prediction of the lithospheric cooling model. Furthermore, we show that the P-axis of the newly investigated thrust-faulting earthquake is roughly perpendicular to that of the previously studied thrust event, consistent with the predictions of the shrinking-plate hypothesis.

  11. Accurate focal depth determination of oceanic earthquakes using water-column reverberation and some implications for the shrinking plate hypothesis

    NASA Astrophysics Data System (ADS)

    Huang, Jianping; Niu, Fenglin; Gordon, Richard G.; Cui, Chao

    2015-12-01

    Investigation of oceanic earthquakes is useful for constraining the lateral and depth variations of the stress and strain-rate fields in oceanic lithosphere, and the thickness of the seismogenic layer as a function of lithosphere age, thereby providing us with critical insight into thermal and dynamic processes associated with the cooling and evolution of oceanic lithosphere. With the goal of estimating hypocentral depths more accurately, we observe clear water reverberations after the direct P wave on teleseismic records of oceanic earthquakes and develop a technique to estimate earthquake depths by using these reverberations. The Z-H grid search method allows the simultaneous determination of the sea floor depth (H) and earthquake depth (Z) with an uncertainty less than 1 km, which compares favorably with alternative approaches. We apply this method to two closely located earthquakes beneath the eastern Pacific. These earthquakes occurred in ∼25 Ma-old lithosphere and were previously estimated to have similar depths of ∼10-12 km. We find that the two events actually occurred at dissimilar depths of 2.5 km and 16.8 km beneath the seafloor, respectively, within the oceanic crust and lithospheric mantle. The shallow and deep events are determined to be a thrust and normal earthquake, respectively, indicating that the stress field within the oceanic lithosphere changes from horizontal deviatoric compression to horizontal deviatoric tension as depth increases, which is consistent with the prediction of lithospheric cooling models. Furthermore, we show that the P-axis of the newly investigated thrust-faulting earthquake is perpendicular to that of the previously studied thrust event, consistent with the predictions of the shrinking-plate hypothesis.
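
    The Z-H grid search can be illustrated with a toy forward model: for a near-vertical ray, the depth phase arrives roughly 2Z/v_P after direct P, and each water-column reverberation adds one two-way water transit 2H/v_w; a grid search then minimizes the misfit to observed delays. The velocities, the near-vertical geometry, and the phase bookkeeping below are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

V_WATER = 1.5   # km/s, acoustic speed in sea water (assumed)
V_CRUST = 6.5   # km/s, P-wave speed near the source (assumed)

def predicted_delays(z, h, n_reverb=3):
    """Delays (relative to direct P) of the depth phase plus the first
    few water-column reverberations, for a near-vertical ray: the depth
    phase lags P by ~2z/V_CRUST, and each additional bounce adds one
    two-way transit of the water column, 2h/V_WATER."""
    t_pp = 2.0 * z / V_CRUST
    t_w = 2.0 * h / V_WATER
    return np.array([t_pp + k * t_w for k in range(1, n_reverb + 1)])

def zh_grid_search(observed, z_grid, h_grid):
    """Return (z, h, misfit) minimizing the RMS misfit between observed
    and predicted reverberation delays over the (Z, H) grid."""
    best = (None, None, np.inf)
    for z in z_grid:
        for h in h_grid:
            pred = predicted_delays(z, h, len(observed))
            misfit = np.sqrt(np.mean((pred - observed) ** 2))
            if misfit < best[2]:
                best = (z, h, misfit)
    return best
```

With synthetic delays generated at Z = 10 km and H = 4 km, the search recovers both depths simultaneously, which is the essence of the Z-H trade-off resolution described in the abstract.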

  12. Electro-location, tomography and porosity measurements in geotechnical centrifuge models based on electrical resistivity concepts

    NASA Astrophysics Data System (ADS)

    Li, Zhihua

    This research was focused on the development of electrical techniques for soil characterization and soil dynamic behavior assessment. The research carried out mainly includes (1) development of a needle probe tool for assessment of soil spatial variability in terms of porosity with high-resolution in the centrifuge testing; (2) development of an electro-location technique to accurately detect buried objects' movements inside the soil during dynamic events; (3) collaborative development of a new electrode switching system to implement electrical resistivity tomography, and electro-location with high speed and high resolution. To assess soil spatial variability with high-resolution, electrical needle probes with different tip shapes were developed to measure soil electrical resistivity. After normalizing soil resistivity by pore fluid resistivity, this information can be correlated to soil porosity. Calibrations in laboratory prepared soils were conducted. Loosening due to insertion of needle probes was evaluated. A special needle probe tool, along with data acquisition and data processing tools were developed to be operated by the new NEES robot on the centrifuge. The needle probes have great potential to resolve interfaces between soil layers and small local porosity variations with a spatial resolution approximately equal to the spacing between electrodes (about half of the probe diameter). A new electrode switching system was developed to accurately detect buried objects' movements using a new electro-location scheme. The idea was to establish an electromagnetic field in a centrifuge model by injecting low-frequency alternating currents through pairs of boundary electrodes. The locations of buried objects are related to the potentials measured on them. A closed form expression for the electric field in a rectangular specimen with insulated boundaries was obtained based on the method of images. Effects of sampling parameters on spatial resolution and tradeoffs

  13. The global event system

    SciTech Connect

    Winans, J.

    1994-03-02

    The support for the global event system has been designed to allow an application developer to control the APS event generator and receiver boards. This is done by the use of four new record types. These records are customized and are only supported by the device support modules for the APS event generator and receiver boards. The use of the global event system and its associated records should not be confused with the vanilla EPICS events and the associated event records. They are very different.

  14. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially when the information is delayed. Travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions; travelers therefore make wrong routing decisions, causing capacity to decrease, oscillations to increase, and the system to deviate from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
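
    The bounded-rationality rule lends itself to a toy simulation: below the threshold BR the two routes are chosen with equal probability; above it, travelers follow the (possibly delayed) feedback. The free-flow time and congestion slope below are invented parameters for illustration, not values from the paper.

```python
import random

def choose_route(t1, t2, br):
    """Boundedly rational choice: if the reported travel-time difference
    is below the threshold br, either route is equally likely; otherwise
    travelers pick the route reported as faster."""
    if abs(t1 - t2) < br:
        return random.choice([0, 1])
    return 0 if t1 < t2 else 1

def simulate(n_travelers=1000, br=0.0, delay=1, steps=50, seed=1):
    """Toy two-route system: each step, all travelers reroute using the
    travel times reported `delay` steps ago; travel time grows linearly
    with load (free-flow time 10, slope 0.02 per traveler -- made-up
    parameters). Returns the per-step (t1, t2) history."""
    random.seed(seed)
    history = [(10.0, 10.0)]
    for _ in range(steps):
        t1, t2 = history[max(0, len(history) - 1 - delay)]
        n1 = sum(choose_route(t1, t2, br) == 0 for _ in range(n_travelers))
        history.append((10 + 0.02 * n1, 10 + 0.02 * (n_travelers - n1)))
    return history
```

With BR = 0 and delayed feedback, the whole population herds onto whichever route was reported faster, producing large persistent oscillations; a generous BR keeps the split near 50/50 and damps them, matching the abstract's claim.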

  15. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-08-24

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
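
    The arm-then-fire sequencing can be modeled in software as a behavioral sketch (not the patented hardware); the counter width and threshold below are illustrative. A narrow low-order counter overflows into a high-order word held in memory; when the high bits reach the threshold an "interrupt arm" bit is set, and the interrupt fires with low latency on the next low-order roll-over.

```python
class HybridCounter:
    """Behavioral model of one element of the hybrid counter array:
    low-order bits in a narrow counter, high-order bits in a memory
    word, with threshold comparison and an interrupt-arm bit."""

    def __init__(self, low_bits=8, threshold=2):
        self.low_bits = low_bits
        self.low = 0           # narrow "hardware" counter
        self.high = 0          # memory-array word (higher-order bits)
        self.threshold = threshold
        self.armed = False     # "interrupt arm" bit
        self.interrupts = 0

    def count(self):
        self.low += 1
        if self.low == 1 << self.low_bits:   # low-order overflow
            self.low = 0
            if self.armed:                   # fire on roll-over after arming
                self.interrupts += 1
                self.armed = False
            self.high += 1                   # carry into high-order word
            if self.high == self.threshold:  # threshold reached: arm
                self.armed = True
```

With 4 low-order bits and a threshold of 2, the arm bit is set at count 32 (second overflow) and the interrupt fires on the next roll-over, at count 48.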

  16. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2008-09-16

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.

  17. Mobile Alternative Fueling Station Locator

    SciTech Connect

    Not Available

    2009-04-01

    The Department of Energy's Alternative Fueling Station Locator is available on-the-go via cell phones, BlackBerrys, or other personal handheld devices. The mobile locator allows users to find the five closest biodiesel, electricity, E85, hydrogen, natural gas, and propane fueling sites using Google technology.

  18. Locating Information within Extended Hypermedia

    ERIC Educational Resources Information Center

    Cromley, Jennifer G.; Azevedo, Roger

    2009-01-01

    New literacies researchers have identified a core set of strategies for locating information, one of which is "reading a Web page to locate information that might be present there" (Leu et al. in: Rush, Eakle, Berger (eds) "Secondary school reading and writing: What research reveals for classroom practices," 2007, p. 46). Do middle-school, high…

  19. Precision zero-home locator

    DOEpatents

    Stone, William J.

    1986-01-01

    A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.

  20. Cold War Geopolitics: Embassy Locations.

    ERIC Educational Resources Information Center

    Vogeler, Ingolf

    1995-01-01

    Asserts that the geopolitics of the Cold War can be illustrated by the diplomatic ties among countries, particularly the superpowers and their respective allies. Describes a classroom project in which global patterns of embassy locations are examined and compared. Includes five maps and a chart indicating types of embassy locations. (CFR)

  1. Decision support system for managing oil spill events.

    PubMed

    Keramitsoglou, Iphigenia; Cartalis, Constantinos; Kassomenos, Pavlos

    2003-08-01

    The Mediterranean environment is exposed to various hazards, including oil spills, forest fires, and floods, making the development of a decision support system (DSS) for emergency management an objective of utmost importance. The present work presents a complete DSS for managing marine pollution events caused by oil spills. The system provides all the necessary tools for early detection of oil spills from satellite images, monitoring of their evolution, estimation of the accident consequences, and provision of support to the responsible public authorities during clean-up operations. The heart of the system is an image processing-geographic information system, complemented by individual software tools that perform oil spill evolution simulation and all other necessary numerical calculations, as well as cartographic and reporting tasks related to the management of a specific oil spill event. The cartographic information is derived from general maps representing detailed information on regional environmental and land-cover characteristics as well as economic activities of the application area. Early notification of the authorities with up-to-date, accurate information on the position and evolution of the oil spill, combined with the detailed coastal maps, is of paramount importance for emergency assessment and effective clean-up operations to prevent environmental damage. An application was developed for the Region of Crete, an area particularly vulnerable to oil spills due to its location, ecological characteristics, and local economic activities.

  2. Earthquake Clustering and Triggering of Large Events in Simulated Catalogs

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Dieterich, J. H.; Richards-Dinger, K. B.; Xu, H.

    2013-12-01

    We investigate large event clusters (e.g. earthquake doublets and triplets) wherein secondary events in a cluster are triggered by stress transfer from previous events. We employ the 3D boundary element code RSQSim with a California fault model to generate synthetic catalogs spanning from tens of thousands up to a million years. The simulations incorporate rate-state fault constitutive properties, and the catalogs include foreshocks, aftershocks and occasional clusters of large events. Here we define a large event cluster as two or more M≥7 events within a few years. Most clustered events are closely grouped in space as well as time. Large event clusters show highly productive aftershock sequences where the aftershock locations of the first event in a cluster appear to correlate with the location of the next large event in the cluster. We find that the aftershock productivity of the first events in large event clusters is roughly double that of the unrelated, non-clustered events and that aftershock rate is a proxy for the stress state of the faults. The aftershocks of the first event in a large-event cluster migrate toward the point of nucleation of the next event in a large-event cluster. Furthermore, following a normal aftershock sequence, the average event rate increases prior to the second event in a large-event cluster. These increased event rates prior to the second event in a cluster follow an inverse Omori's law, which is characteristic of foreshocks. Clustering probabilities based on aftershock rates are higher than expected from Omori aftershock and Gutenberg-Richter magnitude frequency laws, which suggests that the high aftershock rates indicate near-critical stresses for failure in a large earthquake.
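
    The Omori and inverse Omori rate behaviour invoked above has a simple closed form. The constants K, c, and p below are illustrative placeholders, not values fitted to the simulated catalogs.

```python
def omori_rate(t, k=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate a time t after a mainshock
    decays as K / (t + c)^p."""
    return k / (t + c) ** p

def inverse_omori_rate(t_before, k=10.0, c=0.1, p=1.0):
    """Inverse Omori law: foreshock rate accelerates as the time
    remaining before the upcoming mainshock shrinks."""
    return k / (t_before + c) ** p
```

The first function decays with elapsed time (aftershocks); the second grows as the time to the next large event shrinks, which is the signature the abstract reports before the second event of a cluster.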

  3. Accurate masses for dispersion-supported galaxies

    NASA Astrophysics Data System (ADS)

    Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.

    2010-08-01

    We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 ⟨σ_los²⟩ r_1/2 ≃ 4 G^-1 ⟨σ_los²⟩ R_e, where ⟨σ_los²⟩ is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of mass approximately 3 × 10⁹ M_⊙, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ≃ 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ≃ 3200 for ultra-faint dSphs and a more shallow rise to Υ^I_1/2 ≃ 800 for galaxy cluster spheroids.
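
    The two equivalent forms of the estimator can be coded directly; they agree because r_1/2 ≃ (4/3) R_e for typical light profiles. The value of G in kpc (km/s)² per solar mass and the example inputs are assumptions for illustration.

```python
G = 4.30091e-6  # kpc (km/s)^2 / Msun (gravitational constant, assumed units)

def half_light_mass(sigma_los_kms, r_half_kpc):
    """Wolf et al. estimator using the 3D deprojected half-light radius:
    M(r_1/2) ~= 3 <sigma_los^2> r_1/2 / G, in solar masses."""
    return 3.0 * sigma_los_kms ** 2 * r_half_kpc / G

def half_light_mass_projected(sigma_los_kms, re_kpc):
    """Equivalent form using the 2D projected half-light radius R_e:
    M(r_1/2) ~= 4 <sigma_los^2> R_e / G."""
    return 4.0 * sigma_los_kms ** 2 * re_kpc / G
```

For a dwarf-spheroidal-like system with σ_los = 10 km/s and r_1/2 = 0.4 kpc, the estimator gives a half-light mass of a few ×10⁷ M_⊙, the right order of magnitude for the MW dSphs discussed above.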

  4. Vaccine Adverse Events

    MedlinePlus


  5. Location-assured, multifactor authentication on smartphones via LTE communication

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

    With the added security provided by LTE, geographical location has become an important factor for authentication to enhance the security of remote client authentication during mCommerce applications using Smartphones. Tightly combining geographical location with classic authentication factors like PINs/biometrics in a real-time, remote verification scheme over the LTE layer connection assures the authenticator about the client itself (via PIN/biometric) as well as the client's current location, thus defining the important aspects of "who", "when", and "where" of the authentication attempt without eavesdropping or man-in-the-middle attacks. To securely integrate location as an authentication factor into the remote authentication scheme, the client's location must be verified independently, i.e. the authenticator should not rely solely on the location determined on and reported by the client's Smartphone. The latest wireless data communication technology for mobile phones (4G LTE, Long-Term Evolution), recently being rolled out in various networks, can be employed to meet this requirement of independent location verification. LTE's Control Plane LBS provisions, when integrated with user-based authentication factors and an independent source of localisation, ensure secure, efficient, continuous location tracking of the Smartphone. This can be performed during normal operation of the LTE-based communication between client and network operator, enabling the authenticator to verify the client's claimed location more securely and accurately. Trials and experiments show that such an implementation is viable for today's Smartphone-based banking via LTE communication.
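
    A minimal server-side sketch of the scheme's two checks ("who" via a PIN factor, "where" via an independently obtained network fix) is shown below. The salt, the 1 km acceptance radius, and the haversine comparison are illustrative assumptions, not the authors' protocol; a real deployment would use the network operator's Control Plane LBS fix as the independent location source.

```python
import hashlib
import hmac
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def authenticate(pin, stored_pin_hash, claimed_loc, network_loc,
                 max_km=1.0, salt=b"demo-salt"):
    """Multifactor check: 'who' via a salted PIN hash compared in
    constant time, 'where' via an independent network-side fix compared
    against the client's claimed location."""
    pin_ok = hmac.compare_digest(
        hashlib.sha256(salt + pin.encode()).hexdigest(), stored_pin_hash)
    loc_ok = haversine_km(*claimed_loc, *network_loc) <= max_km
    return pin_ok and loc_ok
```

Authentication fails if either the PIN is wrong or the client's claimed position disagrees with the independent fix by more than the threshold.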

  6. No effect of diffraction on Pluto-Charon mutual events

    NASA Technical Reports Server (NTRS)

    Tholen, D. J.; Hubbard, W. B.

    1988-01-01

    Mulholland and Gustafson (1987) made the interesting suggestion that observations of Pluto-Charon mutual events might show significant dependence on both wavelength and telescope aperture because of diffraction effects. In this letter, observations are presented that show the predicted effects to be absent and demonstrate that the parameters of the system are such that the events can be accurately analyzed with geometrical optics.

  7. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveals a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  8. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
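
    The restraint construction can be sketched as follows, assuming a single dihedral, an arbitrary force constant, and a simple linear sweep of the equilibrium angle from the initial-state value toward the final-state value; the method described above applies such restraints to all non-hydrogen-atom dihedrals during a series of geometry optimizations.

```python
import math

def restraint_energy(phi, phi0, k=50.0):
    """Harmonic restraint on one dihedral (force constant k is an
    assumed placeholder, e.g. kcal/mol/rad^2). The angle difference is
    wrapped into (-pi, pi] so the restraint respects periodicity."""
    d = (phi - phi0 + math.pi) % (2.0 * math.pi) - math.pi
    return k * d * d

def interpolated_targets(phi_init, phi_final, n_steps):
    """Equilibrium angles swept from the initial toward the final state,
    one target per geometry-optimization step along the path."""
    return [phi_init + (phi_final - phi_init) * s / n_steps
            for s in range(n_steps + 1)]
```

The wrapping matters: two dihedrals just either side of ±π are nearly identical conformations, and the wrapped restraint correctly assigns them a small energy rather than driving a full rotation.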

  9. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
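
    The SHAPE pseudo-free-energy term has the published form ΔG_SHAPE = m·ln(reactivity + 1) + b. The slope and intercept defaults below (in kcal/mol) are the commonly quoted values for this approach, but treat them here as illustrative.

```python
import math

# Illustrative slope/intercept for the SHAPE pseudo-free-energy term
# (kcal/mol); a folding engine would add this term to each nucleotide's
# base-pair stacking contribution during free energy minimization.
M_SLOPE = 2.6
B_INTERCEPT = -0.8

def shape_pseudo_energy(reactivity, m=M_SLOPE, b=B_INTERCEPT):
    """Pseudo-free-energy change for pairing a nucleotide:
    delta-G = m * ln(SHAPE reactivity + 1) + b. High reactivity
    (flexible, likely unpaired) gives a positive penalty for pairing;
    zero reactivity gives a small favorable bonus."""
    return m * math.log(reactivity + 1.0) + b
```

Unreactive nucleotides (reactivity near 0) receive a pairing bonus of b, while strongly reactive ones receive an increasing penalty, steering the minimum-free-energy structure toward agreement with the experiment.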

  10. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  11. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
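
    For reference, the direct O(S)-per-pixel target filter that the fast algorithm approximates can be written compactly; the kernel widths and window radius below are illustrative parameters, not values from the paper.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter: each output pixel is a normalized sum of
    its neighbours, weighted by a Gaussian spatial kernel times a
    Gaussian range kernel on intensity differences. Cost is O(S) per
    pixel with S = (2*radius + 1)**2."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            weights = spatial * np.exp(-(win - img[i, j]) ** 2
                                       / (2.0 * sigma_r ** 2))
            out[i, j] = np.sum(weights * win) / np.sum(weights)
    return out
```

On a clean step edge with a narrow range kernel, the range weights across the step are essentially zero, so each side averages only with itself and the edge is preserved, which is the behaviour the fast approximation must reproduce.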

  12. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141

  13. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10{sup −12} at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H{sub 2}, HD, HT, D{sub 2}, DT, and T{sub 2} has been determined. For the ground state of H{sub 2} the estimated precision is 3 × 10{sup −7} cm{sup −1}, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  14. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10(-12) at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10(-7) cm(-1), which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  15. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures could exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
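    The attitude computation such a system builds on can be sketched as follows (an illustrative static-tilt calculation only; the function name is our own, and the abstract's actual contribution is the spherical/cylindrical-constraint calibration that refines the raw few-degree sensor accuracy):

```python
import math

def tilt_angles(ax, ay, az):
    """Convert a static 3-axis accelerometer reading (the gravity
    vector, in any consistent units) into pitch and roll in degrees.

    With the mount at rest, the accelerometer measures only gravity,
    so the direction of (ax, ay, az) encodes the sensor's tilt.
    """
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

    Reaching sub-arcminute accuracy then hinges on calibrating per-axis gains and offsets, e.g. by enforcing that the reported acceleration magnitude equals g in every static orientation.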

  16. Accurate Memory for Object Location by Individuals with Intellectual Disability: Absolute Spatial Tagging Instead of Configural Processing?

    ERIC Educational Resources Information Center

    Giuliani, Fabienne; Favrod, Jerome; Grasset, Francois; Schenk, Francoise

    2011-01-01

    Using head-mounted eye tracker material, we assessed spatial recognition abilities (e.g., reaction to object permutation, removal or replacement with a new object) in participants with intellectual disabilities. The "Intellectual Disabilities (ID)" group (n = 40) obtained a score totalling a 93.7% success rate, whereas the "Normal Control" group…

  17. Source regions of solar wind disappearance events

    NASA Astrophysics Data System (ADS)

    Janardhan, P.; Fujiki, K.; Sawant, H. S.; Kojima, M.; Hakamada, K.; Krishnan, R.

    2008-03-01

    During the period 1999-2002 there have been three instances, in May 1999, March 2002, and May 2002, respectively, when the solar wind densities at 1 AU dropped to abnormally low values (<0.1 cm-3) for extended periods of time (12-24 h). These long-lasting low-density anomalies observed at 1 AU are referred to as "solar wind disappearance events" and in this paper, we locate the solar sources of the two disappearance events in March and May 2002 and show that like the well-studied disappearance event of 11 May 1999, these events too originate in active region complexes located at central meridian and are characterized by highly nonradial solar wind outflows. We also show that during disappearance events, the interplanetary magnetic field is stable and unipolar and the associated solar wind outflows have extended Alfvén radii. Using the fact that solar wind flows from active regions have higher ratios of O7+/O6+ than wind from coronal holes, we try to pinpoint the solar sources of these very unusual and rare events and show that they represent the dynamic evolution of either active region open fields or small coronal hole boundaries embedded in or near large active region complexes located at or close to central meridian.

  18. Combined Approach to the Analysis of Rainfall Super-Extremes in Locations with Limited Observational Records.

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Libertino, A.; Sharma, A.; Claps, P.

    2015-12-01

    The prospect of climatic change and its impacts have brought spatial statistics of extreme events into sharper focus. The so-called "water bombs" are predicted to become more frequent in the extra-tropical regions, and they raise serious concerns in some regions of the Mediterranean area. However, quantitative statistical methods to properly account for the probability of occurrence of these super-extreme events are still lacking, due to their rare occurrence and to the limited spatial scale at which these events occur. To overcome the lack of data, we propose first to exploit the information derived from remotely sensed datasets. Despite their coarser resolution, these databases are able to provide information continuous in space and time, overcoming the problems related to the discontinuous nature of rainfall measurements. We propose to apply such an approach within a Bayesian framework, aimed at combining local measurements with climatic regional information, conditioning the exceedance probability on the large- and mesoscale characteristics of the system. The case study refers to an area located in the North-West of Italy, historically affected by extraordinary precipitation events. We use a dataset of daily at-gauge rainfall measurements extracted from the NOAA GHCN-Daily dataset, combined with those provided by some local Environmental Agencies. Daily estimates from TRMM are adopted as well. First, we identify the most intense events that occurred in the area, combining the information from the different datasets. Analysing the related synoptic conditions with the ECMWF reanalysis, we then define the conditional variables and the hierarchical relationships between the events and their type. Different climatic configurations that, combined with the local morphology and the seasonal condition of the Mediterranean Sea, can trigger very intense precipitation events are identified. The results, compared with those

  19. Experiences with information locator services

    USGS Publications Warehouse

    Christian, E.

    1999-01-01

    Over the last few years, governments and other organizations have been using new technologies to create networked Information Locator Services that help people find information resources. These services not only enhance access to information, but also are designed to support fundamental information policy principles. This article relates experiences in developing and promoting services interoperable with the Global Information Locator Service standard that has now been adopted and promoted in many forums worldwide. The article describes sample implementations and touches on the strategic choices made in public policy, standards, and technology. Ten recommendations are offered for successful implementation of an Information Locator Service. Published by Elsevier Science Ltd. All rights reserved.

  20. Pluto-Charon mutual event predictions for 1986

    NASA Technical Reports Server (NTRS)

    Tholen, D. J.

    1985-01-01

    Circumstances are tabulated for 81 Pluto-Charon mutual events occurring during the 1986 opposition. The deepest and longest events will occur in February and reach a depth of about 0.15 mag. Observations of these events will lead to an accurate determination of the satellite's orbit, the diameters of the two bodies, the mean density of the system, and crude albedo maps of one hemisphere on each object.

  1. Absolute Locations of Repeating Mw 5.5 - 6.0 Earthquakes on Discovery Transform Fault, EPR

    NASA Astrophysics Data System (ADS)

    Wolfson, M. L.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.

    2011-12-01

    this study. To compare absolute locations of the repeating events to the high-resolution bathymetric data, it was necessary to perform a relative relocation of the earthquake centroids. We used events from the NOAA hydroacoustic catalog, which have positional accuracies of ~2 km [Fox et al., 2001], to determine accurate absolute locations for the Mw 5.5 - 6.0 events. Initial results show that there are at least 5 distinct rupture patches on Discovery, including the 4 repeating patches found previously, with a mean spacing of 13 km. The repeat time between events in each repeating rupture patch is 5 - 6 years. The relocation technique placed each group within ~7 km of the fault trace. Three of the rupture groups locate on the western segment of Discovery; one on the edge of the lozenge-shaped valley, one in the splay zone, and one near the ITSC. Using the catalog of over 24,000 0≤Mw≤4.6 events recorded during our 2008 ocean bottom seismometer deployment on Discovery, we find that microseismicity on Discovery roughly clusters in the areas between the large events, suggesting that the frictional properties differ significantly between the large-event rupture patches and the regions of abundant microseismicity.

  2. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role: evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  3. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  4. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  5. Accurate Thermal Conductivities from First Principles

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian

    2015-03-01

    In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
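    The Green-Kubo evaluation described above can be sketched as follows (a minimal scalar-flux illustration; the function name, the single flux component, and the trapezoidal integration are our own simplifications, not the speaker's implementation, which must also handle vector fluxes and finite time and size effects):

```python
def green_kubo_kappa(flux, dt, volume, temperature, k_b=1.380649e-23):
    """Estimate thermal conductivity from a heat-flux time series via
    the Green-Kubo relation: kappa is proportional to the time integral
    of the heat-flux autocorrelation function (ACF).
    """
    n = len(flux)
    nmax = n // 2  # limit the lag to half the series for decent statistics
    acf = []
    for lag in range(nmax):
        s = sum(flux[i] * flux[i + lag] for i in range(n - lag))
        acf.append(s / (n - lag))
    # Trapezoidal integration of the ACF over time
    integral = dt * (acf[0] / 2 + sum(acf[1:]))
    return volume / (k_b * temperature ** 2) * integral
```

    In practice the flux comes from an equilibrium ab initio MD trajectory, and convergence of the ACF integral with simulation length and cell size is exactly the hard part the abstract alludes to.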

  6. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and per dose delivered to the film.

  7. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and per dose delivered to the film.

  8. Extreme Events and Sea Waves along Italian Coasts

    NASA Astrophysics Data System (ADS)

    Morucci, Sara; Nardone, Gabriele; Picone, Marco

    2014-05-01

    The Italian Wind Waves Measurement Network (RON) is composed of 15 directional buoys uniformly distributed all around the Italian coasts and has provided data since 1989. Data collected at these 15 locations represent one of the most accurate and complete oceanographic databases in the Central Mediterranean Sea for many environmental issues, such as studies on climate change and variability and assessment of marine environments. In this framework, a study of wind-wave extreme events and storm surges is presented here. The first step in the extreme events analysis consists in extracting a set of independent wave events using different methodologies, such as Peak Over Threshold and annual maxima methods. Attention has been focused on the determination of the historical storms in terms of the return times and the expected values of the wave heights over several decades. For this purpose, several statistical distributions have been investigated for each measurement station. As is well known, there are many candidate distribution functions for extreme wave analysis. In this study, extreme event analysis has been carried out with the GPD (and GEV), which yields different distributions, such as Gumbel, Frechet and Weibull, depending on the values of the estimated parameters. These distributions have been used to evaluate the wave height return levels and return times up to 50 years. Even though the series are 25 years long, the analysis gives valuable information about the spatial distribution of the storms and their variability on a decadal time scale in the Central Mediterranean Sea. For each time series, a set of statistical parameters has been evaluated, such as the average number of storms per year, the return period corresponding to the maximum value of observed Hs, and the return level corresponding to a period of 20 to 50 years, depending on the availability of data. Finally, storm surge events have been considered highlighting the correlation
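    The annual-maxima branch of such an analysis can be sketched as follows (a minimal Gumbel example, the shape = 0 special case of the GEV family used in the study; the method-of-moments estimates and function name are our own simplifications, not the authors' fitting procedure):

```python
import math

def gumbel_return_level(annual_maxima, return_period):
    """Return-level estimate (e.g. wave height Hs in metres) from a
    series of annual maxima via a Gumbel fit.

    Parameters are estimated by the method of moments; the value
    returned is the level exceeded on average once per `return_period`
    years.
    """
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi          # scale parameter
    mu = mean - 0.5772156649 * beta              # location (Euler-Mascheroni)
    p = 1 - 1 / return_period                    # non-exceedance probability
    return mu - beta * math.log(-math.log(p))
```

    The full GEV and GPD fits add a shape parameter, which is what lets the data choose between the Gumbel, Frechet, and Weibull tails mentioned in the abstract.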

  9. Determinants of first practice location

    PubMed Central

    Raghavan, Malathi; Fleisher, William; Downs, Allan; Martin, Bruce; Sandham, J. Dean

    2012-01-01

    Abstract Objective To help understand physician movement out of Manitoba by determining the factors that influence Manitoba medical graduates’ choices about practice locations. Design Cross-sectional, within-stage, mixed-model survey. Setting Manitoba. Participants All University of Manitoba medical graduates from classes 1998 to 2009 for whom we had valid contact information (N = 912 of 943 graduates) were invited in August 2009 to participate in a survey. Main outcome measures Demographic information; ratings, on a 5-point scale, of the importance when choosing first practice locations of 12 practice characteristics, 3 recruitment strategies, and 4 location characteristics listed in the survey; free-text narratives on unlisted factors; and estimates of likely practice location upon completion of training for recent graduates still in residency training. Results Completed surveys were received from 331 (35.1%) graduates of the surveyed classes, 162 (53.3%) of whom chose Manitoba for their first practice location. Multiple regression analyses indicated that graduates choosing Manitoba for their first practice location were significantly more likely to have done their residency training in Manitoba (P < .05), whether or not they gave a high rating to the importance of being near family and friends. Also, graduates choosing Manitoba were significantly more likely to be recent graduates (P = .007) and less likely to be members of a visible minority (P = .018). These associations were robust even when analyses were restricted to responses from practitioners without cause to estimate practice locations. Early self-selection of graduates during entry into specific residency programs, results of the residency match process, and “putting down roots” during residency years were 3 important interrelated themes identified through qualitative analyses. Conclusion Residency education in Manitoba is the overwhelming factor influencing graduates’ choice of Manitoba as

  10. Pleasant events, unpleasant events, and depression.

    PubMed

    Sweeney, P D; Shaeffer, D E; Golin, S

    1982-07-01

    A review of previous research on Lewinsohn's model of depression shows that the causal link between a lack of response-contingent positive reinforcement and subsequent depression remains unsubstantiated. The present study was designed to explicitly test this causal relationship through the use of cross-lagged panel correlation. Measures of depression and pleasant events were taken at two different points in time separated by 1 month. The results revealed that the null hypothesis of spuriousness could not be rejected, indicating the relation often found between a lack of pleasant events and depression is probably due to some unmeasured third variable. The results also indicated that there is no causal relation between unpleasant events and depression. In summary, the causal assumptions in Lewinsohn's theory of depression were not supported by the data. Possible third-variable explanations of the data and their implications are discussed.
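    The cross-lagged panel comparison used in the study can be sketched as follows (an illustrative two-wave example; the variable names are our own, and the sketch omits the significance tests and spuriousness checks the analysis relies on):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def cross_lagged(events_t1, depress_t1, events_t2, depress_t2):
    """Cross-lagged panel correlations for a two-wave design.

    Compares r(events at time 1, depression at time 2) with
    r(depression at time 1, events at time 2). A markedly stronger
    first correlation would support 'few pleasant events -> later
    depression'; near-equal values are consistent with an unmeasured
    third variable, as the study concluded.
    """
    return (pearson(events_t1, depress_t2),
            pearson(depress_t1, events_t2))
```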

  11. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. Blank mask defects' impact analysis directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages, can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and reducing half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes the manual defect review process an arduous task, in addition to adding sensitivity to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools. Use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  12. High-precision source location of the 1978 November 19 gamma-ray burst

    NASA Technical Reports Server (NTRS)

    Cline, T. L.; Desai, U. D.; Teegarden, B. J.; Pizzichini, G.; Evans, W. D.; Klebesadel, R. W.; Laros, J. G.; Barat, C.; Hurley, K.; Niel, M.

    1981-01-01

    The celestial source location of the November 19, 1978, intense gamma-ray burst has been determined from data obtained with the interplanetary gamma-ray sensor network by means of long-baseline wave-front timing instruments. Each of the instruments was designed for studying events with observable spectra of approximately greater than 100 keV, and each provides accurate event profile timing in the several-millisecond range. The data analysis includes the following: the triangulated region is centered at (alpha, delta)1950 = (1h16m32s, -28 deg 53 arcmin), at -84 deg galactic latitude, where the star density is very low and the obscuration negligible. The gamma-ray burst source region, consistent with that of a highly polarized radio source described by Hjellming and Ewald (1981), may assist in the source modeling and may facilitate the understanding of the source process. A marginally identifiable X-ray source was also found by an Einstein Observatory investigation. It is concluded that the burst contains redshifted positron annihilation and nuclear first-excited iron lines, which is consistent with a neutron star origin.
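    The wave-front timing underlying such a triangulation can be sketched as follows (an illustrative single-baseline computation; the function name is our own, and the actual network intersects annuli from several spacecraft pairs to pin down the small triangulated region):

```python
import math

def annulus_half_angle(delta_t, baseline):
    """Half-angle (degrees) of the source annulus from wave-front
    timing between two spacecraft.

    A plane wave arriving with time difference `delta_t` (seconds)
    across a baseline of length `baseline` (metres) must come from a
    cone about the baseline with cos(theta) = c * delta_t / baseline.
    """
    c = 299_792_458.0  # speed of light, m/s
    return math.degrees(math.acos(c * delta_t / baseline))
```

    Millisecond timing over interplanetary (light-minute) baselines is what makes the resulting annuli, and hence their intersection, so narrow.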

  13. The Challenges from Extreme Climate Events for Sustainable Development in Amazonia: the Acre State Experience

    NASA Astrophysics Data System (ADS)

    Araújo, M. D. N. M.

    2015-12-01

    In the past ten years Acre State, located in Brazil's southwestern Amazonia, has confronted sequential and severe extreme events in the form of droughts and floods. In particular, the droughts and forest fires of 2005 and 2010, the 2012 flood within Acre, the 2014 flood of the Madeira River, which isolated Acre for two months from southern Brazil, and the most severe flooding throughout the state in 2015 shook the resilience of Acrean society. The accumulated costs of these events since 2005 have exceeded 300 million dollars. For the last 17 years, successive state administrations have been implementing a socio-environmental model of development that strives to link sustainable economic production with environmental conservation, particularly for small communities. In this context, extreme climate events have interfered significantly with this model, increasing the risks of failure. The impacts caused by these events on development in the state have been exacerbated by: a) limitations in monitoring; b) extreme events outside of Acre territory (the Madeira River flood) affecting transportation systems; c) absence of reliable information for decision-making; and d) bureaucratic and judicial impediments. Our experience in these events has led to the following needs for scientific input to reduce the risk of disasters: 1) better monitoring and forecasting of deforestation, fires, and hydro-meteorological variables; 2) ways to increase risk perception in communities; 3) approaches to involve local and regional populations more effectively in the response to disasters; and 4) more accurate measurements of the economic and social damages caused by these disasters. We must improve adaptation to and mitigation of current and future extreme climate events and implement a robust civil defense, adequate to these new challenges.

  14. Stonehenge: A Simple and Accurate Predictor of Lunar Eclipses

    NASA Astrophysics Data System (ADS)

    Challener, S.

    1999-12-01

    Over the last century, much has been written about the astronomical significance of Stonehenge. The rage peaked in the mid to late 1960s when new computer technology enabled astronomers to make the first complete search for celestial alignments. Because there are hundreds of rocks or holes at Stonehenge and dozens of bright objects in the sky, the quest was fraught with obvious statistical problems. A storm of controversy followed and the subject nearly vanished from print. Only a handful of these alignments remain compelling. Today, few astronomers and still fewer archaeologists would argue that Stonehenge served primarily as an observatory. Instead, Stonehenge probably served as a sacred meeting place, which was consecrated by certain celestial events. These would include the sun's risings and settings at the solstices and possibly some lunar risings as well. I suggest that Stonehenge was also used to predict lunar eclipses. While Hawkins and Hoyle also suggested that Stonehenge was used in this way, their methods are complex and they make use of only early, minor, or outlying areas of Stonehenge. In contrast, I suggest a way that makes use of the imposing, central region of Stonehenge; the area built during the final phase of activity. To predict every lunar eclipse without predicting eclipses that do not occur, I use the less familiar lunar cycle of 47 lunar months. By moving markers about the Sarsen Circle, the Bluestone Circle, and the Bluestone Horseshoe, all umbral lunar eclipses can be predicted accurately.
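    The 47-month cycle invoked above can be made concrete: 47 synodic months nearly equal 51 draconic months, so a full moon falling a whole number of 47-month cycles after a previously observed eclipse is again near a lunar node and is an eclipse candidate. A toy sketch, using standard mean month lengths; the marker-on-circles bookkeeping of the monument itself is not modeled:

    ```python
    # One eclipse cycle: 47 synodic months ~= 51 draconic months, so an
    # eclipse observed at full moon N tends to repeat at full moon N + 47.
    SYNODIC = 29.530589   # mean days per lunation (new moon to new moon)
    DRACONIC = 27.212221  # mean days per node-to-node month

    def is_candidate(lunation, seed_lunations, cycle=47):
        """A full moon is an eclipse candidate if it lies a whole number
        of 47-month cycles after any previously observed eclipse."""
        return any((lunation - s) % cycle == 0 for s in seed_lunations)

    # The cycle works because the two month counts nearly coincide:
    mismatch_days = 47 * SYNODIC - 51 * DRACONIC  # roughly 0.11 days
    ```

    The small mismatch (about 0.11 days per cycle) is why the cycle eventually drifts and markers must occasionally be reseeded from a freshly observed eclipse.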

  15. Progress in fast, accurate multi-scale climate simulations

    SciTech Connect

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  16. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  17. Progress in Fast, Accurate Multi-scale Climate Simulations

    SciTech Connect

    Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  18. Detecting and Locating Partial Discharges in Transformers

    SciTech Connect

    Shourbaji, A.; Richards, R.; Kisner, R. A.; Hardy, J.

    2005-02-04

    A collaborative research effort among the Oak Ridge National Laboratory (ORNL), American Electric Power (AEP), the Tennessee Valley Authority (TVA), and the State of Ohio Energy Office (OEO) was formed to conduct a feasibility study to detect and locate partial discharges (PDs) inside large transformers. Early detection of PDs is necessary to avoid the costly catastrophic failures that can occur if the PD process is ignored. The detection method under this research is based on an innovative technology developed by ORNL researchers using optical methods to sense the acoustical energy produced by the PDs. ORNL researchers conducted experimental studies to detect PD using an optical fiber as an acoustic sensor capable of detecting acoustical disturbances at any point along its length. This technical approach also has the potential to locate the point at which the PD was sensed within the transformer. Several optical approaches were experimentally investigated, including interferometric detection of acoustical disturbances along the sensing fiber, light detection and ranging (LIDAR) techniques using frequency-modulated continuous wave (FMCW), a frequency-modulated (FM) laser with a multimode fiber, an FM laser with a single-mode fiber, and an amplitude-modulated (AM) laser with a multimode fiber. Implementation of the optical fiber-based acoustic measurement technique would include installing a fiber inside a transformer, allowing real-time detection of PDs and determination of their locations. The fibers are nonconductive and very small (core plus cladding diameters are 125 μm for single-mode fibers and 230 μm for multimode fibers). The research identified the capabilities and limitations of using optical technology to detect and locate sources of acoustical disturbances such as PDs in large transformers. Amplitude modulation techniques showed the most promising results and deserve further research to better quantify the technique’s sensitivity

  19. Accurate Position Calibrations for Charged Fragments

    NASA Astrophysics Data System (ADS)

    Russell, Autumn; Finck, Joseph E.; Spyrou, Artemis; Thoennessen, Michael

    2009-10-01

    The Modular Neutron Array (MoNA), located at the National Superconducting Cyclotron Laboratory at Michigan State University, is used in conjunction with the MSU/FSU Sweeper Magnet to study the breakup of neutron-rich nuclei. Fragmentation reactions create particle-unstable nuclei near the neutron dripline which spontaneously break up by the decay of one or two neutrons with energies that reflect the nuclear structure of unbound excited and ground states. The neutrons continue forward into MoNA, where their position and time of flight are recorded, and the charged fragments' position and energy are measured by an array of detectors following the Sweeper Magnet. In such experiments the identification of the fragment of interest is done through energy-loss and time-of-flight measurements using plastic scintillators. The emitted angles of the fragments are determined with the use of CRDCs. The purpose of the present work was the calibration of the CRDCs along the X and Y axes (where Z is the beam axis) using specially designed masks. This calibration was also used to correct the signal of the plastic scintillators, which is position dependent. The results of this work are used for the determination of the ground state of the neutron-unbound ^24N.

  20. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  1. The Structure and Evolution of LOCBURST: The BATSE Burst Location Algorithm

    NASA Technical Reports Server (NTRS)

    Pendleton, Geoffrey N.; Briggs, Michael S.; Kippen, R. March; Paciesas, William S.; Stollberg, Mark; Woods, Pete; Meegan, C. A.; Fishman, G. J.; McCollough, M. L.; Connaughton, V.

    1998-01-01

    The gamma-ray burst (GRB) location algorithm used to produce the BATSE GRB locations is described. The general flow of control of the current location algorithm is presented and the significant properties of the various physical inputs required are identified. The development of the burst location algorithm during the releases of the BATSE 1B, 2B, and 3B gamma-ray burst catalogs is presented so that the reasons for the differences in the positions and error estimates between the catalogs can be understood. In particular, differences between the 2B and 3B locations are discussed for events that have moved significantly, and the reasons for the changes are explained. The locations of bursts located independently by the interplanetary network are used to illustrate the effect on burst location accuracy of various components of the algorithm. IPN data as well as locations from other gamma-ray instruments are used to calculate estimates of the systematic errors on BATSE burst locations.

  2. Locating the LCROSS Impact Craters

    NASA Technical Reports Server (NTRS)

    Marshall, William; Shirley, Mark; Moratto, Zachary; Colaprete, Anthony; Neumann, Gregory A.; Smith, David E.; Hensley, Scott; Wilson, Barbara; Slade, Martin; Kennedy, Brian; Gurrola, Eric; Harcke, Leif

    2012-01-01

    The Lunar CRater Observations and Sensing Satellite (LCROSS) mission impacted a spent Centaur rocket stage into a permanently shadowed region near the lunar south pole. The Shepherding Spacecraft (SSC) separated approx. 9 hours before impact and performed a small braking maneuver in order to observe the Centaur impact plume, looking for evidence of water and other volatiles, before impacting itself. This paper describes the registration of imagery of the LCROSS impact region from the mid- and near-infrared cameras onboard the SSC, as well as from the Goldstone radar. We compare the Centaur impact features, positively identified in the first two and matched by a consistent feature in the third, which are interpreted as a 20 m diameter crater surrounded by a 160 m diameter ejecta region. The images are registered to Lunar Reconnaissance Orbiter (LRO) topographical data, which allows determination of the impact location. This location is compared with the impact location derived from ground-based tracking and propagation of the spacecraft's trajectory, and with locations derived from two hybrid imagery/trajectory methods. The four methods give a weighted average Centaur impact location of -84.6796 deg, -48.7093 deg, with a 1-sigma uncertainty of 115 m along latitude and 44 m along longitude, just 146 m from the target impact site. Meanwhile, the trajectory-derived SSC impact location is -84.719 deg, -49.61 deg, with a 1-sigma uncertainty of 3 m along the Earth vector and 75 m orthogonal to that, 766 m from the target location and 2.803 km south-west of the Centaur impact. We also detail the Centaur impact angle and SSC instrument pointing errors. Six high-level LCROSS mission requirements are shown to be met by wide margins. We hope that these results facilitate further analyses of the LCROSS experiment data and follow-up observations of the impact region.
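    The quoted weighted average of the four location estimates is presumably an inverse-variance combination; a minimal sketch of that standard procedure, applied per coordinate (this is an illustration of the statistic, not the mission's actual code):

    ```python
    def weighted_mean(values, sigmas):
        """Inverse-variance weighted mean of independent estimates,
        plus the 1-sigma uncertainty of the combined value."""
        weights = [1.0 / s**2 for s in sigmas]
        total = sum(weights)
        mean = sum(w * v for w, v in zip(weights, values)) / total
        return mean, total ** -0.5
    ```

    Precise estimates (small sigma) dominate the average, and the combined uncertainty is always smaller than the best individual one, which is how four modest estimates can yield the ~100 m error quoted above.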

  3. Extracting semantically enriched events from biomedical literature

    PubMed Central

    2012-01-01

    can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266

  4. Method of fan sound mode structure determination computer program user's manual: Microphone location program

    NASA Technical Reports Server (NTRS)

    Pickett, G. F.; Wells, R. A.; Love, R. A.

    1977-01-01

    A computer user's manual describing the operation and the essential features of the Microphone Location Program is presented. The Microphone Location Program determines microphone locations that ensure accurate and stable results from the equation system used to calculate modal structures. As part of the computational procedure, a first-order measure of the stability of the equation system is indicated by a matrix 'conditioning' number.
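    The 'conditioning' number above is the ratio of the largest to smallest singular value of the equation-system matrix; a well-conditioned system keeps the modal solution stable against measurement noise. A sketch for the 2x2 case, computed from the eigenvalues of A^T A (illustrative only; the program's own formulation is not reproduced here):

    ```python
    import math

    def cond_2x2(a, b, c, d):
        """Spectral condition number of [[a, b], [c, d]]: the ratio of
        largest to smallest singular value, obtained here from the
        eigenvalues of the symmetric matrix A^T A."""
        # entries of A^T A for a 2x2 matrix
        p = a * a + c * c
        q = a * b + c * d
        r = b * b + d * d
        tr, det = p + r, p * r - q * q
        disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
        lam_max, lam_min = (tr + disc) / 2.0, (tr - disc) / 2.0
        if lam_min <= 0.0:
            return math.inf  # singular: the equation system is unusable
        return math.sqrt(lam_max / lam_min)
    ```

    A condition number near 1 means microphone positions sample the modes almost orthogonally; a large value warns that small pressure-measurement errors will be amplified in the computed modal structure.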

  5. Factors affecting the accurate determination of cerebrovascular blood flow using high-speed droplet imaging

    NASA Astrophysics Data System (ADS)

    Rudin, Stephen; Divani, Afshin; Wakhloo, Ajay K.; Lieber, Baruch B.; Granger, William; Bednarek, Daniel R.; Yang, Chang-Ying J.

    1998-07-01

    Detailed cerebrovascular blood flow can be determined more accurately radiographically from the new droplet tracking method previously introduced by the authors than from standard soluble contrast techniques. For example, arteriovenous malformation (AVM) transit times, which are crucial for proper glue embolization treatments, were shown to be about half when measured using droplets compared to those measured using soluble contrast techniques. In this work, factors such as x-ray pulse duration, frame rate, system spatial resolution (focal spot size), droplet size, droplet and system contrast parameters, and system noise are considered in relation to their effect on the accurate determination of droplet location and velocity.

  6. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of contour boundaries and the interior points on contour surfaces are as accurate as the interpolation results obtained by hexahedral elements, and thus there are no discrepancies between the analysis and visualization results.
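    The key consistency point, that contour locations inherit the element interpolation exactly, can be illustrated on a single element edge: the iso-value crossing is found by the same linear interpolation the shape functions use along that edge. A minimal sketch (the function name and 3D point representation are assumptions, not the paper's API):

    ```python
    def contour_crossing(p0, p1, f0, f1, level):
        """Point where the iso-value `level` crosses the element edge from
        node p0 (field value f0) to node p1 (field value f1), using the
        same linear interpolation the element shape functions reduce to
        along an edge. Returns None if the contour misses the edge."""
        if (f0 - level) * (f1 - level) > 0:
            return None  # both endpoints on the same side of the level
        if f0 == f1:
            return tuple(p0)  # whole edge lies on the contour; pick one end
        t = (level - f0) / (f1 - f0)
        return tuple(a + t * (b - a) for a, b in zip(p0, p1))
    ```

    Collecting such crossings over the edges of each hexahedron yields the contour-segment boundaries, and interior points follow from the same trilinear contour equation, which is why the visualization cannot disagree with the analysis.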

  7. Multi-event universal kriging (MEUK)

    NASA Astrophysics Data System (ADS)

    Tonkin, Matthew J.; Kennel, Jonathan; Huber, William; Lambie, John M.

    2016-01-01

    Multi-event universal kriging (MEUK) is a method of interpolation that creates a series of maps, each corresponding to a specific sampling "event", which exhibit spatial relationships that persist over time. MEUK is computed using minimum-variance unbiased linear prediction from data obtained via a sequence of events. MEUK assumes multi-event data can be described by a sum of (a) spatial trends that vary over time, (b) spatial trends that are invariant over time, and (c) spatially and temporally stationary correlation among the residuals from the combination of these trends. The fundamental advance made by MEUK versus traditional universal kriging (UK) lies with the generalized least squares (GLS) model and the multi-event capability it facilitates, rather than in the geostatistics, although it is shown how use of MEUK can greatly reduce predictive variances versus UK. For expediency, MEUK assumes a spatial covariance that does not change over time (although it need not), which is an advantage over space-time methods that employ a full space-time covariance function. MEUK can be implemented with large multi-event datasets, as demonstrated by application to a large water-level dataset. Often, MEUK enables the stable solution of multiple events for similar computational effort as for a single event. MEUK provides an efficient basis for developing "wheel-and-axle" monitoring strategies [32] that combine frequently sampled locations, used to monitor changes over time, with many more locations sampled periodically to provide synoptic depictions. MEUK can aid in the identification of the core monitoring locations, allowing for reduced sampling frequency elsewhere. Although MEUK can incorporate longitudinal variograms as in other space-time methods, doing so reduces the computational advantages of MEUK.
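    The trend decomposition in (a)-(c) can be illustrated with a toy additive model: each observation is split into a grand mean, a time-invariant site component, and an event-specific shift, leaving residuals that would then be kriged with one shared variogram. This ordinary-least-squares sketch is a simplified stand-in for the paper's GLS formulation, not its implementation:

    ```python
    def fit_additive(data):
        """Split multi-event observations z[event][site] into a grand mean,
        a time-invariant site component, and an event-specific shift (a toy
        stand-in for MEUK's invariant and time-varying drift terms).
        The returned residuals are what a shared variogram would model."""
        n_ev, n_site = len(data), len(data[0])
        grand = sum(map(sum, data)) / (n_ev * n_site)
        site = [sum(data[e][s] for e in range(n_ev)) / n_ev - grand
                for s in range(n_site)]
        event = [sum(data[e]) / n_site - grand for e in range(n_ev)]
        resid = [[data[e][s] - grand - site[s] - event[e]
                  for s in range(n_site)] for e in range(n_ev)]
        return grand, site, event, resid
    ```

    Because the site component is estimated once from all events, each event's map borrows strength from every other event, which is the intuition behind MEUK's reduced predictive variances versus single-event UK.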

  8. Automated detection of instantaneous gait events using time frequency analysis and manifold embedding.

    PubMed

    Aung, Min S H; Thies, Sibylle B; Kenney, Laurence P J; Howard, David; Selles, Ruud W; Findlow, Andrew H; Goulermas, John Y

    2013-11-01

    Accelerometry is a widely used sensing modality in human biomechanics due to its portability, non-invasiveness, and accuracy. However, difficulties lie in signal variability and interpretation in relation to biomechanical events. In walking, heel strike and toe off are primary gait events for which robust and accurate detection is essential in gait-related applications. This paper describes a novel and generic event detection algorithm applicable to signals from tri-axial accelerometers placed on the foot, ankle, shank or waist. Data from healthy subjects undergoing multiple walking trials on flat and inclined, as well as smooth and tactile paving, surfaces are acquired for experimentation. The benchmark timings at which heel strike and toe off occur are determined using kinematic data recorded from a motion capture system. The algorithm extracts features from each of the acceleration signals using a continuous wavelet transform over a wide range of scales. A locality-preserving embedding method is then applied to reduce the high dimensionality caused by the multiple scales while preserving salient features for classification. A simple Gaussian mixture model is then trained to classify each of the time samples into heel strike, toe off or no event categories. Results show good detection and temporal accuracies for different sensor locations and different walking terrains. PMID:23322764
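    The first stage of the pipeline, a continuous wavelet transform of each acceleration channel, can be sketched directly. The function below computes one row of a CWT by correlating the signal with a Ricker (Mexican-hat) wavelet at a single scale; stacking rows over many scales gives the time-frequency features that are then embedded and classified. The choice of mother wavelet here is an assumption for illustration; the abstract does not specify one:

    ```python
    import math

    def ricker(t, scale):
        """Ricker (Mexican-hat) wavelet, a common CWT mother wavelet."""
        x = t / scale
        return (1 - x * x) * math.exp(-x * x / 2)

    def cwt_row(signal, scale, half_width=None):
        """One row of a continuous wavelet transform: correlate the signal
        with a Ricker wavelet at a single scale, zero-padding at the ends.
        Stacking rows over many scales yields the multi-scale feature
        vector assigned to each time sample."""
        hw = int(4 * scale) if half_width is None else half_width
        kernel = [ricker(k, scale) for k in range(-hw, hw + 1)]
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for k, w in enumerate(kernel, start=-hw):
                j = i + k
                if 0 <= j < len(signal):
                    acc += w * signal[j]
            out.append(acc)
        return out
    ```

    Short scales respond to the sharp heel-strike transient while long scales capture the slower toe-off pattern, which is what makes the stacked rows separable by the downstream mixture model.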

  9. Dialogue on private events

    PubMed Central

    Palmer, David C.; Eshleman, John; Brandon, Paul; Layng, T. V. Joe; McDonough, Christopher; Michael, Jack; Schoneberger, Ted; Stemmer, Nathan; Weitzman, Ray; Normand, Matthew

    2004-01-01

    In the fall of 2003, the authors corresponded on the topic of private events on the listserv of the Verbal Behavior Special Interest Group. Extracts from that correspondence raised questions about the role of response amplitude in determining units of analysis, whether private events can be investigated directly, and whether covert behavior differs from other behavior except in amplitude. Most participants took a cautious stance, noting not only conceptual pitfalls and empirical difficulties in the study of private events, but doubting the value of interpretive exercises about them. Others argued that despite such obstacles, in domains where experimental analyses cannot be done, interpretation of private events in the light of laboratory principles is the best that science can offer. One participant suggested that the notion that private events can be behavioral in nature be abandoned entirely; as an alternative, the phenomena should be reinterpreted only as physiological events. PMID:22477293

  10. Codes for sound-source location in nontonotopic auditory cortex.

    PubMed

    Middlebrooks, J C; Xu, L; Eddins, A C; Green, D M

    1998-08-01

    We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360 degrees of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of alpha-chloralose-anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20 degrees intervals of azimuth throughout 360 degrees of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180 degrees of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources. Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. Information carried in the responses of single units permitted reasonable estimates of sound

  11. Wireless Damage Location Sensing System

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E. (Inventor); Taylor, Bryant Douglas (Inventor)

    2012-01-01

    A wireless damage location sensing system uses a geometric-patterned wireless sensor that resonates in the presence of a time-varying magnetic field to generate a harmonic response that will experience a change when the sensor experiences a change in its geometric pattern. The sensing system also includes a magnetic field response recorder for wirelessly transmitting the time-varying magnetic field and for wirelessly detecting the harmonic response. The sensing system compares the actual harmonic response to a plurality of predetermined harmonic responses. Each predetermined harmonic response is associated with a severing of the sensor at a corresponding known location thereof so that a match between the actual harmonic response and one of the predetermined harmonic responses defines the known location of the severing that is associated therewith.

  12. A Satellite Interference Location System

    NASA Astrophysics Data System (ADS)

    Smith, William Whitfield, Jr.

    1990-01-01

    This dissertation describes the design and development of a system for inferring the position of terrestrial satellite uplink stations using existing domestic satellites with minimal disruption to normal satellite operation. Two methods are presented by which a quantity measured at a terrestrial receiving site is mapped into a curve of possible uplink locations on the Earth's surface. One method involves measuring differential time delays of a single uplink signal observed through two adjacent spacecraft. Another method uses a short baseline interferometer composed of the two cross-polarized and spatially separated antenna feeds aboard an affected satellite. A unique location or two dimensional solution is obtained by employing an appropriate combination of the two presented methods. A system for measurement of the required differential delays and phases is described in addition to the experimental work performed to demonstrate the feasibility of these location methods.

  13. Effects of heterogeneity on earthquake location at ISC

    NASA Astrophysics Data System (ADS)

    Adams, R. D.

    1992-12-01

    Earthquake location at the International Seismological Centre is carried out by routine least-squares analysis using Jeffreys-Bullen travel times. It is impossible to examine every earthquake in detail, but when obvious discrepancies in location become apparent, adjustments can be made by analysts, usually in phase identification or the restraint of depth. Such discrepancies often result from inappropriateness of the Jeffreys-Bullen model. The effect is most apparent in subduction zones, where it is often difficult to reconcile local and teleseismic observations, and differences from the standard model can result in substantial mislocations. Large events, located by steeply descending teleseismic phases, may be only slightly misplaced, with large residuals at close stations giving a true indication of velocity anomalies. Small events, however, are often significantly misplaced, although giving small residuals at a few close stations. These apparently well located events give compensating misinformation about velocities and location. In other areas, especially mid-oceanic ridges, difficulties in depth determination are likely to be related to deviations from a laterally homogeneous velocity model.

  14. Neuroanatomical correlates of locative prepositions.

    PubMed

    Tranel, Daniel; Kemmerer, David

    2004-10-01

    Very little research has explored which neural systems may be important for retrieving the meanings of locative prepositions (e.g., in, on, around). To begin to address this knowledge gap, we conducted a lesion study in which we tested the hypothesis that processing the meanings of locative prepositions depends on neural structures in the left inferior prefrontal cortex and left inferior parietal cortex. Seventy-eight subjects with focal, stable lesions to various parts of the telencephalon and a comparison group of 60 normal participants were studied with tasks that require production, comprehension, and semantic analysis of locative prepositions. In support of our hypothesis, we found that in subjects with impaired knowledge of locative prepositions, the highest region of lesion overlap was in the left frontal operculum and the left supramarginal gyrus, and in the white matter subjacent to these two areas. In a second study, focused on six subjects who had pervasive defects for locative preposition knowledge, we confirmed that such defects were associated specifically with damage to the posterior left frontal operculum, white matter subjacent to this region, and white matter underneath the inferior parietal operculum. These subjects did not have basic impairments in spatial processing or working memory, and they had relatively well-preserved processing of conceptual knowledge for actions and various categories of concrete entities (e.g., persons, animals, tools). All six subjects, however, had defects in naming actions, and some of them also had defective naming of some categories of concrete entities. Overall, the findings converge nicely with recent results from functional imaging approaches, and with classic studies from the aphasia-based literature, and suggest that the left inferior prefrontal and left inferior parietal regions have crucial, albeit not exclusive, roles in processing knowledge associated with locative prepositions. PMID:21038229

  15. Very Fast and Accurate Azimuth Disambiguation of Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Rudenko, G. V.; Anfinogentov, S. A.

    2014-05-01

    We present a method for fast and accurate azimuth disambiguation of vector magnetogram data regardless of the location of the analyzed region on the solar disk. The direction of the transverse field is determined with the principle of minimum deviation of the field from the reference (potential) field. The new disambiguation (NDA) code is examined on the well-known models of Metcalf et al. ( Solar Phys. 237, 267, 2006) and Leka et al. ( Solar Phys. 260, 83, 2009), and on an artificial model based on the observed magnetic field of AR 10930 (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We compare Hinode/SOT-SP vector magnetograms of AR 10930 disambiguated with three codes: the NDA code, the nonpotential magnetic-field calculation (NPFC: Georgoulis, Astrophys. J. Lett. 629, L69, 2005), and the spherical minimum-energy method (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We then illustrate the performance of NDA on SDO/HMI full-disk magnetic-field observations. We show that our new algorithm is more than four times faster than the fastest algorithm that provides the disambiguation with a satisfactory accuracy (NPFC). At the same time, its accuracy is similar to that of the minimum-energy method (a very slow algorithm). In contrast to other codes, the NDA code maintains high accuracy when the region to be analyzed is very close to the limb.
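
    The minimum-deviation principle used for the disambiguation is easy to sketch: a transverse-field measurement fixes the azimuth only up to 180°, and of the two candidates one keeps the direction closer to the reference (potential) field. A minimal illustration in Python; the function name and interface are invented for this sketch and are not part of the NDA code:

```python
import math

def disambiguate_azimuth(theta_obs, ref_vec):
    """Resolve the 180-degree ambiguity of a transverse-field azimuth
    by choosing the candidate closer to a reference (e.g. potential)
    field vector: the minimum-deviation principle.

    theta_obs : measured azimuth in radians (ambiguous by pi)
    ref_vec   : (x, y) components of the reference transverse field
    Returns the disambiguated azimuth in [0, 2*pi).
    """
    candidates = (theta_obs, theta_obs + math.pi)
    # Keep the candidate whose unit vector projects most strongly
    # onto the reference field direction.
    best = max(candidates,
               key=lambda t: math.cos(t) * ref_vec[0] + math.sin(t) * ref_vec[1])
    return best % (2.0 * math.pi)
```

In the full problem the reference field is a potential field computed from the line-of-sight magnetogram and the comparison is made pixel by pixel; this sketch only shows the per-pixel decision rule.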

  16. Accurate Reading with Sequential Presentation of Single Letters

    PubMed Central

    Price, Nicholas S. C.; Edwards, Gemma L.

    2012-01-01

    Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated whether reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 wpm and accuracies of over 90% with the single letter reading (SLR) method, and naive participants achieved average reading rates of over 30 wpm with greater than 90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words that occur more frequently in the lexicon were identified more accurately and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or in scenarios with limited spatial and temporal resolution, such as for patients with low vision or visual prostheses. PMID:23115548

  17. Record-breaking events during the compressive failure of porous materials.

    PubMed

    Pál, Gergő; Raischel, Frank; Lennartz-Sassinek, Sabine; Kun, Ferenc; Main, Ian G

    2016-03-01

    An accurate understanding of the interplay between random and deterministic processes in generating extreme events is of critical importance in many fields, from forecasting extreme meteorological events to the catastrophic failure of materials and in the Earth. Here we investigate the statistics of record-breaking events in the time series of crackling noise generated by local rupture events during the compressive failure of porous materials. The events are generated by computer simulations of the uniaxial compression of cylindrical samples in a discrete element model of sedimentary rocks that closely resemble those of real experiments. The number of records grows initially as a decelerating power law of the number of events, followed by an acceleration immediately prior to failure. The distributions of the size and lifetime of records are power laws with relatively low exponents. We demonstrate the existence of a characteristic record rank k*, which separates the two regimes of the time evolution. Up to this rank, deceleration occurs due to the effect of random disorder. Record breaking then accelerates towards macroscopic failure, when physical interactions leading to spatial and temporal correlations dominate the location and timing of local ruptures. The size distribution of records of different ranks has a universal form independent of the record rank. Subsequences of events that occur between consecutive records are characterized by a power-law size distribution, with an exponent which decreases as failure is approached. High-rank records are preceded by smaller events of increasing size and waiting time, and they are followed by a relaxation process. As a reference, surrogate time series are generated by reshuffling the event times. The record statistics of the uncorrelated surrogates agrees very well with the corresponding predictions of independent identically distributed random variables, which confirms that temporal and spatial
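
    The record-counting statistic underlying this analysis is straightforward, and for independent identically distributed variables the expected number of records among n events is the harmonic number H_n ≈ ln(n) + 0.5772, which is the benchmark the reshuffled surrogates are compared against. A minimal sketch (illustrative code, not the authors' simulation):

```python
import random

def count_records(series):
    """Number of record-breaking events: entries strictly larger than
    every preceding entry of the time series."""
    n_rec, best = 0, float("-inf")
    for x in series:
        if x > best:
            n_rec, best = n_rec + 1, x
    return n_rec

def expected_records_iid(n):
    """Expected record count for n i.i.d. draws: the harmonic number
    H_n = 1 + 1/2 + ... + 1/n ~ ln(n) + 0.5772."""
    return sum(1.0 / k for k in range(1, n + 1))

# Shuffled (uncorrelated) surrogates should follow the i.i.d. benchmark.
random.seed(1)
mean_records = sum(count_records([random.random() for _ in range(1000)])
                   for _ in range(500)) / 500.0
```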

  18. Intensity, magnitude, location and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, Walter; Hough, Susan; Martin, Stacey; Bilham, Roger

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly
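
    The Bakun and Wentworth (1997) approach can be sketched as a grid search: for each trial epicenter, invert the attenuation relation at every site to get a best-fitting magnitude, then keep the epicenter that minimizes the intensity misfit. The toy implementation below uses placeholder attenuation coefficients, not the calibrated Indian, Himalayan, or Californian values:

```python
import math

# Illustrative intensity-attenuation relation of the Bakun-Wentworth
# form; these coefficients are placeholders, not the calibrated values:
#     MMI = C0 + C1*M - C2*log10(r) - C3*r      (r in km)
C0, C1, C2, C3 = 1.0, 1.5, 2.0, 0.002

def dist_km(lat1, lon1, lat2, lon2):
    """Flat-earth distance in km (adequate for a local grid search),
    floored at 1 km to keep log10 well behaved."""
    dx = (lon1 - lon2) * 111.0 * math.cos(math.radians(0.5 * (lat1 + lat2)))
    dy = (lat1 - lat2) * 111.0
    return max(math.hypot(dx, dy), 1.0)

def grid_search(obs, lats, lons):
    """obs: list of (lat, lon, MMI). For each trial epicenter the
    best-fitting magnitude is the mean of per-site magnitudes inverted
    from the attenuation relation; the epicenter minimizing the RMS
    intensity misfit wins. Returns (lat, lon, magnitude, rms)."""
    best = None
    for lat in lats:
        for lon in lons:
            r = [dist_km(la, lo, lat, lon) for la, lo, _ in obs]
            mags = [(mmi - C0 + C2 * math.log10(ri) + C3 * ri) / C1
                    for (_, _, mmi), ri in zip(obs, r)]
            m = sum(mags) / len(mags)
            rms = math.sqrt(sum(
                (mmi - (C0 + C1 * m - C2 * math.log10(ri) - C3 * ri)) ** 2
                for (_, _, mmi), ri in zip(obs, r)) / len(obs))
            if best is None or rms < best[3]:
                best = (lat, lon, m, rms)
    return best
```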

  20. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...

  1. Backwater controls of avulsion location on deltas

    NASA Astrophysics Data System (ADS)

    Chatanantavet, Phairot; Lamb, Michael P.; Nittrouer, Jeffrey A.

    2012-01-01

    River delta complexes are built in part through repeated river-channel avulsions, which often occur about a persistent spatial node creating delta lobes that form a fan-like morphology. The controls on avulsion location are poorly understood, yet predicting where avulsions will occur is essential for wetland restoration, hazard mitigation, reservoir characterization, and delta morphodynamics. Following previous work, we show that the upstream distance from the river mouth where avulsions occur is coincident with the backwater length, i.e., the upstream extent of river flow that is affected by hydrodynamic processes in the receiving basin. To explain this observation we formulate a fluvial morphodynamic model that is coupled to an offshore spreading river plume and subject it to a range of river discharges. Results show that avulsion is less likely in the downstream portion of the backwater zone because, during high-flow events, the water surface is drawn down near the river mouth to match that of the offshore plume, resulting in river-bed scour and a reduced likelihood of overbank flow. Furthermore, during low-discharge events, flow deceleration near the upstream extent of backwater causes enhanced deposition locally and a reduced channel-fill timescale there. Both mechanisms favor preferential avulsion in the upstream part of the backwater zone. These dynamics are fundamentally due to variable river discharges and a coupled offshore river plume, with implications for predicting delta response to climate and sea level change, and fluvio-deltaic stratigraphy.
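
    The backwater length invoked above has a standard first-order scaling, L_b ~ h/S: the characteristic flow depth divided by the bed slope. A trivial sketch (numbers illustrative only):

```python
def backwater_length_km(depth_m, slope):
    """First-order backwater length scale L_b ~ h / S: flow depth
    divided by bed slope, converted to kilometres. This is the
    upstream extent over which the receiving basin's water level
    influences the river flow."""
    return depth_m / slope / 1000.0

# e.g. a deep, low-gradient lowland river with ~30 m depth and a bed
# slope of ~5e-5 gives a backwater zone of order several hundred km.
```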

  2. An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft

    NASA Technical Reports Server (NTRS)

    Bauer, Jeffrey E.; Teets, Edward

    1997-01-01

    An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
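
    The core of such an estimate reduces to a ballistic drift calculation: time to ground from altitude and descent rate, then horizontal displacement from the current wind. A toy sketch of that idea (the actual algorithm also accounts for vehicle mass and the termination technique):

```python
def impact_point(alt_m, descent_rate_ms, pos_en, wind_e_ms, wind_n_ms):
    """First-order impact estimate for a terminated vehicle: fall time
    from altitude and descent rate, drifted by the current wind.
    pos_en is the (east, north) position in metres at termination."""
    t_fall = alt_m / descent_rate_ms           # seconds to reach the ground
    east = pos_en[0] + wind_e_ms * t_fall      # wind drift, east component
    north = pos_en[1] + wind_n_ms * t_fall     # wind drift, north component
    return east, north
```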

  3. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET

    PubMed Central

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-01-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system. PMID:26543243
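
    The bias-estimation step can be illustrated with a toy solver: each coincidence detector pair measures a time difference t_ij ≈ b_i - b_j between intrinsic biases, and an iterative averaging scheme recovers the per-detector biases up to an overall constant. This is a sketch of the general idea, not the authors' algorithm:

```python
def align_timing(pairs, n_det, iters=100):
    """Estimate per-detector time biases b from measured coincidence
    time differences t_ij ~ b_i - b_j, by iteratively distributing
    each pair's residual between its two detectors.
    pairs: list of (i, j, t_ij). Returns biases with zero mean
    (they are only defined up to an overall constant)."""
    b = [0.0] * n_det
    for _ in range(iters):
        corr = [0.0] * n_det
        cnt = [0] * n_det
        for i, j, t in pairs:
            r = t - (b[i] - b[j])        # residual of this pair
            corr[i] += 0.5 * r
            cnt[i] += 1
            corr[j] -= 0.5 * r
            cnt[j] += 1
        for k in range(n_det):
            if cnt[k]:
                b[k] += corr[k] / cnt[k]
        mean = sum(b) / n_det            # fix the gauge each sweep
        b = [x - mean for x in b]
    return b
```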

  4. AESOP: Adaptive Event detection SOftware using Programming by example

    NASA Astrophysics Data System (ADS)

    Thangali, Ashwin; Prasad, Harsha; Kethamakka, Sai; Demirdjian, David; Checka, Neal

    2015-05-01

    This paper presents AESOP, a software tool for automatic event detection in video. AESOP employs a supervised learning approach for constructing event models, given training examples from different event classes. A trajectory-based formulation is used for modeling events with an aim towards incorporating invariance to changes in the camera location and orientation parameters. The proposed formulation is designed to accommodate events that involve interactions between two or more entities over an extended period of time. AESOP's event models are formulated as HMMs to improve the event detection algorithm's robustness to noise in input data and to achieve computationally efficient algorithms for event model training and event detection. AESOP's performance is demonstrated on a wide range of different scenarios, including stationary camera surveillance and aerial video footage captured in land and maritime environments.

  5. LOCATING LEAKS WITH ACOUSTIC TECHNOLOGY

    EPA Science Inventory

    Many water distribution systems in this country are almost 100 years old. About 26 percent of piping in these systems is made of unlined cast iron or steel and is in poor condition. Many methods that locate leaks in these pipes are time-consuming, costly, disruptive to operations...

  6. Features, Events, and Processes: Disruptive Events

    SciTech Connect

    J. King

    2004-03-31

    The primary purpose of this analysis is to evaluate seismic- and igneous-related features, events, and processes (FEPs). These FEPs represent areas of natural system processes that have the potential to produce disruptive events (DE) that could impact repository performance and are related to the geologic processes of tectonism, structural deformation, seismicity, and igneous activity. Collectively, they are referred to as the DE FEPs. This evaluation determines which of the DE FEPs are excluded from modeling used to support the total system performance assessment for license application (TSPA-LA). The evaluation is based on the data and results presented in supporting analysis reports, model reports, technical information, or corroborative documents that are cited in the individual FEP discussions in Section 6.2 of this analysis report.

  7. An event database for rotational seismology

    NASA Astrophysics Data System (ADS)

    Salvermoser, Johannes; Hadziioannou, Celine; Hable, Sarah; Chow, Bryant; Krischer, Lion; Wassermann, Joachim; Igel, Heiner

    2016-04-01

    The ring laser sensor (G-ring) located at Wettzell, Germany, has routinely observed earthquake-induced rotational ground motions around a vertical axis since its installation in 2003. Here we present results from a recently installed event database, the first to provide ring laser event data in an open-access format. Based on the GCMT event catalogue and some search criteria, seismograms from the ring laser and the collocated broadband seismometer are extracted and processed. The ObsPy-based processing scheme generates plots showing waveform fits between rotation rate and transverse acceleration and extracts characteristic wavefield parameters such as peak ground motions, noise levels, Love wave phase velocities and waveform coherence. For each event, these parameters are stored in a text file (a json dictionary) which is easily readable and accessible on the website. The database contains >10000 events starting in 2007 (Mw>4.5). It is updated daily and therefore provides recent events with a time lag of at most 24 hours. The user interface allows events to be filtered by epoch, magnitude, and source area, whereupon the events are displayed on a zoomable world map. We investigate how well the rotational motions are compatible with the expectations from the surface wave magnitude scale. In addition, the website offers some Python source code examples for downloading and processing the openly accessible waveforms.
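
    The per-event text file mentioned above is a json dictionary; a sketch of what one such record might look like (the field names here are invented for illustration, not the database's actual schema):

```python
import json

# Hypothetical parameter record for one event, mirroring the kind of
# json dictionary the database stores per event (field names invented).
event = {
    "event_id": "C201604010000A",
    "magnitude_mw": 6.1,
    "peak_rotation_rate_nrad_s": 42.0,
    "peak_transverse_acc_nm_s2": 870.0,
    "love_wave_phase_velocity_km_s": 4.1,
    "waveform_coherence": 0.82,
}

text = json.dumps(event, indent=2, sort_keys=True)   # what gets written
restored = json.loads(text)                          # what a user reads back
```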

  8. An automatic procedure for high-resolution earthquake locations: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Latorre, Diana; Piccinini, Davide

    2014-05-01

    The characterization of the geometry, kinematics and rheology of fault zones from seismological data depends on our ability to accurately locate the largest possible number of low-magnitude seismic events. To this aim, we have been working for the past three years to develop an advanced modular earthquake location procedure able to automatically retrieve high-resolution earthquake catalogues directly from continuous waveform data. We use seismograms recorded at about 60 seismic stations located both at the surface and at depth. The network covers an area of about 80x60 km with a mean inter-station distance of 6 km. These stations are part of a Near Fault Observatory (TABOO; http://taboo.rm.ingv.it/), consisting of multi-sensor stations (seismic, geodetic, geochemical and electromagnetic). This permanent scientific infrastructure, managed by the INGV, is devoted to studying the earthquake preparatory phase and the fast/slow (i.e., seismic/aseismic) deformation processes active along the Alto Tiberina fault (ATF) located in the northern Apennines (Italy). The ATF is potentially one of the rare worldwide examples of an active low-angle (< 15°) normal fault accommodating crustal extension and characterized by a regular occurrence of micro-earthquakes. The modular procedure combines: i) a sensitive detection algorithm optimized to declare low-magnitude events; ii) an accurate picking procedure that provides consistently weighted P- and S-wave arrival times, P-wave first-motion polarities and the maximum waveform amplitude for local magnitude calculation; iii) both linearized iterative and non-linear global-search earthquake location algorithms to compute accurate absolute locations of single events in a 3D geological model (see Latorre et al., same session); iv) cross-correlation and double-difference location methods to compute high-resolution relative event locations. This procedure is now running off-line with a delay of one week relative to real time. We are now implementing this

  9. Location and identification of radioactive waste in Massachusetts Bay

    SciTech Connect

    Colton, D.P.; Louft, H.L.

    1993-12-31

    The accurate location and identification of hazardous waste materials dumped in the world's oceans are becoming an increasing concern. For years, the oceans have been viewed as a convenient and economical place to dispose of all types of waste. In all but a few cases, major dump sites have been closed, leaving behind years of accumulated debris. The extent of past environmental damage, the possibility of continued environmental damage, and the possibility of hazardous substances reaching the human food chain need to be carefully investigated. This paper reports an attempt to accurately locate and identify the radioactive component of the waste material. The Department of Energy's Remote Sensing Laboratory (RSL), in support of the US Environmental Protection Agency (EPA), provided the precision navigation system and prototype underwater radiological monitoring equipment that were used during this project. The paper also describes the equipment used, presents the data obtained, and discusses future equipment development.

  10. Two-dimensional location and direction estimating method.

    PubMed

    Haga, Teruhiro; Tsukamoto, Sosuke; Hoshino, Hiroshi

    2008-01-01

    In this paper, a method of estimating both the position and the rotation angle of an object on a measurement stage is proposed. The system utilizes radio communication technology and the directivity of an antenna. As a prototype system, a measurement stage (a circle 240 mm in diameter) with 36 antennas placed at 10-degree intervals was developed. Two transmitter antennas are mounted at a right angle on the stage as the target object, and the position and rotation angle are estimated by measuring the radio communication efficiency of each of the 36 antennas. The experimental results revealed that even when the estimated location is not very accurate (about a 30 mm error), the rotation angle is accurately estimated (about a 2.33-degree error on average). The result suggests that the proposed method will be useful for estimating the location and the direction of an object.

  11. Features, Events, and Processes: Disruptive Events

    SciTech Connect

    P. Sanchez

    2004-11-08

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of the disruptive events features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment for license application (TSPA-LA). A screening decision, either "Included" or "Excluded," is given for each FEP, along with the technical basis for screening decisions. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.114 (d), (e), and (f) [DIRS 156605]. The FEPs addressed in this report deal with both seismic and igneous disruptive events, such as fault displacements through the repository and an igneous intrusion into the repository. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded). Previous versions of this report were developed to support the total system performance assessments (TSPA) for various prior repository designs. This revision addresses the repository design for the license application (LA).

  12. Committed Sport Event Volunteers

    ERIC Educational Resources Information Center

    Han, Keunsu; Quarterman, Jerome; Strigas, Ethan; Ha, Jaehyun; Lee, Seungbum

    2013-01-01

    The purpose of this study was to investigate the relationships among selected demographic characteristics (income, education and age), motivation and commitment of volunteers at a sporting event. Three-hundred and five questionnaires were collected from volunteers in a marathon event and analyzed using structural equation modeling (SEM). Based on…

  13. Activating Event Knowledge

    ERIC Educational Resources Information Center

    Hare, Mary; Jones, Michael; Thomson, Caroline; Kelly, Sarah; McRae, Ken

    2009-01-01

    An increasing number of results in sentence and discourse processing demonstrate that comprehension relies on rich pragmatic knowledge about real-world events, and that incoming words incrementally activate such knowledge. If so, then even outside of any larger context, nouns should activate knowledge of the generalized events that they denote or…

  14. Traumatic events and children

    MedlinePlus

    ... a one-time traumatic event or a repeated trauma that happens over and over again. Examples of one-time traumatic events are: Natural disasters, such as a tornado, hurricane, fire, or flood Rape Witness shooting or stabbing of a person Sudden ...

  15. Contrasting Large Solar Events

    NASA Astrophysics Data System (ADS)

    Lanzerotti, Louis J.

    2010-10-01

    After an unusually long solar minimum, solar cycle 24 is slowly beginning. A large coronal mass ejection (CME) from sunspot 1092 occurred on 1 August 2010, with effects reaching Earth on 3 August and 4 August, nearly 38 years to the day after the huge solar event of 4 August 1972. The prior event, which those of us engaged in space research at the time remember well, recorded some of the highest intensities of solar particles and rapid changes of the geomagnetic field measured to date. What can we learn from the comparisons of these two events, other than their essentially coincident dates? One lesson I took away from reading press coverage and Web reports of the August 2010 event is that the scientific community and the press are much more aware than they were nearly 4 decades ago that solar events can wreak havoc on space-based technologies.

  16. How close can we approach the event horizon of the Kerr black hole from the detection of gravitational quasinormal modes?

    NASA Astrophysics Data System (ADS)

    Nakamura, Takashi; Nakano, Hiroyuki

    2016-04-01

    Using the Wentzel-Kramers-Brillouin method, we show that the peak location (r_peak) of the potential, which determines the quasinormal mode frequency of the Kerr black hole, obeys an accurate empirical relation as a function of the specific angular momentum a and the gravitational mass M. If the quasinormal mode with a/M ≈ 1 is observed by gravitational wave detectors, we can confirm the black-hole space-time around the event horizon, r_peak = r_+ + O(√(1 − q)), where q = a/M and r_+ is the event horizon radius. However, if the quasinormal mode is different from that of general relativity, we are forced to seek the true theory of gravity and/or face the existence of the naked singularity.

  17. The Locations of Gamma-Ray Bursts Measured by Comptel

    NASA Technical Reports Server (NTRS)

    Kippen, R. Marc; Ryan, James M.; Connors, Alanna; Hartmann, Dieter H.; Winkler, Christoph; Kuiper, Lucien; Varendorff, Martin; McConnell, Mark L.; Hurley, Kevin; Hermsen, Wim; Schoenfelder, Volker

    1998-01-01

    The COMPTEL instrument on the Compton Gamma Ray Observatory is used to measure the locations of gamma-ray bursts through direct imaging of MeV photons. In a comprehensive search, we have detected and localized 29 bursts observed between 1991 April 19 and 1995 May 31. The average location accuracy of these events is 1.25 deg (1 sigma), including a systematic error of approx. 0.5 deg, which is verified through comparison with Interplanetary Network (IPN) timing annuli. The combination of COMPTEL and IPN measurements results in locations for 26 of the bursts with an average "error box" area of only approx. 0.3 deg (1 sigma). We find that the angular distribution of COMPTEL burst locations is consistent with large-scale isotropy and that there is no statistically significant evidence of small-angle autocorrelations. We conclude that there is no compelling evidence for burst repetition since no more than two of the events (or approx. 7% of the 29 bursts) could possibly have come from the same source. We also find that there is no significant correlation between the burst locations and either Abell clusters of galaxies or radio-quiet quasars. Agreement between individual COMPTEL locations and IPN annuli places a lower limit of approx. 100 AU (95% confidence) on the distance to the stronger bursts.

  18. Giant African pouched rats (Cricetomys gambianus) that work on tilled soil accurately detect land mines.

    PubMed

    Edwards, Timothy L; Cox, Christophe; Weetjens, Bart; Tewelde, Tesfazghi; Poling, Alan

    2015-09-01

    Pouched rats were employed as mine-detection animals in a quality-control application where they searched for mines in areas previously processed by a mechanical tiller. The rats located 58 mines and fragments in this 28,050-m² area with a false indication rate of 0.4 responses per 100 m². Humans with metal detectors found no mines that were not located by the rats. These findings indicate that pouched rats can accurately detect land mines in disturbed soil and suggest that they can play multiple roles in humanitarian demining. PMID:25962550

  20. [Nursing iatrogenic events in hospitalized elderly patients].

    PubMed

    dos Santos, Jussara Carvalho; Ceolim, Maria Filomena

    2009-12-01

    The purpose of this cross-sectional quantitative study was to identify iatrogenic nursing events involving elderly patients hospitalized in two nursing wards of a university hospital (Campinas, São Paulo, Brazil). Data were collected from 100 patient records (50 men, 50 women) using an instrument created by the authors. Data analysis was performed using descriptive statistics in addition to Mann-Whitney and Kruskal-Wallis tests. Results were significant at p < 0.05. Iatrogenic events, identified in 26 records, included loss of the intravenous access site (14), pressure ulcers (8) and falls (2), among others. Reports were not detailed and failed to indicate interventions to prevent new occurrences. The findings suggest the importance of creating ways to encourage nursing professionals to accurately report iatrogenic events, as well as creating wards specifically for the elderly population.

  1. Testing the ability of different seismic detections approaches to monitor aftershocks following a moderate magnitude event.

    NASA Astrophysics Data System (ADS)

    Romero, Paula; Díaz, Jordi; Ruiz, Mario; Cantavella, Juan Vicente; Gomez-García, Clara

    2016-04-01

    The detection and picking of seismic events is a permanent concern in seismic monitoring, in particular when dealing with aftershocks of moderate magnitude events. Many efforts have been made to balance computational efficiency against the robustness of the detection methods. In this work, data recorded by a high-density seismic network deployed following a magnitude 5.2 event located close to Albacete, SE Spain, are used to test the ability of classical and recently proposed detection methodologies. Two days after the main shock, which occurred on 23 February, a network formed by 11 stations from ICTJA-CSIC and 2 stations from IGN was deployed over the region, with inter-station distances ranging between 5 and 10 km. The network remained in operation until April 6th, 2015 and allowed us to manually identify up to 552 events with magnitudes from 0.2 to 3.5, located in an area of just 25 km2 inside the network limits. The detection methods studied here are the classical STA/LTA, a power spectral method, a detector based on Benford's law, and a waveform similarity method. The STA/LTA method, based on the comparison of background noise and seismic signal amplitudes, is taken as a reference to evaluate the results arising from the other approaches. The power spectral density method is based on the inspection of the characteristic frequency pattern associated with seismic events. The Benford's law detector analyses the first-digit distribution of displacement counts in a seismic waveform, on the premise that only windows containing seismic wave arrivals will match the logarithmic law. Finally, the waveform similarity method is based on the analysis of the normalized waveform amplitude, detecting those events with waveforms similar to a previously defined master event. The aim of this contribution is to inspect the ability of the different approaches to accurately detect the aftershock events for this kind of seismic crisis and to
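    Of the detectors compared above, the STA/LTA reference is simple to sketch in a few lines of numpy. The window lengths and threshold below are illustrative defaults, not values taken from the study:

    ```python
    import numpy as np

    def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0, threshold=3.5):
        """Classic STA/LTA detector: flag samples where the ratio of the
        short-term average to the long-term average of signal energy
        exceeds a threshold. Returns the indices of triggered samples."""
        nsta = int(sta_win * fs)
        nlta = int(lta_win * fs)
        energy = trace.astype(float) ** 2
        # trailing moving averages via a cumulative sum
        csum = np.cumsum(np.concatenate(([0.0], energy)))
        sta = (csum[nsta:] - csum[:-nsta]) / nsta   # mean energy over last nsta
        lta = (csum[nlta:] - csum[:-nlta]) / nlta   # mean energy over last nlta
        n = len(trace)
        ratio = np.zeros(n)
        ratio[nlta:] = sta[nlta - nsta:n - nsta] / np.maximum(lta[:n - nlta], 1e-12)
        return np.flatnonzero(ratio > threshold)
    ```

    On a synthetic noise trace with an embedded burst, the first triggered sample falls at the burst onset.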

  2. Asynchronous event-based binocular stereo matching.

    PubMed

    Rogister, Paul; Benosman, Ryad; Ieng, Sio-Hoi; Lichtsteiner, Patrick; Delbruck, Tobi

    2012-02-01

    We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation in addressing the stereo-matching problem on moving objects. Using the high temporal resolution of the acquired data stream for the dynamic vision sensor, we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-D objects when combined with geometric constraints using the distance to the epipolar lines. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events. PMID:24808513
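    The core idea of timing-based matching can be illustrated with a toy sketch. Assuming rectified sensors (so the epipolar constraint reduces to comparing rows) and events given as (t, x, y, polarity) tuples, a greedy matcher pairs events that are close in time, agree in polarity, and satisfy the epipolar constraint; all names and tolerances here are illustrative, not the paper's algorithm:

    ```python
    def match_events(left, right, dt_max=1e-3, row_tol=1):
        """Toy event-based stereo matcher. left/right are lists of
        (t, x, y, polarity). Pair events whose timestamps differ by less
        than dt_max, whose polarities agree, and which lie on (nearly)
        the same row. Returns [((xl, yl), (xr, yr), disparity), ...]."""
        matches = []
        used = set()
        for tl, xl, yl, pl in left:
            best, best_dt = None, dt_max
            for j, (tr, xr, yr, pr) in enumerate(right):
                if j in used or pr != pl or abs(yl - yr) > row_tol:
                    continue
                dt = abs(tl - tr)
                if dt < best_dt:          # keep the closest match in time
                    best, best_dt = j, dt
            if best is not None:
                used.add(best)
                tr, xr, yr, pr = right[best]
                matches.append(((xl, yl), (xr, yr), xl - xr))
        return matches
    ```

    Events far apart in time are rejected even if they satisfy the geometric constraint, which is what filters out incorrect matches in the timing-based approach.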

  3. Location, Location, Location: How Would a High-Performing Charter School Network Fare in Different States?

    ERIC Educational Resources Information Center

    Lozier, Chris; Rotherham, Andrew J.

    2011-01-01

    In this paper the authors do not examine different operating strategies for charter schools or analyze the impact of their often educationally intensive models on finance. Instead, because public charter schools are funded predominantly by public dollars, they simply ask what impact location--and its associated variances in public funding and the…

  4. How accurate is accident data in road safety research? An application of vehicle black box data regarding pedestrian-to-taxi accidents in Korea.

    PubMed

    Chung, Younshik; Chang, IlJoon

    2015-11-01

    Recently introduced vehicle black box systems, or in-vehicle video event data recorders, enable drivers to collect more accurate crash information such as location, time, and situation at the pre-crash and crash moment, which can be analyzed to identify crash causal factors more accurately. This study presents the vehicle black box system in brief and its application status in Korea. Based on the crash data obtained from the vehicle black box system, this study analyzes the accuracy of crash data collected by the existing road crash recording method, in which police officers record crashes based on the accident parties' statements or eyewitness accounts. The analysis results show that crash data recorded by the existing method have an average spatial difference of 84.48 m (standard deviation 157.75 m) and an average temporal error of 29.05 min (standard deviation 19.24 min). Additionally, the average and standard deviation of crash speed errors were found to be 9.03 km/h and 7.21 km/h, respectively. PMID:26298271

  5. Antarctic Meteorite Location Map Series

    NASA Technical Reports Server (NTRS)

    Schutt, John (Editor); Fessler, Brian (Editor); Cassidy, William (Editor)

    1989-01-01

    Antarctica has been a prolific source of meteorites since meteorite concentrations were discovered in 1969. The Antarctic Search For Meteorites (ANSMET) project has been active over much of the Trans-Antarctic Mountain Range. The first ANSMET expedition (a joint U.S.-Japanese effort) discovered what turned out to be a significant concentration of meteorites at the Allan Hills in Victoria Land. Later reconnaissance in this region resulted in the discovery of meteorite concentrations on icefields to the west of the Allan Hills, at Reckling Moraine, and Elephant Moraine. Antarctic meteorite location maps (reduced versions) of the Allan Hills main, near western, middle western, and far western icefields and the Elephant Moraine icefield are presented. Other Antarctic meteorite location maps for the specimens found by the ANSMET project are being prepared.

  6. Cholesterol's location in lipid bilayers

    DOE PAGES

    Marquardt, Drew; Kučerka, Norbert; Wassall, Stephen R.; Harroun, Thad A.; Katsaras, John

    2016-04-04

    It is well known that cholesterol modifies the physical properties of lipid bilayers. For example, the much studied liquid-ordered Lo phase contains rapidly diffusing lipids with their acyl chains in the all trans configuration, similar to gel phase bilayers. Moreover, the Lo phase is commonly associated with cholesterol-enriched lipid rafts, which are thought to serve as platforms for signaling proteins in the plasma membrane. Cholesterol's location in lipid bilayers has been studied extensively, and it has been shown – at least in some bilayers – to align differently from its canonical upright orientation, where its hydroxyl group is in the vicinity of the lipid–water interface. In this study we review recent works describing cholesterol's location in different model membrane systems with emphasis on results obtained from scattering, spectroscopic and molecular dynamics studies.

  7. Location Privacy in RFID Applications

    NASA Astrophysics Data System (ADS)

    Sadeghi, Ahmad-Reza; Visconti, Ivan; Wachsmann, Christian

    RFID-enabled systems allow fully automatic wireless identification of objects and are rapidly becoming a pervasive technology with various applications. However, despite their benefits, RFID-based systems also pose challenging risks, in particular concerning user privacy. Indeed, improvident use of RFID can disclose sensitive information about users and their locations, allowing detailed user profiles to be built. Hence, it is crucial to identify and to enforce appropriate security and privacy requirements of RFID applications (that are also compliant with legislation). This chapter first discusses security and privacy requirements for RFID-enabled systems, focusing in particular on location privacy issues. Then it explores the advances in RFID applications, stressing the security and privacy shortcomings of existing proposals. Finally, it presents new promising directions for privacy-preserving RFID systems, where as a case study we focus on electronic tickets (e-tickets) for public transportation.

  8. Event Coverage Detection and Event Source Determination in Underwater Wireless Sensor Networks

    PubMed Central

    Zhou, Zhangbing; Xing, Riliang; Duan, Yucong; Zhu, Yueqin; Xiang, Jianming

    2015-01-01

    With the advent of the Internet of Underwater Things, smart things are deployed in the ocean space and establish underwater wireless sensor networks for the monitoring of vast and dynamic underwater environments. When events are found to have possibly occurred, accurate event coverage should be detected, and potential event sources should be determined for the enactment of prompt and proper responses. To address this challenge, a technique that detects event coverage and determines event sources is developed in this article. Specifically, the occurrence of possible events corresponds to a set of neighboring sensor nodes whose sensory data may deviate from a normal sensing range in a collective fashion. An appropriate sensor node is selected as the relay node for gathering and routing sensory data to sink node(s). When sensory data are collected at sink node(s), the event coverage is detected and represented as a weighted graph, where the vertices in this graph correspond to sensor nodes and the weight specified upon the edges reflects the extent of sensory data deviating from a normal sensing range. Event sources are determined, which correspond to the barycenters in this graph. The results of the experiments show that our technique is more energy efficient, especially when the network topology is relatively steady. PMID:26694394
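    The final step described above, determining event sources as barycenters of the weighted coverage graph, can be sketched as follows. Here the barycenter is taken to be the node minimizing the sum of shortest-path distances to all other nodes, with the graph as a plain dict of dicts; this is an illustrative simplification, not the paper's exact formulation:

    ```python
    import heapq

    def shortest_paths(graph, src):
        """Dijkstra over a dict-of-dicts weighted graph; returns
        {node: distance} for all nodes reachable from src."""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def barycenter(graph):
        """Node minimizing the sum of shortest-path distances to all other
        nodes -- a proxy for the event source in the coverage graph."""
        return min(graph, key=lambda n: sum(shortest_paths(graph, n).values()))
    ```

    Edge weights would come from the extent to which neighboring sensors' readings deviate from the normal sensing range.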

  11. Computer Model Locates Environmental Hazards

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.

  12. ACCURATE TEMPERATURE MEASUREMENTS IN A NATURALLY-ASPIRATED RADIATION SHIELD

    SciTech Connect

    Kurzeja, R.

    2009-09-09

    Experiments and calculations were conducted with a 0.13 mm fine-wire thermocouple within a naturally-aspirated Gill radiation shield to assess and improve the accuracy of air temperature measurements without the use of mechanical aspiration, wind speed or radiation measurements. It was found that this thermocouple measured the air temperature with root-mean-square errors of 0.35 K within the Gill shield without correction. A linear temperature correction was evaluated based on the difference between the interior plate and thermocouple temperatures. This correction was found to be relatively insensitive to shield design and yielded an error of 0.16 K for combined day and night observations. The correction was reliable in the daytime, when the wind speed usually exceeds 1 m s⁻¹, but occasionally performed poorly at night during very light winds. Inspection of the standard deviation in the thermocouple wire temperature identified these periods but did not unambiguously locate the most serious events. However, estimates of sensor accuracy during these periods are complicated by the much larger sampling volume of the mechanically-aspirated sensor compared with the naturally-aspirated sensor and the presence of significant near-surface temperature gradients. The root-mean-square errors therefore are upper limits to the aspiration error since they include intrinsic sensor differences and intermittent volume sampling differences.
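    A linear correction of the kind described, fit against a reference (e.g., mechanically aspirated) sensor, might be sketched as below. The functional form T_ref − T_tc ≈ a + b·(T_plate − T_tc) is an assumption for illustration:

    ```python
    import numpy as np

    def fit_linear_correction(t_tc, t_plate, t_ref):
        """Least-squares fit of T_ref - T_tc = a + b*(T_plate - T_tc);
        returns (a, b). Inputs are array-like temperatures in K."""
        x = np.asarray(t_plate) - np.asarray(t_tc)
        y = np.asarray(t_ref) - np.asarray(t_tc)
        b, a = np.polyfit(x, y, 1)   # slope, intercept
        return a, b

    def apply_correction(t_tc, t_plate, a, b):
        """Corrected air temperature from the shield thermocouple and
        interior-plate readings using the fitted coefficients."""
        return np.asarray(t_tc) + a + b * (np.asarray(t_plate) - np.asarray(t_tc))
    ```

    Once fitted on a calibration period, the correction needs only the two shield temperatures, with no wind or radiation data.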

  13. Event Screening Using a Cepstral F-Statistic Technique to Identify Depth Phases

    NASA Astrophysics Data System (ADS)

    Bonner, J. L.; Reiter, D. T.; Shumway, R. H.

    2001-05-01

    The depth of a seismic event is one of the most important criteria for screening events as either explosions or earthquakes. Unfortunately, depth is also notoriously difficult to determine accurately. Methods used to determine focal depth include waveform modeling, beamforming and cepstral methods for detecting depth phases such as pP and sP. To improve depth estimation using cepstral methods we focused on three primary goals: (1) formulating a method for determining the statistical significance of peaks in the cepstrum, (2) testing the method on synthetic data as well as earthquake data with well-determined hypocenters, and (3) evaluating the method as an operational analysis tool for determining event depths using varied datasets at both teleseismic and regional distances. We formulated a cepstral F-statistic by using a classical approach to detecting a signal in a number of stationarily correlated time series. The method is particularly suited to regional array analysis; however, it can also be applied to three-component data. Tests on synthetic data show the method works best when the P-wave arrival has a signal-to-noise ratio (SNR) greater than 8-10, with the depth phase exhibiting an SNR greater than 2-4. These SNR requirements were validated using events from the Hindu Kush region of Afghanistan with well-determined depths as recorded on arrays at teleseismic distances. To test the operational capabilities of this method as a tool for event screening at a data center, we analyzed 61 events located by the pIDC and/or the National Earthquake Information Center (NEIC). Our method determined statistically significant depths for 41 of the 61 events; 10 of the events had low SNR at the recording arrays, while another 10 were either too shallow for analysis or did not exhibit depth phases. The method determined depths between 12 and 90 km for 7 of 17 events which the pIDC had fixed to 0 km. The scatter
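    The cepstral idea underlying the F-statistic, an echo such as pP producing a peak at the corresponding quefrency, can be sketched with the plain real cepstrum (this is not the F-statistic itself, and the lag bounds are illustrative):

    ```python
    import numpy as np

    def cepstral_delay(trace, fs, min_lag=0.2, max_lag=10.0):
        """Real cepstrum of a seismogram. An echo (e.g., a pP depth phase
        arriving tau seconds after P) appears as a peak at quefrency tau.
        Returns the lag (s) of the largest cepstral peak in [min_lag, max_lag]."""
        spec = np.fft.rfft(trace)
        cep = np.fft.irfft(np.log(np.abs(spec) + 1e-12))
        lags = np.arange(cep.size) / fs
        mask = (lags >= min_lag) & (lags <= max_lag)
        return lags[mask][np.argmax(cep[mask])]
    ```

    For a pulse followed 2 s later by a scaled echo, the peak falls at 2 s; converting that delay to depth then requires a velocity model and take-off angle.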

  14. Locating buildings in aerial photos

    NASA Technical Reports Server (NTRS)

    Green, James S.

    1994-01-01

    Algorithms and techniques for use in the identification and location of large buildings in digitized copies of aerial photographs are developed and tested. The building data would be used in the simulation of objects located in the vicinity of an airport that may be detected by aircraft radar. Two distinct approaches are considered. Most building footprints are rectangular in form. The first approach studied is to search for right-angled corners that characterize rectangular objects and then to connect these corners to complete the building. This problem is difficult because many nonbuilding objects, such as street corners, parking lots, and ballparks often have well defined corners which are often difficult to distinguish from rooftops. Furthermore, rooftops come in a number of shapes, sizes, shadings, and textures which also limit the discrimination task. The strategy used linear sequences of different samples to detect straight edge segments at multiple angles and to determine when these segments meet at approximately right-angles with respect to each other. This technique is effective in locating corners. The test image used has a fairly rectangular block pattern oriented about thirty degrees clockwise from a vertical alignment, and the overall measurement data reflect this. However, this technique does not discriminate between buildings and other objects at an operationally suitable rate. In addition, since multiple paths are tested for each image pixel, this is a time consuming task. The process can be speeded up by preprocessing the image to locate the more optimal sampling paths. The second approach is to rely on a human operator to identify and select the building objects and then to have the computer determine the outline and location of the selected structures. When presented with a copy of a digitized aerial photograph, the operator uses a mouse and cursor to select a target building. 
After a button on the mouse is pressed, with the cursor fully within

  15. Radar Location Equipment Development Program: Phase I

    SciTech Connect

    Sandness, G.A.; Davis, K.C.

    1985-06-01

    The work described in this report represents the first phase of a planned three-phase project designed to develop a radar system for monitoring waste canisters stored in a thick layer of bedded salt at the Waste Isolation Pilot Plant near Carlsbad, New Mexico. The canisters will be contained in holes drilled into the floor of the underground waste storage facility. It is hoped that canister location and tilt measurements can be made to accuracies of ±5 cm and ±2°, respectively. The initial phase of this project was primarily a feasibility study. Its principal objective was to evaluate the potential effectiveness of the radar method in the planned canister monitoring application. Its scope included an investigation of the characteristics of radar signals backscattered from waste canisters, a test of preliminary data analysis methods, an assessment of the effects of salt and bentonite (a proposed backfill material) on the propagation of the radar signals, and a review of current ground-penetrating radar technology. A laboratory experiment was performed in which radar signals were backscattered from simulated waste canisters. The radar data were recorded by a digital data acquisition system and were subsequently analyzed by three different computer-based methods to extract estimates of canister location and tilt. Each of these methods yielded results that were accurate within a few centimeters in canister location and within 1° in canister tilt. Measurements were also made to determine the signal propagation velocities in salt and bentonite (actually a bentonite/sand mixture) and to estimate the signal attenuation rate in the bentonite. Finally, a product survey and a literature search were made to identify available ground-penetrating radar systems and alternative antenna designs that may be particularly suitable for this unique application. 10 refs., 21 figs., 4 tabs.

  16. Selection of monitoring locations for storm water quality assessment.

    PubMed

    Langeveld, J G; Boogaard, F; Liefting, H J; Schilperoort, R P S; Hof, A; Nijhof, H; de Ridder, A C; Kuiper, M W

    2014-01-01

    Storm water runoff is a major contributor to the pollution of receiving waters. Storm water characteristics may vary significantly between locations and events. Hence, for each given location, this necessitates a well-designed monitoring campaign prior to selection of an appropriate storm water management strategy. The challenge for the design of a monitoring campaign with a given budget is to balance detailed monitoring at a limited number of locations versus less detailed monitoring at a large number of locations. This paper proposes a methodology for the selection of monitoring locations for storm water quality monitoring, based on (pre-)screening, a quick scan monitoring campaign, and final selection of locations and design of the monitoring setup. The main advantage of the method is the ability to prevent the selection of monitoring locations that turn out to be inappropriate. In addition, in this study, the quick scan resulted in a first useful dataset on storm water quality and a strong indication of illicit connections at one of the monitoring locations.

  17. Waveform Cross-Correlation for Improved North Texas Earthquake Locations

    NASA Astrophysics Data System (ADS)

    Phillips, M.; DeShon, H. R.; Oldham, H. R.; Hayward, C.

    2014-12-01

    In November 2013, a sequence of earthquakes began in Reno and Azle, TX, two communities located northwest of Fort Worth in an area of active oil and gas extraction. Only one felt earthquake had been reported within the area before the occurrence of probable injection-induced earthquakes at the Dallas-Fort Worth airport in 2008. The USGS National Earthquake Information Center (NEIC) has reported 27 felt earthquakes in the Reno-Azle area through January 28, 2014. A temporary seismic network was installed beginning in December 2013 to acquire data to improve location and magnitude estimates and characterize the earthquake sequence. Here, we present high-resolution relative earthquake locations derived using differential time data from waveform cross-correlation. Cross-correlation is computed using the GISMO software suite, and event relocation is done using double-difference relocation techniques. Waveform cross-correlation of the local data indicates high (>70%) similarity between 4 major swarms of events lasting between 18 and 24 hours. These swarms are temporal zones of high event frequency; 1.4% of the time series data accounts for 42.1% of the identified local earthquakes. Local earthquakes are occurring along the Newark East Fault System, a NE-SW-striking normal fault system previously thought inactive, at depths between 2 and 8 km in the Ellenburger limestone formation and underlying Precambrian basement. Data analysis is ongoing, and continued characterization of the associated fault will provide improved location estimates.
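    The differential times that feed double-difference relocation come from the lags of peak waveform cross-correlation between event pairs, which can be sketched as:

    ```python
    import numpy as np

    def differential_time(wave_a, wave_b, fs):
        """Differential arrival time between two similar waveforms from the
        lag of the peak of their normalized cross-correlation. Positive lag
        means wave_b arrives later than wave_a. Returns (lag_s, cc_max)."""
        a = (wave_a - wave_a.mean()) / wave_a.std()
        b = (wave_b - wave_b.mean()) / wave_b.std()
        xc = np.correlate(b, a, mode="full")
        lag = np.argmax(xc) - (len(a) - 1)   # samples; 0 = already aligned
        cc_max = xc.max() / len(a)           # normalized correlation peak
        return lag / fs, cc_max
    ```

    In practice only pairs whose cc_max exceeds a similarity threshold (e.g., the >70% used above) would contribute differential times.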

  18. Identification of the "minimal triangle" and other common event-to-event transitions in conflict and containment incidents.

    PubMed

    Bowers, Len; James, Karen; Quirk, Alan; Wright, Steve; Williams, Hilary; Stewart, Duncan

    2013-07-01

    Although individual conflict and containment events among acute psychiatric inpatients have been studied in some detail, the relationship of these events to each other has not. In particular, little is known about the temporal order of events for individual patients. This study aimed to identify the most common pathways from event to event. A sample of 522 patients was recruited from 84 acute psychiatric wards in 31 hospital locations in London and the surrounding areas during 2009-2010. Data on the order of conflict and containment events were collected for the first two weeks of admission from patients' case notes. Event-to-event transitions were tabulated and depicted diagrammatically. Event types were tested for their most common temporal placing in sequences of events. Most conflict and containment occurs within and between events of the minimal triangle (verbal aggression, de-escalation, and PRN medication), and the majority of these event sequences conclude in no further events; a minority transition to other, more severe, events. Verbal abuse and medication refusal were more likely to start sequences of disturbed behaviour. Training in the prevention and management of violence needs to acknowledge that a gradual escalation of patient behaviour does not always occur. Verbal aggression is a critical initiator of conflict events, and requires more detailed and sustained research on optimal management and prevention strategies. Similar research is required into medication refusal by inpatients.
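    The event-to-event tabulation described above amounts to counting adjacent pairs in each patient's ordered event sequence, e.g.:

    ```python
    from collections import Counter

    def transition_counts(sequences):
        """Tabulate event-to-event transitions from per-patient event
        sequences (lists of event labels in temporal order)."""
        counts = Counter()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):   # each adjacent pair is one transition
                counts[(a, b)] += 1
        return counts
    ```

    The resulting table can then be depicted as a transition diagram, with the minimal-triangle events expected to dominate; the labels here are illustrative.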

  19. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity as the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emission rate is given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer.
We present detailed methane flux
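    The far-field ratio calculation at the heart of the method is simple arithmetic over background-corrected enhancements; a sketch (note that a molar-mass conversion would also be needed if the release rate is mass-based, since the species differ):

    ```python
    def methane_flux(ch4_plume_ppb, ch4_bg_ppb, tracer_plume_ppb, tracer_bg_ppb,
                     tracer_release_rate):
        """Far-field tracer dilution estimate: the methane emission rate is
        the ratio of background-corrected plume enhancements multiplied by
        the known tracer release rate (result in the release-rate units)."""
        ch4_excess = ch4_plume_ppb - ch4_bg_ppb
        tracer_excess = tracer_plume_ppb - tracer_bg_ppb
        if tracer_excess <= 0:
            raise ValueError("no tracer enhancement detected downwind")
        return (ch4_excess / tracer_excess) * tracer_release_rate
    ```

    For example, a 600 ppb methane enhancement against a 30 ppb tracer enhancement implies a methane emission rate twenty times the tracer release rate.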

  20. Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data

    NASA Astrophysics Data System (ADS)

    Das, S. B.; Mitra, S.

    2015-12-01

    We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyzed the factors governing the accuracy of our results. The need for such a technique arises in regions with sparse seismic networks. State-of-the-art location algorithms require recordings from a minimum of three stations to obtain well-resolved locations. The problem arises when an event is recorded by fewer than three stations, which may happen for the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support for setting up a large network; (c) regions without the funding for a multi-station network; and (d) poor signal-to-noise ratio for smaller events at all stations except the one in their closest vicinity. Our technique provides a workable solution to these problematic scenarios; however, it depends strongly on the velocity model of the region. Our method uses three processing steps: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once this is obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We tested our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. Locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed from source mechanism studies of the mainshock and larger aftershocks.
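    Steps (a) and (b) can be sketched as follows. The half-space velocities and the polarity convention used to resolve the 180° back-azimuth ambiguity are illustrative assumptions, not the study's values:

    ```python
    import numpy as np

    def single_station_location(north, east, vert, ts_minus_tp, vp=6.0, vs=3.46):
        """Sketch of a single-station locator: back-azimuth from the principal
        axis of P-wave particle motion on the horizontals (180-deg ambiguity
        resolved with the vertical polarity), hypocentral distance from the
        S-P time. Velocities in km/s; returns (back_azimuth_deg, distance_km)."""
        # principal direction of horizontal particle motion
        cov = np.cov(np.vstack([north, east]))
        evals, evecs = np.linalg.eigh(cov)
        n, e = evecs[:, np.argmax(evals)]            # dominant eigenvector
        baz = np.degrees(np.arctan2(e, n)) % 360.0
        # illustrative sign convention: flip when vertical and radial
        # motion correlate positively
        if np.sum(vert * (north * n + east * e)) > 0:
            baz = (baz + 180.0) % 360.0
        dist = ts_minus_tp / (1.0 / vs - 1.0 / vp)   # S-P distance, km
        return baz, dist
    ```

    The remaining step (emergent angle plus 1-D ray tracing) converts this distance and back-azimuth into a hypocenter.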

  1. Comparison of the 26 May 2012 SEP Event with the 3 November 2011 SEP Event

    NASA Astrophysics Data System (ADS)

    Makela, P. A.; Gopalswamy, N.; Thakur, N.; Xie, H.

    2015-12-01

    We compare the solar and interplanetary events associated with two large solar energetic particle (SEP) events on 26 May 2012 and 3 November 2011. Both SEP events were detected at three longitudinally widely separated locations by the STEREO A and B spacecraft (more than 100 deg away from Earth) and the Wind and SOHO spacecraft near Earth. In Earth view, the November 2011 eruption occurred far behind the east limb at N09E154, whereas the May 2012 eruption occurred closer to the west limb at N15W121, suggesting that SEPs accelerated during the 2012 event might have had easier access to Earth. Even though the 2012 event was more intense in the GOES >10 MeV proton channel (peak intensity 14 pfu) than the 2011 event (peak intensity 4 pfu), we find that the latter event was more intense at higher energies (>40 MeV). Also, the initial rise at lower energies was slightly faster for the 2011 event as measured by SOHO/ERNE. In addition, the CME associated with the May 2012 event was faster, with an estimated space speed of ~2029 km/s, than that of the November 2011 event (1188 km/s). STEREO/EUVI images of the associated post-eruption arcades (PEAs) indicate that their orientations were different: the PEA of the May 2012 event had a high inclination (north-south), while the inclination of the PEA of the 2011 event was more moderate. Differences in flux rope orientation may also have an effect on the longitudinal extent of SEP events. These observations suggest that the dependence of solar proton intensities on the observer's longitudinal distance from the solar source is more complex than traditionally assumed.

  2. Children's Strategies and Difficulties while Using a Map to Record Locations in an Outdoor Environment

    ERIC Educational Resources Information Center

    Kastens, Kim A.; Liben, Lynn S.

    2010-01-01

    A foundational skill for all field-based sciences is the ability to accurately record the location of field observations onto a map. To investigate the strategies children use when recording their observations during field-based inquiries, fourth graders were asked to indicate the location of colored flags by placing similarly colored stickers on…

  3. Event shape sorting

    NASA Astrophysics Data System (ADS)

    Kopečná, Renata; Tomášik, Boris

    2016-04-01

    We propose a novel method for sorting events of multiparticle production according to the azimuthal anisotropy of their momentum distribution. Although the method is quite general, we advocate its use in the analysis of ultra-relativistic heavy-ion collisions, where a large number of hadrons is produced. The advantage of our method is that it can automatically sort out samples of events with histograms that indicate similar distributions of hadrons. It takes into account the whole measured histograms with all orders of anisotropy, instead of a specific observable (e.g., v_2, v_3, or q_2). It can be used for more exclusive experimental studies of flow anisotropies, which are then more easily compared to theoretical calculations. It may also be useful in the construction of mixed-events backgrounds for correlation studies, as it allows one to select events with similar momentum distributions.
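The idea of ordering events by their whole azimuthal histogram, rather than by a single v_n, can be illustrated with a toy sketch. This is a simplified stand-in, not the published algorithm (which iterates a Bayesian scheme): here each event's histogram is projected onto the leading principal component of the ensemble, and events are ordered by that score so that similarly shaped histograms end up adjacent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy events: azimuthal histograms with varying elliptic (v2-like) modulation.
n_events, n_bins = 200, 16
phi = np.linspace(0, 2 * np.pi, n_bins, endpoint=False)
v2 = rng.uniform(0.0, 0.3, n_events)
hists = (1.0 + v2[:, None] * np.cos(2 * phi)[None, :]
         + 0.02 * rng.standard_normal((n_events, n_bins)))

# Project each whole histogram onto the leading principal component and sort
# by the score; the full shape (all harmonics) drives the ordering, not a
# single precomputed observable.
centered = hists - hists.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
score = centered @ vt[0]
order = np.argsort(score)
# Events adjacent in `order` now have similar azimuthal shapes.
```

Because the toy histograms vary only in their second harmonic, the recovered ordering closely tracks the underlying v2 values.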

  4. Special Event Production.

    ERIC Educational Resources Information Center

    Currents, 2002

    2002-01-01

    Offers a descriptive table of software that helps higher education institutions orchestrate events. Information includes vendor, contact, software, price, database engine/server platform, specific features, and client type. (EV)

  5. CCG - News & Events

    Cancer.gov

    NCI's Center for Cancer Genomics (CCG) has been widely recognized for its research efforts to facilitate advances in cancer genomic research and improve patient outcomes. Find the latest news about and events featuring CCG.

  6. Holter and Event Monitors

    MedlinePlus

    ... Holter and event monitors are similar to an EKG (electrocardiogram). An EKG is a simple test that detects and records ... for diagnosing heart rhythm problems. However, a standard EKG only records the heartbeat for a few seconds. ...

  7. RAS Initiative - Events

    Cancer.gov

    The NCI RAS Initiative has organized multiple events with outside experts to discuss how the latest scientific and technological breakthroughs can be applied to discover vulnerabilities in RAS-driven cancers.

  8. "Universe" event at AIMS

    NASA Astrophysics Data System (ADS)

    2008-06-01

    Report of event of 11 May 2008 held at the African Institute of Mathematical Sciences (Muizenberg, Cape), with speakers Michael Griffin (Administrator of NASA), Stephen Hawking (Cambridge), David Gross (Kavli Institute, Santa Barbara) and George Smoot (Berkeley).

  9. QCD (&) event generators

    SciTech Connect

    Skands, Peter Z.; /Fermilab

    2005-07-01

    Recent developments in QCD phenomenology have spurred on several improved approaches to Monte Carlo event generation, relative to the post-LEP state of the art. In this brief review, the emphasis is placed on approaches for (1) consistently merging fixed-order matrix element calculations with parton shower descriptions of QCD radiation, (2) improving the parton shower algorithms themselves, and (3) improving the description of the underlying event in hadron collisions.

  10. When the Sky Falls: Performing Initial Assessments of Bright Atmospheric Events

    NASA Technical Reports Server (NTRS)

    Cooke, William J.; Brown, Peter; Blaauw, Rhiannon; Kingery, Aaron; Moser, Danielle

    2015-01-01

    The 2013 Chelyabinsk super bolide was the first "significant" impact event to occur in the age of social media and 24-hour news. Scientists, used to taking many days or weeks to analyze fireball events, were hard pressed to meet the immediate demands (within hours) for answers from the media, general public, and government officials. Fulfilling these requests forced many researchers to exploit information available from various Internet sources - videos were downloaded from sites like YouTube, geolocated via Google Street View, and quickly analyzed with improvised software; Twitter and Facebook were scoured for eyewitness accounts of the fireball and reports of meteorites. These data, combined with infrasound analyses, enabled a fairly accurate description of the Chelyabinsk event to be formed within a few hours; in particular, any relationship to 2012 DA14 (which passed near Earth later that same day) was eliminated. Results of these analyses were quickly disseminated to members of the NEO community for press conferences and media interviews. Despite a few minor glitches, the rapid initial assessment of Chelyabinsk was a triumph, permitting the timely conveyance of accurate information to the public and the incorporation of social media into fireball analyses. Beginning in 2008, the NASA Meteoroid Environments Office, working in cooperation with Western's Meteor Physics Group, developed processes and software that permit quick characterization - mass, trajectory, and orbital properties - of fireball events. These tools include automated monitoring of Twitter to establish the time of events (the first tweet is usually no more than a few seconds after the fireball), mining of YouTube and all-sky camera web archives to locate videos suitable for analyses, use of Google Earth and Street View to geolocate the video locations, and software to determine the fireball trajectory and object orbital parameters, including generation of animations suitable for popular media.

  11. Depth and source mechanism estimation for special event analysis, event screening, and regional calibration

    SciTech Connect

    Goldstein, P; Dodge, D; Ichinose, Rodgers, A; Bhattacharyya, B; Leach, R

    1999-07-23

    We have summarized the advantages and disadvantages of a variety of techniques for depth and mechanism estimation and suggest that significant work remains to be done for events with magnitudes of interest for test ban monitoring. We also describe a new, waveform modeling-based tool for fast and accurate, high-resolution depth and mechanism estimation. Significant features of this tool include its speed and accuracy and its applicability at relatively high frequencies. These features allow a user to rapidly determine accurate, high-resolution depth estimates and constraints on source mechanism for relatively small magnitude (mb ~4.5) events. Based on the accuracy of depth estimates obtained with this tool, we conclude it is useful for both the analysis of unusual or suspect events and for event screening. We also find that this tool provides significant constraints on source mechanism and have used it to develop "ground-truth" estimates of depth and mechanism for a set of events in the Middle East and North Africa. These "ground-truth" depths and mechanisms should be useful for regional calibration.

  12. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
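A pattern language of the kind described, where point-event patterns compose recursively through conjunction, disjunction, and negation, can be sketched as follows. The names and event representation here are illustrative, not the actual CERA API: a pattern is modeled as a predicate over a window of timestamped events.

```python
from typing import Callable, List, Tuple

# Illustrative CERA-style combinators: a pattern is a predicate over a window
# of (timestamp, stream, value) point events; patterns compose recursively.
Event = Tuple[float, str, str]
Pattern = Callable[[List[Event]], bool]

def point(stream: str, value: str) -> Pattern:
    """Match if the given point event appears anywhere in the window."""
    return lambda evs: any(s == stream and v == value for _, s, v in evs)

def conj(*ps: Pattern) -> Pattern:   # all sub-patterns must match
    return lambda evs: all(p(evs) for p in ps)

def disj(*ps: Pattern) -> Pattern:   # any sub-pattern may match
    return lambda evs: any(p(evs) for p in ps)

def neg(p: Pattern) -> Pattern:      # sub-pattern must not match
    return lambda evs: not p(evs)

# Hypothetical anomaly rule: an overheat reading on the sensor stream with no
# fan-on acknowledgment from the actuator stream in the same window.
anomaly = conj(point("sensor", "overheat"), neg(point("actuator", "fan_on")))

window = [(0.0, "sensor", "overheat"), (0.5, "log", "boot")]
print(anomaly(window))  # → True
```

A production system like CERA additionally handles temporal extent and incremental matching over live streams; this sketch only shows how declarative patterns compose.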

  13. Seismic event classification system

    DOEpatents

    Dowla, Farid U.; Jarpe, Stephen P.; Maurer, William

    1994-01-01

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities.
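The preprocessing pipeline described above can be demonstrated concretely. The sketch below uses random data as a stand-in for a real time-frequency distribution; the key property is that the magnitude of the 2-D FFT of the binarized distribution is unchanged by circular shifts in time or frequency, which is what makes the representation shift-invariant.

```python
import numpy as np

rng = np.random.default_rng(1)
tfd = rng.random((32, 32))                 # stand-in time-frequency distribution
binary = (tfd > tfd.mean()).astype(float)  # binary representation

# Shift-invariant representation: magnitude of the 2-D Fourier transform.
rep = np.abs(np.fft.fft2(binary))

# The same event arriving later (circularly shifted in time) yields the same
# representation, so the network input does not depend on arrival time.
shifted = np.roll(binary, shift=5, axis=1)
rep_shifted = np.abs(np.fft.fft2(shifted))
print(np.allclose(rep, rep_shifted))  # → True
```

This vector (or image) would then be fed to the self-organizing network for clustering.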

  14. Seismic event classification system

    DOEpatents

    Dowla, F.U.; Jarpe, S.P.; Maurer, W.

    1994-12-13

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities. 21 figures.

  15. Kaposi sarcoma in unusual locations

    PubMed Central

    Pantanowitz, Liron; Dezube, Bruce J

    2008-01-01

    Kaposi sarcoma (KS) is a multifocal, vascular lesion of low-grade malignant potential that presents most frequently in mucocutaneous sites. KS also commonly involves lymph nodes and visceral organs. This article deals with the manifestation of KS in unusual anatomic regions. Unusual locations of KS involvement include the musculoskeletal system, central and peripheral nervous system, larynx, eye, major salivary glands, endocrine organs, heart, thoracic duct, urinary system and breast. The development of KS within wounds and blood clots is also presented. KS in these atypical sites may prove difficult to diagnose, resulting in patient mismanagement. Theories to explain the rarity and development of KS in these unusual sites are discussed. PMID:18605999

  16. New Location Improves Efficiency | Poster

    Cancer.gov

    By Nancy Parrish, Staff Writer The physical proximity of the SAIC-Frederick Intellectual Property (IP) Office to the NCI Technology Transfer Center (NCI-TTC) is one of the many benefits of being at the Advanced Technology Research Facility (ATRF), according to Courtney Silverthorn, Ph.D. Being in one location “has increased the effectiveness of both informal communication and formal meetings. We have already brainstormed solutions for several issues in the hallway during an informal chat,” said Silverthorn, an SAIC-Frederick IP specialist.

  17. Locating nuclear power plants underground.

    PubMed

    Scott, F M

    1975-01-01

    This paper reviews some of the questions that have been asked by experts and others as to why nuclear power plants are not located or placed underground. While the safeguards and present designs make such installations unnecessary, there are some definite advantages that warrant the additional cost involved. First of all, such an arrangement does satisfy the psychological concern of a number of people and, in so doing, might gain the acceptance of the public so that such plants could be constructed in urban areas near load centers. The results of these studies are presented and some of the requirements necessary for underground installations described, including rock conditions, depth of facilities, and economics.

  18. The Earth Observatory Natural Event Tracker (EONET): An API for Matching Natural Events to GIBS Imagery

    NASA Astrophysics Data System (ADS)

    Ward, K.

    2015-12-01

    Hidden within the terabytes of imagery in NASA's Global Imagery Browse Services (GIBS) collection are hundreds of daily natural events. Some events are newsworthy, devastating, and visibly obvious at a global scale, others are merely regional curiosities. Regardless of the scope and significance of any one event, it is likely that multiple GIBS layers can be viewed to provide a multispectral, dataset-based view of the event. To facilitate linking between the discrete event and the representative dataset imagery, NASA's Earth Observatory Group has developed a prototype application programming interface (API): the Earth Observatory Natural Event Tracker (EONET). EONET supports an API model that allows users to retrieve event-specific metadata--date/time, location, and type (wildfire, storm, etc.)--and web service layer-specific metadata which can be used to link to event-relevant dataset imagery in GIBS. GIBS' ability to ingest many near real time datasets, combined with its growing archive of past imagery, means that API users will be able to develop client applications that not only show ongoing events but can also look at imagery from before and after. In our poster, we will present the API and show examples of its use.
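A client of such an API might look like the sketch below. The endpoint URL, parameter names, and response fields follow EONET's current public documentation but should be verified against the live service before use; the sample payload is a hypothetical, minimal response.

```python
import json
from urllib.parse import urlencode

BASE = "https://eonet.gsfc.nasa.gov/api/v3/events"

def build_query(category=None, start=None, end=None, status="open"):
    """Assemble an EONET events query URL (parameter names per EONET docs)."""
    params = {"status": status}
    if category:
        params["category"] = category
    if start and end:
        params["start"], params["end"] = start, end
    return f"{BASE}?{urlencode(params)}"

def event_summaries(payload: dict):
    """Extract (title, categories, last geometry date, coordinates) per event."""
    out = []
    for ev in payload.get("events", []):
        cats = [c["title"] for c in ev.get("categories", [])]
        geom = ev.get("geometry", [])
        last = geom[-1] if geom else {}
        out.append((ev["title"], cats, last.get("date"), last.get("coordinates")))
    return out

# Hypothetical minimal response, for offline illustration only.
sample = json.loads("""{"events": [{"title": "Example Wildfire",
  "categories": [{"id": "wildfires", "title": "Wildfires"}],
  "geometry": [{"date": "2015-08-01T00:00:00Z", "type": "Point",
                "coordinates": [-120.5, 38.2]}]}]}""")
print(event_summaries(sample))
```

The extracted date and coordinates are exactly the metadata needed to request the matching GIBS imagery layer for the event's time and location.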

  19. Chronology: MSFC Space Station program, 1982 - present. Major events

    NASA Technical Reports Server (NTRS)

    Whalen, Jessie E. (Compiler); Mckinley, Sarah L. (Compiler); Gates, Thomas G. (Compiler)

    1988-01-01

    The Marshall Space Flight Center (MSFC) maintains an active program to capture historical information and documentation on the MSFC's roles regarding Space Shuttle and Space Station. Marshall History Report 12, called Chronology: MSFC Space Station Program, 1982-Present, is presented. It contains synopses of major events listed according to the dates of their occurrence. Indices follow the synopses and provide additional data concerning the events listed. The Event Index provides a brief listing of all the events without synopses. The Element Index lists the specific elements of the Space Station Program under consideration in the events. The Location Index lists the locations where the events took place. The indices and synopses may be cross-referenced by using dates.

  20. Accurate description of calcium solvation in concentrated aqueous solutions.

    PubMed

    Kohagen, Miriam; Mason, Philip E; Jungwirth, Pavel

    2014-07-17

    Calcium is one of the biologically most important ions; however, its accurate description by classical molecular dynamics simulations is complicated by strong electrostatic and polarization interactions with surroundings due to its divalent nature. Here, we explore the recently suggested approach for effectively accounting for polarization effects via ionic charge rescaling and develop a new and accurate parametrization of the calcium dication. Comparison to neutron scattering and viscosity measurements demonstrates that our model allows for an accurate description of concentrated aqueous calcium chloride solutions. The present model should find broad use in efficient and accurate modeling of calcium in aqueous environments, such as those encountered in biological and technological applications.
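The charge-rescaling idea can be shown with simple arithmetic. In the electronic continuum correction picture, ionic charges in a nonpolarizable force field are scaled by 1/sqrt(eps_el), where eps_el ~ 1.78 is the electronic (high-frequency) dielectric constant of water; the exact factor adopted in the paper's Ca2+ parametrization may differ, so treat these numbers as illustrative.

```python
# Electronic continuum correction (ECC) charge rescaling, illustrative values.
eps_el = 1.78                 # electronic dielectric constant of water (assumed)
scale = eps_el ** -0.5        # ~0.75

q_ca, q_cl = +2.0, -1.0       # formal ionic charges (units of e)
q_ca_ecc = q_ca * scale       # effective Ca2+ charge used in the simulation
q_cl_ecc = q_cl * scale       # effective Cl- charge

print(round(q_ca_ecc, 2), round(q_cl_ecc, 2))  # → 1.5 -0.75
```

The rescaled charges mimic, in a mean-field way, the screening by electronic polarization that a nonpolarizable water model cannot represent explicitly.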