Sample records for earthquake location algorithm

  1. eqMAXEL: A new automatic earthquake location algorithm implementation for Earthworm

    NASA Astrophysics Data System (ADS)

    Lisowski, S.; Friberg, P. A.; Sheen, D. H.

    2017-12-01

    A common problem with automated earthquake location systems for a local- to regional-scale seismic network is false triggering and false locations inside the network caused by larger earthquakes at regional to teleseismic distances. This false-location issue also presents a problem for earthquake early warning systems, where the societal impacts of false alarms can be very expensive. Towards solving this issue, Sheen et al. (2016) implemented a robust maximum-likelihood earthquake location algorithm known as MAXEL. It was shown, with both synthetic and real data for a small number of arrivals, that large regional events are easily identifiable through metrics in the MAXEL algorithm. In the summer of 2017, we collaboratively implemented the MAXEL algorithm as a fully functional Earthworm module and tested it in regions of the USA where false detections and alarms are observed. We show robust improvement in the ability of the Earthworm system to filter out regional and teleseismic events that would have falsely located inside the network using the traditional Earthworm hypoinverse solution. We also explore different grid sizes in the implementation of the MAXEL algorithm, which was originally designed with South Korea as the target network size.
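
    As a rough illustration of the filtering idea described above, the sketch below runs a robust (L1) grid search for an epicenter and flags solutions that pin to the edge of the search grid, a crude stand-in for "this event probably originated outside the network". The station layout, uniform velocity, grid limits, and the edge-check heuristic are all illustrative assumptions; they are not the published MAXEL metrics or the Earthworm implementation.

```python
import numpy as np

def grid_search_locate(stations, picks, v=6.0, xlim=(0, 200), ylim=(0, 200), step=2.0):
    """Robust (L1) grid search over epicenter candidates; origin time solved per node.

    stations: (n, 2) array of x, y in km; picks: (n,) P arrival times in s.
    Returns the best (x, y, t0, misfit) and whether the optimum sits on the grid edge,
    a crude proxy for 'event likely outside the network' (illustrative only).
    """
    xs = np.arange(xlim[0], xlim[1] + step, step)
    ys = np.arange(ylim[0], ylim[1] + step, step)
    best = (None, None, None, np.inf)
    for x in xs:
        for y in ys:
            tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
            t0 = np.median(picks - tt)          # robust origin-time estimate
            misfit = np.abs(picks - (t0 + tt)).sum()
            if misfit < best[3]:
                best = (x, y, t0, misfit)
    on_edge = best[0] in (xs[0], xs[-1]) or best[1] in (ys[0], ys[-1])
    return best, on_edge
```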

  2. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-Scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function that utilizes constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues in locating earthquake clusters/aftershocks: first, the massive amount of time and labour required to locate a large number of seismic events manually; and second, the need to efficiently and correctly identify the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that individual P and S phases arrive at different stations in different orders, making correct phase picking nearly impossible. Using these very complicated waveforms as the input, ISSA scans the entire model space for possible combinations of time and location for the existence of seismic sources. The scanning results successfully associate the various phases from each event at all stations and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events; and the precise arrival times of P and S phases are difficult to
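
    A minimal sketch of the source-scanning idea summarized above: normalized characteristic functions are stacked at the arrival times predicted for each trial source position and origin time, and the brightest node suggests the source. The homogeneous velocity, P-only stacking, and grid/time discretization are illustrative assumptions; the actual ISSA brightness function, P/S separation, and resolution rules are not reproduced here.

```python
import numpy as np

def brightness(env, stations, dt, grid_pts, origin_times, v=6.0):
    """Stack normalized envelopes at predicted arrival times (source-scanning idea).

    env: (n_sta, n_samp) normalized characteristic functions (0..1).
    stations: (n_sta, 3) coordinates in km; grid_pts: (n_grid, 3) trial sources.
    Returns a (n_grid, n_t0) brightness map; its maximum suggests source location/time.
    """
    n_sta, n_samp = env.shape
    br = np.zeros((len(grid_pts), len(origin_times)))
    for i, src in enumerate(grid_pts):
        tt = np.linalg.norm(stations - src, axis=1) / v       # P travel times
        for j, t0 in enumerate(origin_times):
            idx = np.round((t0 + tt) / dt).astype(int)
            ok = (idx >= 0) & (idx < n_samp)
            br[i, j] = env[np.arange(n_sta)[ok], idx[ok]].sum() / n_sta
    return br
```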

  3. A Double-difference Earthquake location algorithm: Method and application to the Northern Hayward Fault, California

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2000-01-01

    We have developed an efficient method to determine high-resolution hypocenter locations over large distances. The location method incorporates ordinary absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. Residuals between observed and theoretical travel-time differences (or double-differences) are minimized for pairs of earthquakes at each station while linking together all observed event-station pairs. A least-squares solution is found by iteratively adjusting the vector difference between hypocentral pairs. The double-difference algorithm minimizes errors due to unmodeled velocity structure without the use of station corrections. Because catalog and cross-correlation data are combined into one system of equations, interevent distances within multiplets are determined to the accuracy of the cross-correlation data, while the relative locations between multiplets and uncorrelated events are simultaneously determined to the accuracy of the absolute travel-time data. Statistical resampling methods are used to estimate data accuracy and location errors. Uncertainties in double-difference locations are improved by more than an order of magnitude compared to catalog locations. The algorithm is tested, and its performance is demonstrated, on two clusters of earthquakes located on the northern Hayward fault, California. There it collapses the diffuse catalog locations into sharp images of seismicity and reveals horizontal lineations of hypocenters that define the narrow regions on the fault where stress is released by brittle failure.
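
    For reference, a minimal sketch of the double-difference datum the method minimizes, using a hypothetical uniform-velocity travel-time function: the residual couples two events observed at the same station, so errors from unmodeled structure along the common path largely cancel. The full algorithm linearizes these residuals with respect to the hypocentral parameters of both events and solves the coupled system by iterative least squares.

```python
import numpy as np

def travel_time(src, sta, v=6.0):
    """Straight-ray travel time in a uniform half-space (illustrative only)."""
    return np.linalg.norm(np.asarray(src, float) - np.asarray(sta, float)) / v

def double_difference(obs_i, obs_j, hypo_i, hypo_j, station, v=6.0):
    """dr_k^(ij) = (t_k^i - t_k^j)_obs - (t_k^i - t_k^j)_calc for one station k."""
    calc_diff = travel_time(hypo_i, station, v) - travel_time(hypo_j, station, v)
    obs_diff = obs_i - obs_j          # catalog or cross-correlation differential time
    return obs_diff - calc_diff
```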

  4. Convolutional neural network for earthquake detection and location

    PubMed Central

    Perol, Thibaut; Gharbi, Michaël; Denolle, Marine

    2018-01-01

    The recent evolution of induced seismicity in the central United States calls for exhaustive catalogs to improve seismic hazard assessment. Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. We leverage the recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for earthquake detection and location from a single waveform. We apply our technique to study the induced seismicity in Oklahoma, USA. We detect more than 17 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm is orders of magnitude faster than established methods. PMID:29487899
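
    A minimal PyTorch sketch in the spirit of the approach described above: a single-station, three-component waveform window is classified as noise (class 0) or as belonging to one of K geographic source regions, so detection and coarse location come from one softmax output. The layer count, channel widths, and window length are illustrative assumptions, not the published ConvNetQuake architecture.

```python
import torch
import torch.nn as nn

class QuakeNet(nn.Module):
    """3-component waveform window -> noise / source-region class (illustrative)."""

    def __init__(self, n_regions=6, channels=32):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(4):                         # strided convs downsample in time
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3, stride=2, padding=1),
                       nn.ReLU()]
            in_ch = channels
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)        # robust to window length
        self.classify = nn.Linear(channels, 1 + n_regions)

    def forward(self, x):                          # x: (batch, 3, n_samples)
        h = self.pool(self.features(x)).squeeze(-1)
        return self.classify(h)                    # logits over noise + regions

# Example: a 10 s window sampled at 100 Hz
logits = QuakeNet()(torch.randn(8, 3, 1000))
print(logits.shape)   # torch.Size([8, 7])
```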

  5. Three-dimensional Probabilistic Earthquake Location Applied to 2002-2003 Mt. Etna Eruption

    NASA Astrophysics Data System (ADS)

    Mostaccio, A.; Tuve', T.; Zuccarello, L.; Patane', D.; Saccorotti, G.; D'Agostino, M.

    2005-12-01

    Seismicity recorded at Mt. Etna volcano during the 2002-2003 eruption has been relocated using a probabilistic, non-linear earthquake location approach. We used the software package NonLinLoc (Lomax et al., 2000), adopting the 3D velocity model obtained by Cocina et al., 2005. We processed our data with three different algorithms: (1) a grid search; (2) a Metropolis-Gibbs sampler; and (3) an Oct-tree search. The Oct-tree algorithm gives efficient, fast and accurate mapping of the PDF (probability density function) of the earthquake location problem. More than 300 seismic events were analyzed in order to compare the non-linear location results with those obtained using a traditional, linearized earthquake location algorithm (Hypoellipse) and a 3D linearized inversion (Thurber, 1983). Moreover, we compare 38 focal mechanisms, chosen following strict selection criteria, with those obtained from the 3D and 1D results. Although the presented approach is more of a traditional relocation application, probabilistic earthquake location could also be used in routine monitoring.

  6. ConvNetQuake: Convolutional Neural Network for Earthquake Detection and Location

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Perol, T.; Gharbi, M.

    2017-12-01

    Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. In this work, we leverage the recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for probabilistic earthquake detection and location from single stations. We apply our technique to study two years of induced seismicity in Oklahoma (USA). We detect 20 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm's detection performance is at least one order of magnitude faster than that of other established methods.

  7. Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano

    NASA Astrophysics Data System (ADS)

    Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.

    2012-04-01

    An automatic procedure for locating earthquakes in quasi-real time must provide a good estimate of earthquake locations within a few seconds after the event is first detected, and is strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV and the quasi-real-time earthquake locations are performed using an automatic picking algorithm based on short-term-average to long-term-average ratios (STA/LTA) calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, and the location algorithm Hypoellipse with a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real-time earthquake locations. Because the automatic data processing may be affected by outliers (wrong picks), traditional earthquake location techniques based on a least-squares misfit function (L2 norm) often yield unstable and unreliable solutions. Moreover, on Mt. Etna, the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of 3D velocity models at Mt. Etna during recent years allows their use today in routine earthquake locations. Therefore, we selected as reference locations all the events that occurred on Mt. Etna in the last year (2011) and that were automatically detected and located by means of the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data. Successively, by using a probabilistic
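
    A minimal sketch of the Equal Differential Time (EDT) objective mentioned above: the misfit is built from observed minus predicted arrival-time differences over station pairs, so the origin time cancels and a single outlier pick degrades only the pairs it enters rather than the whole solution. The Gaussian kernel width and the uniform velocity are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def edt_likelihood(src, stations, picks, v=6.0, sigma=0.2):
    """Sum of Gaussian kernels over station pairs; origin time cancels out."""
    src = np.asarray(src, dtype=float)
    value = 0.0
    for a, b in combinations(range(len(picks)), 2):
        dt_obs = picks[a] - picks[b]
        dt_pred = (np.linalg.norm(stations[a] - src) -
                   np.linalg.norm(stations[b] - src)) / v
        value += np.exp(-((dt_obs - dt_pred) ** 2) / (2.0 * sigma ** 2))
    return value   # maximize over trial source positions (grid search, oct-tree, ...)
```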

  8. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.

  9. Re-evaluation Of The Shallow Seismicity On Mt Etna Applying Probabilistic Earthquake Location Algorithms.

    NASA Astrophysics Data System (ADS)

    Tuve, T.; Mostaccio, A.; Langer, H. K.; di Grazia, G.

    2005-12-01

    A recent research project carried out together with the Italian Civil Protection concerns the study of amplitude decay laws in various areas of the Italian territory, including Mt Etna. A particular feature of the seismic activity is the presence of moderate-magnitude earthquakes that frequently cause considerable damage in the epicentral areas. These earthquakes are supposed to occur at rather shallow depth, no more than 5 km. Given the geological context, however, these shallow earthquakes would originate in rather weak sedimentary material. In this study we check the reliability of standard earthquake location, in particular with respect to the calculated focal depth, using standard location methods as well as more advanced approaches such as the NonLinLoc software proposed by Lomax et al. (2000), used with its various options (i.e., Grid Search, Metropolis-Gibbs and Oct-Tree) and a 3D velocity model (Cocina et al., 2005). All three options of NonLinLoc gave comparable results with respect to hypocenter locations and quality. Compared to standard locations we note a significant improvement in location quality and, in particular, a considerable difference in focal depths (on the order of 1.5-2 km). However, we cannot find a clear bias towards greater or lesser depth. Further analyses concern the assessment of the stability of the locations. For this purpose we carry out various Monte Carlo experiments, perturbing the travel-time readings randomly. Further investigations are devoted to possible biases which may arise from the use of an unsuitable velocity model.

  10. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014 the Earthworm Based Earthquake Alarm Reporting (eBEAR) system has been in operation and has been used to issue warnings to schools. In 2015 the system started to provide warnings to the public in Taiwan via television and cell phones. Online performance of the eBEAR system indicates that the average reporting times afforded by the system are approximately 15 and 28 s for inland and offshore earthquakes, respectively. On average, the eBEAR system can provide more warning time than the current EEW system (3.2 s and 5.5 s for inland and offshore earthquakes, respectively). However, offshore earthquakes were usually located poorly because only P-wave arrivals were used in the eBEAR system. Additionally, in the early stage of earthquake early warning processing, only a few stations are available, and the poor station coverage may explain why offshore earthquakes are difficult to locate accurately. In the Geiger inversion procedure for earthquake location, an initial hypocenter and origin time must be supplied to the location program. For the initial hypocenter, we defined a set of test locations in the offshore area instead of using the average of the locations of the triggered stations. We ran 20 instances of the Geiger method concurrently, each with a different pre-defined initial position, to locate earthquakes. We assume that if the pre-defined initial position of an instance is close to the true earthquake location, the processing time of that instance during the iteration procedure should be less than that of the others. The results show that using pre-defined trial hypocenters in the inversion procedure improves the accuracy of offshore earthquake locations. Especially for an EEW system in its initial stage, using only 3 or 5 stations to locate earthquakes may lead to poor results because of the limited station coverage. In this study, the pre-defined trial locations provide a feasible way to improve the estimations of
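
    A minimal sketch of the strategy described above, assuming a uniform velocity and a plain Geiger (Gauss-Newton) update: several inversions are started from different pre-defined trial epicenters, and the solution that converges fastest or fits best is retained. Coordinates, velocity, and tolerances are illustrative.

```python
import numpy as np

def geiger_locate(stations, picks, trial_xy, v=6.0, max_iter=20, tol=1e-3):
    """Iterative least-squares epicenter + origin time from P picks (2-D, uniform v)."""
    x, y = trial_xy
    t0 = 0.0
    for it in range(max_iter):
        dx, dy = stations[:, 0] - x, stations[:, 1] - y
        r = np.hypot(dx, dy)
        res = picks - (t0 + r / v)                    # travel-time residuals
        # Partial derivatives of the predicted time with respect to (t0, x, y)
        G = np.column_stack([np.ones_like(r), -dx / (v * r), -dy / (v * r)])
        m, *_ = np.linalg.lstsq(G, res, rcond=None)
        t0, x, y = t0 + m[0], x + m[1], y + m[2]
        if np.linalg.norm(m[1:]) < tol:
            break
    rms = np.sqrt(np.mean((picks - (t0 + np.hypot(stations[:, 0] - x,
                                                  stations[:, 1] - y) / v)) ** 2))
    return (x, y, t0), rms, it

# Hypothetical pre-defined offshore trial epicenters (km); keep the best-fitting run:
trials = [(150.0, 50.0), (180.0, 80.0), (200.0, 20.0)]
# best = min((geiger_locate(stations, picks, t) for t in trials), key=lambda r: r[1])
```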

  11. Evaluating the Real-time and Offline Performance of the Virtual Seismologist Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Cua, G.; Fischer, M.; Heaton, T.; Wiemer, S.

    2009-04-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to regional, network-based earthquake early warning (EEW). Bayes' theorem as applied in the VS algorithm states that the most probable source estimate at any given time is a combination of contributions from relatively static prior information that does not change over the timescale of earthquake rupture and a likelihood function that evolves with time to take into account incoming pick and amplitude observations from the ongoing earthquake. Potentially useful types of prior information include network topology or station health status, regional hazard maps, earthquake forecasts, and the Gutenberg-Richter magnitude-frequency relationship. The VS codes provide magnitude and location estimates once picks are available at 4 stations; these source estimates are subsequently updated each second. The algorithm predicts the geographical distribution of peak ground acceleration and velocity using the estimated magnitude and location and appropriate ground motion prediction equations; the peak ground motion estimates are also updated each second. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS method is one of three EEW algorithms whose real-time performance is being evaluated and tested by the California Integrated Seismic Network (CISN) EEW project. A crucial component of operational EEW algorithms is the ability to distinguish between noise and earthquake-related signals in real time. We discuss various empirical approaches that allow the VS algorithm to operate in the presence of noise. Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008. On average, the VS algorithm provides initial magnitude, location, origin time, and ground motion distribution estimates within 17 seconds of the earthquake origin time. These initial estimate times are dominated by the time for 4
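
    A minimal sketch of the Bayesian combination described above, on a magnitude grid only: a Gutenberg-Richter-shaped prior is multiplied by a Gaussian likelihood centred on an amplitude-derived magnitude estimate, and the posterior is renormalized. The b-value, likelihood width, and observation model are illustrative assumptions, not the published VS relationships.

```python
import numpy as np

def vs_style_posterior(m_grid, m_obs, sigma_obs, b=1.0):
    """posterior(M) proportional to prior(M) * likelihood(obs | M) on a magnitude grid."""
    prior = 10.0 ** (-b * m_grid)                         # Gutenberg-Richter shape
    likelihood = np.exp(-0.5 * ((m_grid - m_obs) / sigma_obs) ** 2)
    post = prior * likelihood
    return post / post.sum()                              # normalize to unit mass on the grid

m = np.arange(2.0, 8.0, 0.01)
post = vs_style_posterior(m, m_obs=5.5, sigma_obs=0.5)
print(m[np.argmax(post)])   # MAP magnitude, pulled slightly below the raw estimate by the prior
```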

  12. Earthquake location in island arcs

    USGS Publications Warehouse

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  13. Observing Triggered Earthquakes Across Iran with Calibrated Earthquake Locations

    NASA Astrophysics Data System (ADS)

    Karasozen, E.; Bergman, E.; Ghods, A.; Nissen, E.

    2016-12-01

    We investigate earthquake triggering phenomena in Iran by analyzing patterns of aftershock activity around mapped surface ruptures. Iran has an intense level of seismicity (> 40,000 events listed in the ISC Bulletin since 1960) due to it accommodating a significant portion of the continental collision between Arabia and Eurasia. There are nearly thirty mapped surface ruptures associated with earthquakes of M 6-7.5, mostly in eastern and northwestern Iran, offering a rich potential to study the kinematics of earthquake nucleation, rupture propagation, and subsequent triggering. However, catalog earthquake locations are subject to up to 50 km of location bias from the combination of unknown Earth structure and unbalanced station coverage, making it challenging to assess both the rupture directivity of larger events and the spatial patterns of their aftershocks. To overcome this limitation, we developed a new two-tiered multiple-event relocation approach to obtain hypocentral parameters that are minimally biased and have realistic uncertainties. In the first stage, locations of small clusters of well-recorded earthquakes at local spatial scales (100s of events across 100 km length scales) are calibrated either by using near-source arrival times or independent location constraints (e.g. local aftershock studies, InSAR solutions), using an implementation of the Hypocentroidal Decomposition relocation technique called MLOC. Epicentral uncertainties are typically less than 5 km. Then, these events are used as prior constraints in the code BayesLoc, a Bayesian relocation technique that can handle larger datasets, to yield region-wide calibrated hypocenters (1000s of events over 1000 km length scales). With locations and errors both calibrated, the pattern of aftershock activity can reveal the type of the earthquake triggering: dynamic stress changes promote an increase in the seismicity rate in the direction of unilateral propagation, whereas static stress changes should

  14. Earthquake forecasts for the CSEP Japan experiment based on the RI algorithm

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.

    2011-03-01

    An earthquake forecast testing experiment for Japan, the first of its kind, is underway within the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) under a controlled environment. Here we give an overview of the earthquake forecast models, based on the RI algorithm, which we have submitted to the CSEP Japan experiment. Models have been submitted to a total of 9 categories, corresponding to 3 testing classes (3 years, 1 year, and 3 months) and 3 testing regions. The RI algorithm is originally a binary forecast system based on the working assumption that large earthquakes are more likely to occur in the future at locations of higher seismicity in the past. It is based on simple counts of the number of past earthquakes, which is called the Relative Intensity (RI) of seismicity. To improve its forecast performance, we first expand the RI algorithm by introducing spatial smoothing. We then convert the RI representation from a binary system to a CSEP-testable model that produces forecasts for the number of earthquakes of predefined magnitudes. We use information on past seismicity to tune the parameters. The final submittal consists of 36 executable computer codes: 4 variants corresponding to different smoothing parameters for each of the 9 categories. They will help to elucidate which categories and which smoothing parameters are the most meaningful for the RI hypothesis. The main purpose of our participation in the experiment is to better understand the significance of the relative intensity of seismicity for earthquake forecastability in Japan.
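
    A minimal sketch of the Relative Intensity idea with the spatial smoothing extension described above: past earthquakes are counted per cell, the counts are smoothed over neighbouring cells, and the smoothed map is scaled to the expected number of target events so the forecast becomes rate-based and CSEP-testable. Cell size, the Gaussian smoothing, the scipy dependency, and the scaling are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ri_forecast(lons, lats, grid_lon, grid_lat, smooth_sigma=1.0, n_target=50):
    """Smoothed past-seismicity counts -> expected target-event rate per cell."""
    counts, _, _ = np.histogram2d(lons, lats, bins=[grid_lon, grid_lat])
    smoothed = gaussian_filter(counts, sigma=smooth_sigma)   # spatial smoothing
    rate = smoothed / smoothed.sum() * n_target              # expected events per cell
    return rate
```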

  15. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  16. A test to evaluate the earthquake prediction algorithm, M8

    USGS Publications Warehouse

    Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.

    1992-01-01

    A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: 1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm

  17. Testing an Earthquake Prediction Algorithm: The 2016 New Zealand and Chile Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2017-05-01

    The 13 November 2016, M7.8, 54 km NNE of Amberley, New Zealand and the 25 December 2016, M7.6, 42 km SW of Puerto Quellon, Chile earthquakes happened outside the area of the on-going real-time global testing of the intermediate-term middle-range earthquake prediction algorithm M8, accepted in 1992 for the M7.5+ range. Naturally, over the past two decades, the level of registration of earthquakes worldwide has grown significantly and is by now sufficient for diagnosis of times of increased probability (TIPs) by the M8 algorithm over the entire territory of New Zealand and of Southern Chile below 40°S. The mid-2016 update of the M8 predictions determines TIPs in the additional circles of investigation (CIs) where the two earthquakes happened. Thus, after 50 semiannual updates in the real-time prediction mode, we (1) confirm the statistically approved high confidence of the M8-MSc predictions and (2) conclude that it is possible to expand the territory of the Global Test of the algorithms M8 and MSc in an apparently necessary revision of the 1992 settings.

  18. Preliminary earthquake locations in the Kenai Peninsula recorded by the MOOS Array and their relationship to structure in the 1964 great earthquake zone

    NASA Astrophysics Data System (ADS)

    Li, J.; Abers, G. A.; Christensen, D. H.; Kim, Y.; Calkins, J. A.

    2011-12-01

    Earthquakes in subduction zones are mostly generated at the interface between the subducting and overlying plates. In 2006-2009, the MOOS (Multidisciplinary Observations Of Subduction) seismic array was deployed around the Kenai Peninsula, Alaska, consisting of 34 broadband seismometers recording for 1-3 years. This region spans the eastern end of the Aleutian megathrust that ruptured in the 1964 Mw 9.2 great earthquake, the second largest recorded earthquake, and ongoing seismicity is abundant. Here, we report an initial analysis of seismicity recorded by MOOS, in the context of preliminary imaging. There were 16,462 events detected in one year from initial STA/LTA signal detections and subsequent event associations from the MOOS Array. We manually reviewed them to eliminate distant earthquakes and noise, leaving 11,879 local earthquakes. To refine this catalog, an adaptive auto-regressive onset estimation algorithm was applied, doubling the original dataset and producing 20,659 P picks and 22,999 S picks for one month (September 2007). Inspection shows that this approach led to almost negligible false alarms and many more events than hand picking. Within the well-sampled part of the array, roughly 200 km by 300 km, we locate, for one month, 250% more earthquakes than the permanent network catalog, or 10 earthquakes per day on this patch of the megathrust. Although the preliminary locations of earthquakes still show some scatter, we can see a concentration of events in a ~20-km-wide belt, part of which can be interpreted as the seismogenic thrust zone. In conjunction with the seismicity study, we are imaging the plate interface with receiver functions. The main seismicity zone corresponds to the top of a low-velocity layer imaged in receiver functions, nominally attributed to the top of the downgoing plate. As we refine velocity models and apply relative relocation algorithms, we expect to improve the precision of the locations substantially. When combined with image
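
    A minimal sketch of the kind of STA/LTA signal detection mentioned above, computed as moving averages of the squared trace; window lengths and the trigger threshold are illustrative, and the adaptive auto-regressive onset refinement described in the abstract is not reproduced.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=10.0):
    """Classic STA/LTA ratio on the squared trace (moving-average implementation)."""
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(energy))
    for i in range(nlta - 1, len(energy)):
        sta = (csum[i + 1] - csum[i + 1 - nsta]) / nsta   # short window ending at i
        lta = (csum[i + 1] - csum[i + 1 - nlta]) / nlta   # long window ending at i
        ratio[i] = sta / max(lta, 1e-12)
    return ratio   # samples where the ratio exceeds ~3-5 are candidate triggers
```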

  19. Optimizing correlation techniques for improved earthquake location

    USGS Publications Warehouse

    Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.

    2004-01-01

    Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
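
    A minimal sketch of measuring a relative arrival time by time-domain cross correlation with parabolic subsample interpolation, as discussed above; the normalization, window choice, and the ~0.7 quality cutoff on the peak coefficient are illustrative.

```python
import numpy as np

def cc_delay(w1, w2, dt):
    """Lag (s) that best aligns w2 onto w1, with parabolic subsample refinement."""
    w1 = (w1 - w1.mean()) / (np.std(w1) * len(w1))
    w2 = (w2 - w2.mean()) / np.std(w2)
    cc = np.correlate(w1, w2, mode="full")          # ~normalized cross-correlation
    k = int(np.argmax(cc))
    # Parabolic interpolation around the discrete peak for subsample precision
    if 0 < k < len(cc) - 1:
        a, b, c = cc[k - 1], cc[k], cc[k + 1]
        k_frac = k + 0.5 * (a - c) / (a - 2 * b + c)
    else:
        k_frac = float(k)
    lag = (k_frac - (len(w2) - 1)) * dt
    return lag, float(cc[k])                        # keep only if cc[k] >= ~0.7
```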

  20. Detection and location of earthquakes along the west coast of Chile: Examining seismicity in the 2010 M 8.8 Maule and 2014 M 8.1 Iquique earthquake rupture zones.

    NASA Astrophysics Data System (ADS)

    Diniakos, R. S.; Bilek, S. L.; Rowe, C. A.; Draganov, D.

    2015-12-01

    The subduction of the Nazca Plate beneath the South American Plate along Chile has led to some of the largest earthquakes recorded on modern seismic instrumentation. These include the 1960 M 9.5 Valdivia, 2010 M 8.8 Maule, and 2014 M 8.1 Iquique earthquakes. Slip heterogeneity for both the 2010 and 2014 earthquakes has been noted in various studies. In order to explore both spatial variations in the continued aftershocks of the 2010 event, and also seismicity to the north along Iquique prior to the 2014 earthquake relative to the high-slip regions, we are expanding the catalog of small earthquakes using template-matching algorithms to find other small earthquakes in the region. We start with an earthquake catalog developed from regional and local array data; these events provide the templates used to search through waveform data from a temporary seismic array in Malargue, Argentina, located ~300 km east of the Maule region, which operated in 2012. Our template events are first identified on the array stations, and we use a 10-s window around the P-wave arrival as the template. We then use a waveform cross-correlation algorithm to compare the template with day-long seismograms from Malargue stations. The newly detected events are then located using the HYPOINVERSE2000 program. Initial results for 103 templates on 19 of the array stations show that we find 275 new events, with an average of three new events for each template correlated. For these preliminary results, events from the Maule region appear to provide the most new detections, with an average of ten new events. We will present our locations for the detected events and we will compare them to patterns of high slip along the 2010 rupture zone of the M 8.8 Maule earthquake and the 2014 M 8.1 Iquique event.
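
    A minimal sketch of the template-matching step described above: a normalized sliding cross correlation of a short template against a continuous trace, with detections declared where the coefficient exceeds a threshold expressed here as a multiple of the median absolute deviation (MAD). The threshold and windowing are illustrative assumptions, and the loop is written for clarity rather than speed.

```python
import numpy as np

def match_template(cont, tmpl, threshold=8.0):
    """Sliding normalized CC of tmpl against cont; detections above threshold*MAD."""
    n = len(tmpl)
    tmpl = (tmpl - tmpl.mean()) / tmpl.std()
    cc = np.empty(len(cont) - n + 1)
    for i in range(len(cc)):
        win = cont[i:i + n]
        s = win.std()
        cc[i] = 0.0 if s == 0 else np.dot(win - win.mean(), tmpl) / (n * s)
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc > threshold * mad)[0], cc     # sample indices of detections
```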

  1. Foreshocks and aftershocks locations of the 2014 Pisagua, N. Chile earthquake: history of a megathrust earthquake nucleation

    NASA Astrophysics Data System (ADS)

    Fuenzalida Velasco, Amaya; Rietbrock, Andreas; Tavera, Hernando; Ryder, Isabelle; Ruiz, Sergio; Thomas, Reece; De Angelis, Silvio; Bondoux, Francis

    2015-04-01

    The April 2014 Mw 8.1 Pisagua earthquake occurred in the northern Chile seismic gap: a region of the South American subduction zone lying between Arica city and the Mejillones Peninsula. It is believed that this part of the subduction zone had not experienced a large earthquake since 1877. Thanks to the identification of this seismic gap, northern Chile was well instrumented before the Pisagua earthquake, including the Integrated Plate boundary Observatory Chile (IPOC) network and the Chilean local network installed by the Centro Sismologico Nacional (CSN). These instruments were able to record the full foreshock and aftershock sequences, offering a unique opportunity to study the nucleation process of large megathrust earthquakes. To improve azimuthal coverage of the Pisagua seismic sequence, after the earthquake we installed, in collaboration with the Instituto Geofisico del Peru (IGP), a temporary seismic network in southern Peru. The network comprised 12 short-period stations located in the coastal area between Moquegua and Tacna, operative from 1 May 2014. We also installed three stations on the slopes of the Ticsiani volcano to monitor any possible change in volcanic activity following the Pisagua earthquake. In this work we analysed the continuous seismic data recorded by the CSN and IPOC networks from 1 March to 30 June to obtain a catalogue of the sequence, including foreshocks and aftershocks. Using an automatic algorithm based on STA/LTA we obtained picks for P and S waves. Events were then defined by association in time and space, and initial locations were computed using Hypo71 and a 1D local velocity model. More than 11,000 events were identified with this method for the whole period; we then selected the best-resolved events, those with more than 7 observed arrivals including at least 2 S picks, and relocated them using the NonLinLoc software. For the main events of the sequence we carefully estimated event locations and we obtained

  2. Controls of repeating earthquakes' location from a- and b- values imaging

    NASA Astrophysics Data System (ADS)

    Chen, K. H.; Kawamura, M.

    2017-12-01

    The locations where creeping and locked fault areas abut have commonly been found to be delineated by the foci of small repeating earthquakes (REs). REs not only represent the finer structure of high-creep-rate locations, they also function as fault slip-rate indicators. Knowledge of the expected location of REs is therefore crucial for fault deformation monitoring and assessment of earthquake potential. However, a precise description of the factors determining RE locations is lacking. To explore where earthquakes tend to recur, we statistically investigated repeating earthquake catalogs and background seismicity from different regions, including six fault segments in California and Taiwan. We show that the location of repeating earthquakes can be mapped using the spatial distribution of the seismic a- and b-values obtained from the background seismicity. Molchan's error diagram statistically confirmed that repeating earthquakes occur within areas with high a-values (2.8-3.8) and high b-values (0.9-1.1) on both strike-slip and thrust fault segments. However, no significant association held true for fault segments with more complicated geometry or for wider areas with a complex fault network. The productivity of small earthquakes responsible for high a- and b-values may thus be the most important factor controlling the location of repeating earthquakes. We hypothesize that, given deformation conditions within a fault zone suitable for a planar fault plane, the location of repeating earthquakes is best described by an a-value of 3 and a b-value of 1. This feature of the a- and b-values may be useful for foreseeing the location of REs and thus for measuring creep rate at depth. Further investigation of RE-rich areas may allow testing of this hypothesis.
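
    A minimal sketch of estimating the a- and b-values referred to above from a background catalog, using the Gutenberg-Richter relation log10 N = a - bM with the standard Aki/Utsu maximum-likelihood b-value; the completeness magnitude and magnitude binning are illustrative assumptions.

```python
import numpy as np

def gutenberg_richter_ab(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki/Utsu) and the corresponding a-value."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]                                       # keep only complete part
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))    # Utsu correction for binning
    a = np.log10(len(m)) + b * mc                        # so that log10 N(>=mc) = a - b*mc
    return a, b
```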

  3. Development of a heuristic method to locate and allocate the medical centers to minimize the earthquake relief operation time.

    PubMed

    Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan

    2013-01-01

    Location-allocation is a combinatorial optimization problem and is classified as NP-hard (non-deterministic polynomial-time hard). The solution of such a problem should therefore be shifted from exact to heuristic or metaheuristic methods owing to the complexity of the problem. Locating medical centers and allocating the injured of an earthquake to them is of high importance in earthquake disaster management, so a proper method will reduce the time of the relief operation and consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem by using the capabilities of a Geographic Information System (GIS). In the proposed method, an outer genetic algorithm is applied to the location part of the problem and an inner genetic algorithm is used to optimize the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results of the proposed method showed the high performance of the designed structure for solving a capacitated location-allocation problem that may arise in a disaster situation, when injured people have to be taken to medical centers in a reasonable time.

  4. Comparing methods for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin

    2017-04-01

    There are plenty of methods available for locating small-magnitude point-source earthquakes. However, it is known that these different approaches produce different results. For each approach, the results also depend on a number of parameters which can be separated into two main branches: (1) parameters related to the observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location, etc.). Currently, the results obtained from most of the location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain knowledge of fault geometry and seismotectonic processes and, in the end, to improve seismic hazard assessment. In this work, realized in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake locations (e.g. type of phase, maximum hypocentral separation, etc.). We compare several available codes (Hypo71, HypoDD, NonLinLoc, etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).

  5. Probabilistic evaluation of earthquake detection and location capability for Illinois, Indiana, Kentucky, Ohio, and West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauk, F.J.; Christensen, D.H.

    1980-09-01

    Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum mb magnitude contour maps. Regional 90% confidence error ellipsoids are included for mb magnitude events from 2.0 through 5.0 at 0.5 mb unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.

  6. Novel Algorithms Enabling Rapid, Real-Time Earthquake Monitoring and Tsunami Early Warning Worldwide

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Michelini, A.

    2012-12-01

    We have recently introduced new methods to rapidly determine the tsunami potential and magnitude of large earthquakes (e.g., Lomax and Michelini, 2009ab, 2011, 2012). To validate these methods we have implemented them along with other new algorithms within the Early-est earthquake monitor at INGV-Rome (http://early-est.rm.ingv.it, http://early-est.alomax.net). Early-est is a lightweight software package for real-time earthquake monitoring (including phase picking, phase association and event detection, location, magnitude determination, first-motion mechanism determination, ...), and for tsunami early warning based on discriminants for earthquake tsunami potential. In a simulation using archived broadband seismograms for the devastating M9, 2011 Tohoku earthquake and tsunami, Early-est determines: the epicenter within 3 min after the event origin time, discriminants showing very high tsunami potential within 5-7 min, and magnitude Mwpd(RT) 9.0-9.2 and a correct shallow-thrusting mechanism within 8 min. Real-time monitoring with Early-est gives similar results for most large earthquakes using currently available, real-time seismogram data. Here we summarize some of the key algorithms within Early-est that enable rapid, real-time earthquake monitoring and tsunami early warning worldwide: >>> FilterPicker - a general-purpose, broad-band phase detector and picker (http://alomax.net/FilterPicker); >>> Robust, simultaneous association and location using a probabilistic global search; >>> Period-duration discriminants TdT0 and TdT50Ex for tsunami potential available within 5 min; >>> Mwpd(RT) magnitude for very large earthquakes available within 10 min; >>> Waveform P polarities determined on broad-band displacement traces, focal mechanisms obtained with the HASH program (Hardebeck and Shearer, 2002); >>> SeisGramWeb - a portable-device-ready seismogram viewer using web services in a browser (http://alomax.net/webtools/sgweb/info.html). References (see also: http

  7. Precisely locating the Klamath Falls, Oregon, earthquakes

    USGS Publications Warehouse

    Qamar, A.; Meagher, K.L.

    1993-01-01

    In this article we present preliminary results of a close-in, instrumental study of the Klamath Falls earthquake sequence, carried out as a cooperative effort by scientists from the U.S. Geological Survey (USGS) and universities in Washington, Oregon, and California. In addition to obtaining much more accurate earthquake locations, this study has improved our understanding of the relationship between seismicity and mapped faults in the region.

  8. Development of double-pair double difference location algorithm and its application to the regular earthquakes and non-volcanic tremors

    NASA Astrophysics Data System (ADS)

    Guo, H.; Zhang, H.

    2016-12-01

    Relocating earthquakes with high precision is a central task for monitoring earthquakes and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations recorded at common stations, to reduce the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To utilize the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method removes the event origin time and station correction terms from the inversion system and cancels out the effects of velocity uncertainties near and outside the source region simultaneously. We tested and applied the new method to regular earthquakes in northern California to validate its performance. In comparison, among the three DD location methods, the new double-pair DD method determines more accurate relative locations and the station-pair DD method better improves the absolute locations. Thus, we further propose a new location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick the first arrivals and derive WCC event-pair differential times, thus the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered due to the low precision of the relative locations. The ability that double-pair data
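
    A minimal sketch of the double-pair datum described above: a differential time is formed between a pair of events (i, j) and a pair of stations (a, b), so both event origin times and static station terms cancel in the observed combination. The uniform-velocity travel-time function is an illustrative stand-in for a real velocity model.

```python
import numpy as np

def tt(src, sta, v=6.0):
    """Straight-ray travel time in a uniform medium (illustrative)."""
    return np.linalg.norm(np.asarray(src, float) - np.asarray(sta, float)) / v

def double_pair_residual(t_obs, hyps, stas, i, j, a, b, v=6.0):
    """((t_i^a - t_j^a) - (t_i^b - t_j^b))_obs minus the same combination of predicted times.

    t_obs[(event, station)] are observed arrival times; event origin times and static
    station terms cancel in this combination, which is what makes the datum attractive.
    """
    obs = (t_obs[(i, a)] - t_obs[(j, a)]) - (t_obs[(i, b)] - t_obs[(j, b)])
    calc = (tt(hyps[i], stas[a], v) - tt(hyps[j], stas[a], v)) - \
           (tt(hyps[i], stas[b], v) - tt(hyps[j], stas[b], v))
    return obs - calc
```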

  9. An Envelope Based Feedback Control System for Earthquake Early Warning: Reality Check Algorithm

    NASA Astrophysics Data System (ADS)

    Heaton, T. H.; Karakus, G.; Beck, J. L.

    2016-12-01

    Earthquake early warning systems are, in general, designed as open-loop control systems, in the sense that the output, i.e., the warning messages, depends only on the input, i.e., recorded ground motions, up to the moment when the message is issued in real time. We propose an algorithm, called the Reality Check Algorithm (RCA), which assesses the accuracy of issued warning messages and then feeds the outcome of the assessment back into the system. The system would then modify its messages if necessary. That is, we propose to convert earthquake early warning systems into feedback control systems by integrating them with RCA. RCA works by continuously monitoring the observed ground motions' envelopes and comparing them to the predicted envelopes of the Virtual Seismologist (Cua 2005). The accuracy of the system's magnitude and location (both spatial and temporal) estimates is assessed separately by probabilistic classification models, which are trained by a Sparse Bayesian Learning technique using an Automatic Relevance Determination prior.

  10. ISC-GEM: Global Instrumental Earthquake Catalogue (1900-2009), II. Location and seismicity patterns

    NASA Astrophysics Data System (ADS)

    Bondár, I.; Engdahl, E. Robert; Villaseñor, A.; Harris, James; Storchak, D.

    2015-02-01

    We present the final results of a two-year project sponsored by the Global Earthquake Model (GEM) Foundation. The ISC-GEM global catalogue consists of some 19 thousand instrumentally recorded, moderate to large earthquakes, spanning 110 years of seismicity. We relocated all events in the catalogue using a two-tier approach. The EHB location methodology (Engdahl et al., 1998) was applied first to obtain improved hypocentres with special focus on the depth determination. The locations were further refined in the next step by fixing the depths to those from the EHB analysis and applying the new International Seismological Centre (ISC) location algorithm (Bondár and Storchak, 2011), which reduces location bias by accounting for the correlated travel-time prediction error structure. To facilitate the relocation effort, some one million seismic P- and S-wave arrival-time data were added to the ISC database for the period between 1904 and 1970, either from original station bulletins in the ISC archive or by digitizing the scanned images of the International Seismological Summary (ISS) bulletin (Villaseñor and Engdahl, 2005, 2007). Although no substantial amount of new phase data was acquired for the modern period (1964-2009), the number of phases used in the location has still increased by three million, owing to the fact that both the EHB and ISC locators use most well-recorded ak135 (Kennett et al., 1995) phases in the location. We show that the relocation effort yielded substantially improved locations, especially in the first half of the 20th century; we demonstrate significant improvements in focal depth estimates in subduction zones and other seismically active regions; and we show that the ISC-GEM catalogue provides an improved view of 110 years of global seismicity of the Earth. The ISC-GEM Global Instrumental Earthquake Catalogue represents the final product of one of the ten global components in the GEM program, and is available to researchers at the ISC (http://www.isc.ac.uk).

  11. Revision of earthquake hypocenter locations in GEOFON bulletin data using global source-specific station terms technique

    NASA Astrophysics Data System (ADS)

    Nooshiri, N.; Saul, J.; Heimann, S.; Tilmann, F. J.; Dahm, T.

    2015-12-01

    The use of a 1D velocity model for seismic event location is often associated with significant travel-time residuals. Particularly for regional stations in subduction zones, where the velocity structure strongly deviates from the assumed 1D model, residuals of up to ±10 seconds are observed even for clear arrivals, which leads to strongly biased locations. In fact, owing to mostly regional travel-time anomalies, arrival times at regional stations do not match the location obtained with teleseismic picks, and vice versa. If the earthquake is weak and only recorded regionally, or if fast locations based on regional stations are needed, the location may be far off the corresponding teleseismic location. In this case, implementation of travel-time corrections may lead to a reduction of the travel-time residuals at regional stations and, in consequence, significantly improve the relative location accuracy. Here, we have extended the source-specific station terms (SSST) technique to regional and teleseismic distances and adapted the algorithm for probabilistic, non-linear, global-search earthquake location. The method has been applied to specific test regions using P and pP phases from the GEOFON bulletin data for all available station networks. With this method, a set of timing corrections is calculated for each station, varying as a function of source position. In this way, an attempt is made to correct for the systematic errors introduced by limitations and inaccuracies in the assumed velocity structure, without solving for a new earth model itself. In this presentation, we draw on examples of the application of this global SSST technique to relocate earthquakes from the Tonga-Fiji subduction zone and from the Chilean margin. Our results show a considerable decrease of the root-mean-square (RMS) residual in the final earthquake location catalogs and a major reduction of the median absolute deviation (MAD) of the travel
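
    A minimal sketch of the source-specific station term idea described above: the correction applied to a target event at a given station is the average residual of neighbouring events recorded at that same station. The fixed neighbourhood radius and the simple mean are illustrative; the actual implementation smooths the residual field iteratively and handles pP as well as P.

```python
import numpy as np

def ssst_correction(ev_xyz, residuals, target_idx, station, radius_km=100.0):
    """Mean residual at `station` over events within radius_km of the target event.

    ev_xyz: (n_ev, 3) hypocentres in km; residuals: dict {(ev_idx, station): residual_s}.
    """
    target = ev_xyz[target_idx]
    d = np.linalg.norm(ev_xyz - target, axis=1)
    near = [i for i in range(len(ev_xyz))
            if d[i] <= radius_km and (i, station) in residuals]
    if not near:
        return 0.0                                  # no neighbours -> no correction
    return float(np.mean([residuals[(i, station)] for i in near]))
```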

  12. Magnitude and location of historical earthquakes in Japan and implications for the 1855 Ansei Edo earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    2005-01-01

    Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42MJMA - 0.00887Δh - 1.66logΔh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^1/2, and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting-plate intensity attenuation model where intensity is equal to -8.33 + 2.19MJMA - 0.00550Δh - 1.14logΔh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting-plate model. Using the subducting-plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
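
    A short sketch evaluating the two attenuation relations quoted above, with the notation made explicit: Δh = sqrt(Δ² + h²), where Δ is epicentral distance and h is focal depth in km, and the logarithm is taken here as base 10. The coefficients are those given in the abstract; the function names and example values are illustrative.

```python
import numpy as np

def ijma_crustal(m_jma, delta_km, h_km):
    """Predicted JMA intensity, shallow crustal model (Honshu)."""
    dh = np.hypot(delta_km, h_km)
    return -1.89 + 1.42 * m_jma - 0.00887 * dh - 1.66 * np.log10(dh)

def ijma_subducting(m_jma, delta_km, h_km):
    """Predicted JMA intensity, subducting-plate model (Japan Trench events)."""
    dh = np.hypot(delta_km, h_km)
    return -8.33 + 2.19 * m_jma - 0.00550 * dh - 1.14 * np.log10(dh)

# Illustrative call: an MJMA 7.9 event observed at delta = 60 km with h = 30 km
print(round(ijma_subducting(7.9, 60.0, 30.0), 1))
```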

  13. Improving automatic earthquake locations in subduction zones: a case study for GEOFON catalog of Tonga-Fiji region

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Heimann, Sebastian; Saul, Joachim; Tilmann, Frederik; Dahm, Torsten

    2015-04-01

    Automatic earthquake locations are sometimes associated with very large residuals, up to 10 s even for clear arrivals, especially for regional stations in subduction zones because of the strongly heterogeneous velocity structure there. Although these residuals most likely reflect unmodelled velocity heterogeneity rather than measurement errors, such stations are usually removed from the location procedure or down-weighted. While this is possible for large events, it may not be useful if the earthquake is weak. In this case, implementation of travel-time station corrections may significantly improve the automatic locations. Here, the shrinking box source-specific station term (SSST) method [Lin and Shearer, 2005] has been applied to improve the relative location accuracy of 1678 events that occurred in the Tonga subduction zone between 2010 and mid-2014. Picks were obtained from the GEOFON earthquake bulletin for all available station networks. We calculated a set of timing corrections for each station which vary as a function of source position. A separate time correction was computed for each source-receiver path at the given station by smoothing the residual field over nearby events. We begin with a very large smoothing radius essentially encompassing the whole event set and iterate by progressively shrinking the smoothing radius. In this way, we attempted to correct for the systematic errors that are introduced into the locations by inaccuracies in the assumed velocity structure, without solving for a new velocity model itself. One of the advantages of the SSST technique is that the event location part of the calculation is separate from the station term calculation and can be performed using any single-event location method. In this study, we applied a non-linear, probabilistic, global-search earthquake location method using the software package NonLinLoc [Lomax et al., 2000]. The non-linear location algorithm implemented in NonLinLoc is less
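
    A schematic sketch of the shrinking-box iteration described above (my own simplification, not the GEOFON code): each pass recomputes per-station corrections by smoothing residuals over neighbouring events within a radius that shrinks from essentially the whole event set down to a local scale. Here locate_event stands in for any single-event locator (for example a NonLinLoc run), and the data structures are assumptions.

        import numpy as np

        def shrinking_box_ssst(events, stations, locate_event, radii_km=(2000, 1000, 500, 250, 100)):
            """events: list of dicts with 'picks' {station: observed time} and 'hypo' (x, y, z).
            Returns per-(station, event) timing corrections refined over shrinking radii."""
            corrections = {(sta, i): 0.0 for i in range(len(events)) for sta in stations}
            for radius in radii_km:
                # Relocate every event with the current corrections applied to its picks.
                for i, ev in enumerate(events):
                    ev["hypo"], ev["residuals"] = locate_event(
                        ev["picks"], {s: corrections[(s, i)] for s in stations})
                # Update each event/station correction from residuals of nearby events.
                hypos = np.array([ev["hypo"] for ev in events], dtype=float)
                for i in range(len(events)):
                    near = np.linalg.norm(hypos - hypos[i], axis=1) <= radius
                    for sta in stations:
                        res = [events[j]["residuals"].get(sta) for j in np.flatnonzero(near)]
                        res = [r for r in res if r is not None]
                        if res:
                            corrections[(sta, i)] = float(np.median(res))
            return corrections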

  14. Automatic Earthquake Detection and Location by Waveform coherency in Alentejo (South Portugal) Using CatchPy

    NASA Astrophysics Data System (ADS)

    Custodio, S.; Matos, C.; Grigoli, F.; Cesca, S.; Heimann, S.; Rio, I.

    2015-12-01

    Seismic data processing is currently undergoing a step change, benefitting from high-volume datasets and advanced computing power. In the last decade, a permanent seismic network of 30 broadband stations, complemented by dense temporary deployments, covered mainland Portugal. This outstanding regional coverage currently enables the computation of a high-resolution image of the seismicity of Portugal, which contributes to fitting together the pieces of the regional seismo-tectonic puzzle. Although traditional manual inspections are valuable for refining automatic results, they are impracticable with the big data volumes now available. When conducted alone, they are also less objective, since the criteria are defined by the analyst. In this work we present CatchPy, a scanning algorithm to detect earthquakes in continuous datasets. Our main goal is to implement an automatic earthquake detection and location routine in order to have a tool to quickly process large data sets, while at the same time detecting low-magnitude earthquakes (i.e. lowering the detection threshold). CatchPy is designed to produce an event database that can be easily located using existing location codes (e.g. Grigoli et al. 2013, 2014). We use CatchPy to perform automatic detection and location of earthquakes that occurred in the Alentejo region (South Portugal), taking advantage of a dense seismic network deployed in the region for two years during the DOCTAR experiment. Results show that our automatic procedure is particularly suitable for small-aperture networks. The event detection is performed by continuously computing the short-term-average/long-term-average of two different characteristic functions (CFs). For the P phases we used a CF based on the vertical energy trace, while for S phases we used a CF based on the maximum eigenvalue of the instantaneous covariance matrix (Vidale 1991). Seismic event location is performed by waveform coherence analysis, scanning different hypocentral coordinates
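
    A minimal STA/LTA detector on a characteristic function, in the spirit of the detection step described above (not the CatchPy code); the window lengths and trigger threshold are illustrative assumptions.

        import numpy as np

        def sta_lta(cf, dt, sta_win=1.0, lta_win=30.0):
            """Classic short-term-average / long-term-average ratio of a
            characteristic function cf sampled every dt seconds."""
            ns, nl = int(sta_win / dt), int(lta_win / dt)
            csum = np.cumsum(cf, dtype=float)
            sta = (csum[ns:] - csum[:-ns]) / ns            # short-term trailing average
            lta = (csum[nl:] - csum[:-nl]) / nl            # long-term trailing average
            n = min(sta.size, lta.size)                    # align both at the last sample
            return sta[-n:] / np.maximum(lta[-n:], 1e-12)  # avoid division by zero

        # Example: vertical-energy characteristic function, declare a trigger above 4.
        # cf = trace_z ** 2
        # triggers = np.flatnonzero(sta_lta(cf, dt=0.01) > 4.0)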

  15. An automatic procedure for high-resolution earthquake locations: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Latorre, Diana; Piccinini, Davide

    2014-05-01

    The characterization of the geometry, kinematics and rheology of fault zones from seismological data depends on our ability to accurately locate the largest possible number of low-magnitude seismic events. To this aim, we have been working for the past three years to develop an advanced modular earthquake location procedure able to automatically retrieve high-resolution earthquake catalogues directly from continuous waveform data. We use seismograms recorded at about 60 seismic stations located both at the surface and at depth. The network covers an area of about 80x60 km with a mean inter-station distance of 6 km. These stations are part of a Near Fault Observatory (TABOO; http://taboo.rm.ingv.it/), consisting of multi-sensor stations (seismic, geodetic, geochemical and electromagnetic). This permanent scientific infrastructure, managed by the INGV, is devoted to studying the earthquake preparatory phase and the fast/slow (i.e., seismic/aseismic) deformation processes active along the Alto Tiberina fault (ATF) located in the northern Apennines (Italy). The ATF is potentially one of the rare worldwide examples of an active low-angle (< 15°) normal fault accommodating crustal extension and characterized by a regular occurrence of micro-earthquakes. The modular procedure combines: i) a sensitive detection algorithm optimized to declare low-magnitude events; ii) an accurate picking procedure that provides consistently weighted P- and S-wave arrival times, P-wave first-motion polarities and the maximum waveform amplitude for local magnitude calculation; iii) both linearized iterative and non-linear global-search earthquake location algorithms to compute accurate absolute locations of single events in a 3D geological model (see Latorre et al., same session); iv) cross-correlation and double-difference location methods to compute high-resolution relative event locations. This procedure is now running off-line with a delay of 1 week with respect to real time. We are now implementing this

  16. Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data

    NASA Astrophysics Data System (ADS)

    Das, S. B.; Mitra, S.

    2015-12-01

    We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyze the factors governing the accuracy of our results. The need for such a technique arises in regions with sparse seismic networks. In state-of-the-art location algorithms, recordings from a minimum of three stations are required to obtain well-resolved locations. A problem therefore arises when an event is recorded by fewer than three stations. This may happen for the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support to set up a large network; (c) regions lacking the resources to finance a multi-station network; and (d) poor signal-to-noise ratio for smaller events at most stations, except the one in their closest vicinity. Our technique provides a workable solution to the above scenarios. However, our methodology is strongly dependent on the velocity model of the region. Our method uses a three-step processing: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once this is obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We test our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. Locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed in the source-mechanism study of the mainshock and larger aftershocks.
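
    A toy version of the first two steps of the single-station scheme described above (back-azimuth from P-wave particle motion and hypocentral distance from the S-P time); the half-space velocities are placeholder assumptions, whereas the paper relies on a full 1-D model and ray tracing for the final hypocentre.

        import numpy as np

        def back_azimuth(north, east):
            """Back-azimuth (degrees from north) from P-wave particle motion on the
            two horizontal components over the P window (180-degree ambiguity is
            resolved with the P polarity on the vertical component in practice)."""
            cov = np.cov(np.vstack((north, east)))       # 2x2 horizontal covariance
            w, v = np.linalg.eigh(cov)
            n, e = v[:, np.argmax(w)]                    # dominant particle-motion direction
            return float(np.degrees(np.arctan2(e, n)) % 360.0)

        def hypocentral_distance(sp_time, vp=6.0, vs=3.5):
            """Distance (km) from the S-P time (s), assuming constant velocities."""
            return sp_time * vp * vs / (vp - vs)

        # Example: an S-P time of 4 s corresponds to roughly 34 km hypocentral distance.
        print(round(hypocentral_distance(4.0), 1))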

  17. The Mw=8.8 Maule earthquake aftershock sequence, event catalog and locations

    NASA Astrophysics Data System (ADS)

    Meltzer, A.; Benz, H.; Brown, L.; Russo, R. M.; Beck, S. L.; Roecker, S. W.

    2011-12-01

    The aftershock sequence of the Mw=8.8 Maule earthquake off the coast of Chile in February 2010 is one of the best-recorded aftershock sequences from a great megathrust earthquake. Immediately following the Maule earthquake, teams of geophysicists from Chile, France, Germany, Great Britain and the United States coordinated resources to capture aftershocks and other seismic signals associated with this significant earthquake. In total, 91 broadband stations, 48 short-period stations, and 25 accelerometers were deployed above the rupture zone of the main shock from 33-38.5°S and from the coast to the Andean range front. In order to integrate these data into a unified catalog, the USGS National Earthquake Information Center developed procedures to use its real-time seismic monitoring system (Bulletin Hydra) to detect, associate, locate, and compute earthquake source parameters from these stations. As a first step in the process, the USGS has built a seismic catalog of all M3.5 or larger earthquakes for the time period of the main aftershock deployment from March 2010 to October 2010. The catalog includes earthquake locations, magnitudes (Ml, Mb, Mb_BB, Ms, Ms_BB, Ms_VX, Mc), associated phase readings and regional moment tensor solutions for most of the M4 or larger events. Also included in the catalog are teleseismic phases and amplitude measures and body-wave MT and CMT solutions for the larger events, typically M5.5 and larger. Tuning of automated detection and association parameters should allow a complete catalog of events to approximately M2.5 or larger for that dataset of more than 164 stations. We characterize the aftershock sequence in terms of magnitude, frequency, and location over time. Using the catalog locations and travel times as a starting point, we use double-difference techniques to investigate relative locations and earthquake clustering. In addition, phase data from candidate ground-truth events and modeling of surface waves can be used to calibrate the

  18. Earthquake location in transversely isotropic media with a tilted symmetry axis

    NASA Astrophysics Data System (ADS)

    Zhao, Aihua; Ding, Zhifeng

    2009-04-01

    The conventional intersection method for earthquake location in isotropic media is extended to the case of transversely isotropic media with a tilted symmetry axis (TTI media). The hypocenter is determined using its loci, which are calculated through a minimum travel time tree algorithm for ray tracing in TTI media. There are no restrictions on the structural complexity of the model or on the anisotropy strength of the medium. The location method is validated by its application to determine the hypocenter and origin time of an event in a complex TTI structure, in accordance with four hypotheses or study cases: (a) accurate model and arrival times, (b) perturbed model with randomly varied elastic parameters, (c) noisy arrival time data, and (d) incomplete set of observations from the seismic stations. Furthermore, several numerical tests demonstrate that the orientation of the symmetry axis has a significant effect on the hypocenter location when the seismic anisotropy is not very weak. Moreover, if the hypocentral determination is based on an isotropic reference model while the real medium is anisotropic, the resultant location errors can be considerable even though the anisotropy strength does not exceed 6.10%.

  19. Intensity, magnitude, location and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, Walter; Hough, Susan; Martin, Stacey; Bilham, Roger

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly

  1. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez, Capera A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap-resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap-resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental
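
    A compact illustration of the bootstrap idea used above for location uncertainty (a generic sketch, not the authors' decision-tree implementation): intensity observations are resampled with replacement, each resampled set is relocated, and confidence regions are read off the cloud of bootstrap locations. The function locate_from_intensities is a placeholder for any one of the three techniques.

        import numpy as np

        def bootstrap_locations(intensity_obs, locate_from_intensities, n_boot=1000, seed=0):
            """intensity_obs: list of (lat, lon, intensity) tuples.
            Returns an (n_boot, 2) array of bootstrap epicentre estimates."""
            rng = np.random.default_rng(seed)
            n = len(intensity_obs)
            locs = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, size=n)                   # resample with replacement
                sample = [intensity_obs[i] for i in idx]
                locs.append(locate_from_intensities(sample))       # (lat, lon) from one technique
            return np.array(locs)

        # The 68% confidence region then encloses 68% of the bootstrap locations,
        # e.g. read off the spatial density of the returned points, as in the abstract.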

  2. A Kinesthetic Demonstration for Locating Earthquake Epicenters

    NASA Astrophysics Data System (ADS)

    Keyantash, J.; Sperber, S.

    2005-12-01

    During Spring 2005, an inquiry-based curriculum for plate tectonics was developed for implementation in sixth-grade classrooms within the Los Angeles Unified School District (LAUSD). Two cohorts of LAUSD teachers received training and orientation to the plate tectonics unit during one week workshops in July 2005. However, during the training workshops, it was observed that there was considerable confusion among the teachers as to how the traditional "textbook" explanation of the time lag between P and S waves on a seismogram could possibly be used to determine the epicenter of an earthquake. One of the State of California science content standards for sixth grade students is that they understand how the epicenters of earthquakes are determined, so it was critical that the teachers themselves grasped the concept. In response to the adult learner difficulties, the classroom explanation of earthquake epicenter location was supplemented with an outdoor kinesthetic activity. Based upon the experience of the kinesthetic model, it was found that the hands-on model greatly cemented the teachers' understanding of the underlying theory. This paper details the steps of the kinesthetic demonstration for earthquake epicenter identification, as well as offering extended options for its classroom implementation.

  3. Revision of earthquake hypocentre locations in global bulletin data sets using source-specific station terms

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten

    2017-02-01

    Global earthquake locations are often associated with very large systematic travel-time residuals even for clear arrivals, especially for regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adopted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ˜20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs. Comparison with a high-resolution local catalogue reveals that

  4. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587-1996)

    NASA Astrophysics Data System (ADS)

    Beauval, Céline; Yepes, Hugo; Bakun, William H.; Egred, José; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-06-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower-magnitude but shallower and potentially more destructive earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (~2.5 million inhabitants). A total population of ~6 million inhabitants currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory are not available for periods earlier than 1990 (the beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied in order to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587-1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). The intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity >=VI) and 117 (Riobamba, 1797, Intensity >=III). The bootstrap resampling technique is coupled to the B&W method for deriving geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extension of the area delineating the intensity centre location at the 67 per cent confidence level (+/-1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching IX, X and XI degrees. Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding

  5. Improvements to Earthquake Location with a Fuzzy Logic Approach

    NASA Astrophysics Data System (ADS)

    Gökalp, Hüseyin

    2018-01-01

    In this study, improvements to the earthquake location method were investigated using the fuzzy logic approach proposed by Lin and Sanford (Bull Seismol Soc Am 91:82-93, 2001). The method has certain advantages compared to inverse methods in terms of eliminating the uncertainties of arrival times and reading errors. Adopting this approach, epicentral locations were determined from a fuzzy logic space that accounts for the uncertainties in the velocity models. To map the uncertainties in arrival times into the fuzzy logic space, a trapezoidal membership function was constructed directly from the travel-time difference between two stations for the P- and S-arrival times, rather than from P- and S-wave velocity models, thereby eliminating the need for detailed information on the velocity structure of the study area. The results showed that this method works most effectively when earthquakes occur away from the network or when the arrival-time data contain phase-reading errors. To determine the epicentral locations of the events, a forward-modeling, grid-search-like procedure was used, applying different logical operations (i.e., intersection, union, and their combination) within the fuzzy logic approach. Event locations were derived from the fuzzy logic outputs by searching over a gridded region. Defuzzifying only the grid points with a membership value of 1, obtained by normalizing the maximum fuzzy output values, yielded more reliable epicentral locations for the earthquakes than the other approaches. Throughout the process, the center-of-gravity method was used as the defuzzification operation.
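
    A small sketch of the trapezoidal membership function and grid intersection described above (my own illustrative construction, not the paper's code); the corner parameters and the choice of misfit quantity are assumptions.

        import numpy as np

        def trapezoid(x, a, b, c, d):
            """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps between."""
            x = np.asarray(x, dtype=float)
            up = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
            down = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
            return np.minimum(up, down)

        def grid_membership(predicted_dt, observed_dt, halfwidth=0.5, flat=0.2):
            """Membership of each grid node given one observed inter-station time
            difference and the differences predicted at the nodes."""
            a, b = observed_dt - halfwidth, observed_dt - flat
            c, d = observed_dt + flat, observed_dt + halfwidth
            return trapezoid(predicted_dt, a, b, c, d)

        # Combining several observations by fuzzy intersection (minimum) over the grid:
        # membership = np.minimum.reduce([grid_membership(pred, obs) for pred, obs in pairs])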

  6. Two-year survey comparing earthquake activity and injection-well locations in the Barnett Shale, Texas

    PubMed Central

    Frohlich, Cliff

    2012-01-01

    Between November 2009 and September 2011, temporary seismographs deployed under the EarthScope USArray program were situated on a 70-km grid covering the Barnett Shale in Texas, recording data that allowed sensing and locating regional earthquakes with magnitudes 1.5 and larger. I analyzed these data and located 67 earthquakes, more than eight times as many as reported by the National Earthquake Information Center. All 24 of the most reliably located epicenters occurred in eight groups within 3.2 km of one or more injection wells. These included wells near Dallas–Fort Worth and Cleburne, Texas, where earthquakes near injection wells were reported by the media in 2008 and 2009, as well as wells in six other locations, including several where no earthquakes have been reported previously. This suggests injection-triggered earthquakes are more common than is generally recognized. All the wells nearest to the earthquake groups reported maximum monthly injection rates exceeding 150,000 barrels of water per month (24,000 m3/mo) since October 2006. However, while 9 of 27 such wells in Johnson County were near earthquakes, elsewhere no earthquakes occurred near wells with similar injection rates. A plausible hypothesis to explain these observations is that injection only triggers earthquakes if injected fluids reach and relieve friction on a suitably oriented, nearby fault that is experiencing regional tectonic stress. Testing this hypothesis would require identifying geographic regions where there is interpreted subsurface structure information available to determine whether there are faults near seismically active and seismically quiescent injection wells. PMID:22869701

  7. Mexican Seismic Alert System's SAS-I algorithm review considering strong earthquakes felt in Mexico City since 1985

    NASA Astrophysics Data System (ADS)

    Cuellar Martinez, A.; Espinosa Aranda, J.; Suarez, G.; Ibarrola Alvarez, G.; Ramos Perez, S.; Camarillo Barranco, L.

    2013-05-01

    The Seismic Alert System of Mexico (SASMEX) uses three alert-activation algorithms that take into account the distance between the seismic sensing field station (FS) and the city to be alerted, and the forecast for earthquake early warning activation in the cities integrated into the system. For Mexico City, for example, the earthquakes that produced the highest accelerations originated on the Pacific Ocean coast, and the distance between this seismic region and the city favors the use of the algorithm called SAS-I. This algorithm, essentially unchanged since its introduction in 1991, uses the data generated by one or more FS from the P-wave detection until the S-wave detection plus a period equal to the time taken to detect these phases, i.e. twice the S-P time, denoted 2*(S-P). In this interval, the algorithm integrates the squared samples from the FS, which uses a triaxial accelerometer, to obtain two parameters: the amplitude and the growth rate measured up to the 2*(S-P) time. These parameters feed a magnitude classifier model built from time series of Guerrero coast earthquakes, referenced mainly to the Mb magnitude. The algorithm issues a Public Alert if the model predicts a strong earthquake and a Preventive Alert if it predicts a moderate one. The SAS-I algorithm has been operating for over 23 years in the subduction zone of the Pacific coast of Mexico, initially in Guerrero and later in Oaxaca, and since March 2012 in the Pacific seismic region covering the coasts of Jalisco, Colima, Michoacan, Guerrero and Oaxaca. Over this period it has issued 16 Public Alerts and 62 Preventive Alerts for Mexico City, whose soil conditions amplify earthquake damage, as occurred in September 1985. This work reviews the SAS-I algorithm and the possible alerts that it could generate from recordings of major earthquakes detected by FS or seismometers near the sources, originating on the Pacific Ocean coast, that have been felt in Mexico
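
    A toy computation of the two SAS-I-style parameters described above, amplitude and growth rate over the 2*(S-P) window (a schematic reading of the abstract, not the SASMEX implementation); the cumulative quadratic integral and the growth-rate definition are assumptions.

        import numpy as np

        def sas_window_parameters(accel_3c, dt, t_p, t_s):
            """accel_3c: (3, N) triaxial acceleration; t_p, t_s: P and S detection times (s).
            Integrates squared samples from t_p over a window of 2*(S-P) seconds and
            returns (amplitude, growth_rate) of the cumulative quadratic energy."""
            sp = t_s - t_p
            i0, i1 = int(t_p / dt), int((t_p + 2.0 * sp) / dt)
            energy = np.sum(accel_3c[:, i0:i1] ** 2, axis=0)        # quadratic samples
            cumulative = np.cumsum(energy) * dt                     # integrated energy
            amplitude = cumulative[-1]
            times = np.arange(cumulative.size) * dt
            growth_rate = np.polyfit(times, cumulative, 1)[0]       # slope over the window
            return amplitude, growth_rate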

  8. A phase coherence approach to identifying co-located earthquakes and tremor

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Ampuero, J.-P.

    2018-05-01

    We present and use a phase coherence approach to identify seismic signals that have similar path effects but different source time functions: co-located earthquakes and tremor. The method used is a phase coherence-based implementation of empirical matched field processing, modified to suit tremor analysis. It works by comparing the frequency-domain phases of waveforms generated by two sources recorded at multiple stations. We first cross-correlate the records of the two sources at a single station. If the sources are co-located, this cross-correlation eliminates the phases of the Green's function. It leaves the relative phases of the source time functions, which should be the same across all stations so long as the spatial extent of the sources is small compared with the seismic wavelength. We therefore search for cross-correlation phases that are consistent across stations as an indication of co-located sources. We also introduce a method to obtain relative locations between the two sources, based on back-projection of interstation phase coherence. We apply this technique to analyse two tremor-like signals that are thought to be composed of a number of earthquakes. First, we analyse a 20 s long seismic precursor to a M 3.9 earthquake in central Alaska. The analysis locates the precursor to within 2 km of the mainshock, and it identifies several bursts of energy, potentially foreshocks or groups of foreshocks, within the precursor. Second, we examine several minutes of volcanic tremor prior to an eruption at Redoubt Volcano. We confirm that the tremor source is located close to repeating earthquakes identified earlier in the tremor sequence. The amplitude of the tremor diminishes about 30 s before the eruption, but the phase coherence results suggest that the tremor may persist at some level through this final interval.
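
    A bare-bones version of the inter-station phase coherence measure described above (my simplification of the approach, not the authors' code): cross-correlate the two sources' records at each station in the frequency domain, keep only the phase, and average the unit phasors across stations. Names and the absence of any band limitation are assumptions.

        import numpy as np

        def interstation_phase_coherence(records_a, records_b):
            """records_a, records_b: lists of equal-length waveforms of source A and
            source B at the same stations. Returns coherence per frequency in [0, 1]."""
            phasors = []
            for wa, wb in zip(records_a, records_b):
                cross = np.fft.rfft(wa) * np.conj(np.fft.rfft(wb))       # single-station cross-spectrum
                phasors.append(cross / np.maximum(np.abs(cross), 1e-20)) # keep phase only
            # If the sources are co-located, the cross-spectral phase (the source time
            # function phase difference) is the same at every station, so the unit
            # phasors add coherently across the array.
            return np.abs(np.mean(phasors, axis=0))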

  9. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587–1996)

    USGS Publications Warehouse

    Beauval, Celine; Yepes, Hugo; Bakun, William H.; Egred, Jose; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-01-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower-magnitude but shallower and potentially more destructive earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (∼2.5 million inhabitants). A total population of ∼6 million inhabitants currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory are not available for periods earlier than 1990 (the beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied in order to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587–1996. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). The intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity ≥VI) and 117 (Riobamba, 1797, Intensity ≥III). The bootstrap resampling technique is coupled to the B&W method for deriving geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extension of the area delineating the intensity centre location at the 67 per cent confidence level (±1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching IX, X and XI degrees. Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding

  10. Fault structure and mechanics of the Hayward Fault, California from double-difference earthquake locations

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2002-01-01

    The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects with the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters of only a few tens of meters width, bounded by areas almost absent of seismic activity. During the interval from 1984 to 1998, when digital
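
    The double-difference idea used above can be summarized in one residual equation: for an event pair (i, j) observed at a common station k, the data are the differences between observed and predicted differential travel times. Below is a minimal sketch of that residual under a constant-velocity, straight-ray assumption (a placeholder, not hypoDD).

        import numpy as np

        def dd_residual(hypo_i, hypo_j, origin_i, origin_j, station, t_obs_i, t_obs_j, v=6.0):
            """Double-difference residual dr_k^ij = (t_i^k - t_j^k)_obs - (t_i^k - t_j^k)_calc
            for one station k, using straight rays in a constant velocity v (km/s)."""
            t_calc_i = origin_i + np.linalg.norm(np.subtract(station, hypo_i)) / v
            t_calc_j = origin_j + np.linalg.norm(np.subtract(station, hypo_j)) / v
            return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

        # The inversion then adjusts relative hypocentres to minimize these residuals,
        # which cancels path effects common to both events outside the source region.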

  11. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely the initial point) to low-residual points (referred to as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model but still lacks the ability to correctly deal with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci with satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all focal locus segments are then calculated with the minimum traveltime tree ray-tracing algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  12. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes for locating earthquakes are based on iterative linearized least-squares algorithms and rely on a priori knowledge of the propagation medium. When only a small number of observations is available, for instance for moderate events, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte Carlo continuous sampling, using Markov chains, generates models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel-time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure, among others) are computed and explicitly documented. This method decreases the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events of the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
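
    A minimal Metropolis sampler over hypocentral parameters with a fixed forward problem, to illustrate the kind of Bayesian exploration described above (a generic sketch, not the authors' joint structure-source code; the Gaussian likelihood, constant velocity, step sizes, and starting model are assumptions).

        import numpy as np

        def metropolis_locate(picks, stations, v=6.0, sigma=0.3, n_iter=20000, seed=0):
            """picks: array of arrival times (s); stations: (N, 3) coordinates (km).
            Samples the posterior of (x, y, z, t0) with random-walk Metropolis."""
            rng = np.random.default_rng(seed)

            def log_like(m):
                x, y, z, t0 = m
                pred = t0 + np.linalg.norm(stations - np.array([x, y, z]), axis=1) / v
                return -0.5 * np.sum(((picks - pred) / sigma) ** 2)

            m = np.array([0.0, 0.0, 10.0, 0.0])          # starting model (x, y, z, t0)
            step = np.array([2.0, 2.0, 2.0, 0.5])        # proposal standard deviations
            ll, samples = log_like(m), []
            for _ in range(n_iter):
                prop = m + rng.normal(0.0, step)
                ll_prop = log_like(prop)
                if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis acceptance rule
                    m, ll = prop, ll_prop
                samples.append(m.copy())
            return np.array(samples)                      # marginals give hypocentral uncertainties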

  13. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    NASA Astrophysics Data System (ADS)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap <<180°) and close to the seismic stations (0-100 km). With these local or near-regional networks (0°-5°), many seismological observatories still routinely locate earthquakes using 1-D velocity models. Moving towards 3-D location algorithms requires robust 3-D velocity models. This work takes advantage of seismic monitoring spanning more than 30 yr in the Pyrenean region. We investigate the influence on earthquake locations of a well-designed 3-D model with station corrections, including basin structure and the geometry of the Mohorovicic discontinuity. In the most favourable cases (GAP < 180° and distance to the first station lower than 15 km), results using 1-D velocity models are very similar to 3-D results. The horizontal accuracy in the 1-D case can be higher than in the 3-D case if lateral variations in the structure are not properly resolved. Depth is systematically better resolved in the 3-D model, even on the boundaries of the seismic network (GAP > 180° and distance to the first station higher than 15 km). Errors on velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic sources, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors on velocity models and on arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.

  14. Repeating Earthquakes on the Queen Charlotte Plate Boundary

    NASA Astrophysics Data System (ADS)

    Hayward, T. W.; Bostock, M. G.

    2015-12-01

    The Queen Charlotte Fault (QCF) is a major plate boundary located off the northwest coast of North America that has produced large earthquakes in 1949 (M8.1) and more recently in October, 2012 (M7.8). The 2012 event was dominated by thrusting despite the fact that plate motions at the boundary are nearly transcurrent. It is now widely believed that the plate boundary comprises the QCF (i.e., a dextral strike-slip fault) as well as an element of subduction of the Pacific Plate beneath the North American Plate. Repeating earthquakes and seismic tremor have been observed in the vicinity of the QCF; providing insight into the spatial and temporal characteristics of repeating earthquakes is the goal of this research. Due to poor station coverage and data quality, traditional methods of locating earthquakes are not applicable to these events. Instead, we have implemented an algorithm to locate local (i.e., < 100 km distance to epicenter) earthquakes using a single, three-component seismogram. This algorithm relies on the P-wave polarization and, through comparison with larger local events in the Geological Survey of Canada catalogue, is shown to yield epicentral locations accurate to within 5-10 km. A total of 24 unique families of repeating earthquakes has been identified, and 4 of these families have been located with high confidence. Their epicenters locate directly on the trace of the QCF and their depths are shallow (i.e., 5-15 km), consistent with the proposed depth of the QCF. Analysis of temporal recurrence leading up to the 2012 M7.8 event reveals a non-random pattern, with an approximately 15 day periodicity. Further analysis is planned to study whether this behaviour persists after the 2012 event and to gain insight into the effects of the 2012 event on the stress field and frictional properties of the plate boundary.

  15. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
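
    As a stylized example of combining independent ground-motion predictions at a user's location, the sketch below fuses Gaussian forecasts by precision weighting, a standard Bayesian result under independence assumptions; it only illustrates the idea of a unified forecast and is not the algorithm developed in the paper.

        import numpy as np

        def combine_forecasts(means, sigmas):
            """Combine independent Gaussian shaking forecasts (e.g. log PGA at one site).
            means, sigmas: per-algorithm predictions and their 1-sigma uncertainties.
            Returns the precision-weighted mean and its standard deviation."""
            means, sigmas = np.asarray(means, float), np.asarray(sigmas, float)
            w = 1.0 / sigmas**2                      # precision weights
            mean = np.sum(w * means) / np.sum(w)
            sigma = np.sqrt(1.0 / np.sum(w))
            return mean, sigma

        # Example: three algorithms predict log10(PGA) of -1.2, -1.0, -1.4 with different confidence.
        print(combine_forecasts([-1.2, -1.0, -1.4], [0.3, 0.2, 0.5]))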

  16. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR. PMID:24605060
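
    A toy illustration of grid-ID-based cloaking in the spirit of the approach above (not the paper's algorithm): users report only the ID of the grid cell they occupy, and the anonymizer expands a cell neighbourhood until at least K reported users are covered. The grid representation and square expansion strategy are assumptions.

        def cloak_region(user_cell, reported_cells, k=5):
            """user_cell: (row, col) grid cell of the requesting user.
            reported_cells: dict mapping (row, col) cell IDs to the number of users there.
            Grows a square of cells around user_cell until it covers at least k users."""
            r, c = user_cell
            radius = 0
            while True:
                cells = [(i, j) for i in range(r - radius, r + radius + 1)
                                for j in range(c - radius, c + radius + 1)]
                if sum(reported_cells.get(cell, 0) for cell in cells) >= k:
                    return cells                    # anonymous spatial region (ASR) as cell IDs
                radius += 1

        # Example: only per-cell counts are needed; exact coordinates never leave the clients.
        print(cloak_region((3, 3), {(3, 3): 1, (3, 4): 2, (2, 3): 1, (4, 4): 2}, k=5))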

  17. Estimating earthquake location and magnitude from seismic intensity data

    USGS Publications Warehouse

    Bakun, W.H.; Wentworth, C.M.

    1997-01-01

    Analysis of Modified Mercalli intensity (MMI) observations for a training set of 22 California earthquakes suggests a strategy for bounding the epicentral region and moment magnitude M from MMI observations only. We define an intensity magnitude MI that is calibrated to be equal in the mean to M. MI = mean(Mi), where Mi = (MMIi + 3.29 + 0.0206 * Δi)/1.68 and Δi is the epicentral distance (km) of observation MMIi. The epicentral region is bounded by contours of rms[MI] = rms(MI - Mi) - rms0(MI - Mi), where rms is the root mean square, rms0(MI - Mi) is the minimum rms over a grid of assumed epicenters, and empirical site corrections and a distance weighting function are used. Empirical contour values for bounding the epicenter location and empirical bounds for M estimated from MI appropriate for different levels of confidence and different quantities of intensity observations are tabulated. The epicentral region bounds and MI obtained for an independent test set of western California earthquakes are consistent with the instrumental epicenters and moment magnitudes of these earthquakes. The analysis strategy is particularly appropriate for the evaluation of pre-1900 earthquakes for which the only available data are a sparse set of intensity observations.
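
    A direct transcription of the intensity-magnitude relation quoted above into code, plus the grid-search misfit used to bound the epicentre. This is a sketch of the published formulas only: the grid setup and the placeholder distance function dist_fn are mine, and the empirical site corrections and distance weighting mentioned in the abstract are omitted.

        import numpy as np

        def intensity_magnitude(mmi, dist_km):
            """M_i = (MMI_i + 3.29 + 0.0206 * Delta_i) / 1.68 per observation,
            and M_I = mean(M_i) for an assumed epicentre."""
            mi = (np.asarray(mmi, float) + 3.29 + 0.0206 * np.asarray(dist_km, float)) / 1.68
            return mi.mean(), mi

        def rms_misfit_grid(mmi, site_lat, site_lon, trial_epicenters, dist_fn):
            """rms(M_I - M_i) at each trial epicentre; contours of rms - min(rms)
            bound the epicentral region (site corrections and weighting omitted)."""
            rms = []
            for lat, lon in trial_epicenters:
                d = dist_fn(site_lat, site_lon, lat, lon)      # epicentral distances (km)
                m_I, m_i = intensity_magnitude(mmi, d)
                rms.append(np.sqrt(np.mean((m_I - m_i) ** 2)))
            rms = np.array(rms)
            return rms - rms.min()                             # rms[M_I] relative to the best grid point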

  18. Precise relative locations for earthquakes in the northeast Pacific region

    DOE PAGES

    Cleveland, K. Michael; VanDeMark, Thomas F.; Ammon, Charles J.

    2015-10-09

    We report that double-difference methods applied to cross-correlation measured Rayleigh wave time shifts are an effective tool to improve epicentroid locations and relative origin time shifts in remote regions. We apply these methods to seismicity offshore of southwestern Canada and the U.S. Pacific Northwest, occurring along the boundaries of the Pacific and Juan de Fuca (including the Explorer Plate and Gorda Block) Plates. The Blanco, Mendocino, Revere-Dellwood, Nootka, and Sovanco fracture zones host the majority of this seismicity, largely consisting of strike-slip earthquakes. The Explorer, Juan de Fuca, and Gorda spreading ridges join these fracture zones and host normal faulting earthquakes. Our results show that at least the moderate-magnitude activity clusters along fault strike, supporting suggestions of large variations in seismic coupling along oceanic transform faults. Our improved relative locations corroborate earlier interpretations of the internal deformation in the Explorer and Gorda Plates. North of the Explorer Plate, improved locations support models that propose northern extension of the Revere-Dellwood fault. Relocations also support interpretations that favor multiple parallel active faults along the Blanco Transform Fault Zone. Seismicity of the western half of the Blanco appears more scattered and less collinear than the eastern half, possibly related to fault maturity. We use azimuthal variations in the Rayleigh wave cross-correlation amplitude to detect and model rupture directivity for a moderate size earthquake along the eastern Blanco Fault. Lastly, the observations constrain the seismogenic zone geometry and suggest a relatively narrow seismogenic zone width of 2 to 4 km.

  19. Co-located ionospheric and geomagnetic disturbances caused by great earthquakes

    NASA Astrophysics Data System (ADS)

    Hao, Yongqiang; Zhang, Donghe; Xiao, Zuo

    2016-07-01

    Although the primary energy disturbances in the ionosphere come from the Sun, oscillations of the Earth's surface due to a large earthquake will also couple with the atmosphere, and therefore the ionosphere, to generate so-called coseismic ionospheric disturbances (CIDs). In the cases of the 2008 Wenchuan and 2011 Tohoku earthquakes, infrasonic waves accompanying the propagation of seismic Rayleigh waves were observed in the ionosphere by a combination of techniques: total electron content, HF Doppler, and ground magnetometers. This is the first report to present CIDs recorded by different techniques at co-located sites and profiled with regard to changes of both ionospheric plasma and current (geomagnetic field) simultaneously. Comparison between the oceanic (2011 Tohoku) and inland (2008 Wenchuan) earthquakes revealed that the main directional lobe of the latter case is more distinct and is perpendicular to the direction of the fault rupture. We argue that the different fault slip (inland or submarine) may affect the way the lithosphere couples with the atmosphere. Zhao, B., and Y. Hao (2015), Ionospheric and geomagnetic disturbances caused by the 2008 Wenchuan earthquake: A revisit, J. Geophys. Res., doi:10.1002/2015JA021035. Hao, Y. Q., et al. (2013), Teleseismic magnetic effects (TMDs) of 2011 Tohoku earthquake, J. Geophys. Res., doi:10.1002/jgra.50326. Hao, Y. Q., et al. (2012), Multi-instrument observation on co-seismic ionospheric effects after great Tohoku earthquake, J. Geophys. Res., doi:10.1029/2011JA017036.

  20. Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes

    USGS Publications Warehouse

    Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.

    2004-01-01

    We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are within about 50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.

  1. Assessing the location and magnitude of the 20 October 1870 Charlevoix, Quebec, earthquake

    USGS Publications Warehouse

    Ebel, John E.; Dupuy, Megan; Bakun, William H.

    2013-01-01

    The Charlevoix, Quebec, earthquake of 20 October 1870 caused damage to several towns in Quebec and was felt throughout much of southeastern Canada and along the U.S. Atlantic seaboard from Maine to Maryland. Site‐specific damage and felt reports from Canadian and U.S. cities and towns were used in analyses of the location and magnitude of the earthquake. The macroseismic center of the earthquake was very close to Baie‐St‐Paul, where the greatest damage was reported, and the intensity magnitude MI was found to be 5.8, with a 95% probability range of 5.5–6.0. After corrections for epicentral‐distance differences are applied, the modified Mercalli intensity (MMI) data for the 1870 earthquake and for the moment magnitude M 6.2 Charlevoix earthquake of 1925 at common sites show that on average, the MMI readings are about 0.8 intensity units smaller for the 1870 earthquake than for the 1925 earthquake, suggesting that the 1870 earthquake was MI 5.7. A similar comparison of the MMI data for the 1870 earthquake with the corresponding data for the M 5.9 1988 Saguenay event suggests that the 1870 earthquake was MI 6.0. These analyses all suggest that the magnitude of the 1870 Charlevoix earthquake is between MI 5.5 and MI 6.0, with a best estimate of MI 5.8.

  2. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

    The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts.Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.

  3. Testing continuous earthquake detection and location in Alentejo (South Portugal) by waveform coherency analysis

    NASA Astrophysics Data System (ADS)

    Matos, Catarina; Grigoli, Francesco; Cesca, Simone; Custódio, Susana

    2015-04-01

    In the last decade a permanent seismic network of 30 broadband stations, complemented by dense temporary deployments, covered Portugal. This extraordinary network coverage now enables the computation of a high-resolution image of the seismicity of Portugal, which in turn will shed light on the seismotectonics of Portugal. The large data volumes available cannot be analyzed by traditional time-consuming manual location procedures. In this presentation we show first results on the automatic detection and location of earthquakes that occurred in a selected region in the south of Portugal. Our main goal is to implement an automatic earthquake detection and location routine in order to have a tool to quickly process large data sets, while at the same time detecting low magnitude earthquakes (i.e., lowering the detection threshold). We present a modified version of the automatic seismic event location by waveform coherency analysis developed by Grigoli et al. (2013, 2014), designed to perform earthquake detections and locations in continuous data. The event detection is performed by continuously computing the short-term-average/long-term-average of two different characteristic functions (CFs). For the P phases we used a CF based on the vertical energy trace, while for S phases we used a CF based on the maximum eigenvalue of the instantaneous covariance matrix (Vidale 1991). Seismic event detection and location is obtained by performing waveform coherence analysis scanning different hypocentral coordinates. We apply this technique to earthquakes in the Alentejo region (South Portugal), taking advantage of a small-aperture seismic network installed in the south of Portugal for two years (2010 - 2011) during the DOCTAR experiment. In addition to the good network coverage, the Alentejo region was chosen for its simple tectonic setting and also because the relationship between seismicity, tectonics and local lithospheric structure is intriguing and still poorly understood. Inside
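
    As a rough illustration of the detection step described above, the sketch below applies an STA/LTA trigger to a simple characteristic function (the squared trace). The window lengths, the threshold, and the squared-trace CF are illustrative simplifications of the waveform-coherency workflow of Grigoli et al., not the authors' implementation.

import numpy as np

def sta_lta(cf, fs, sta_win=1.0, lta_win=10.0):
    """STA/LTA ratio of a characteristic function; element k of the returned
    array corresponds to windows ending at trace sample k + int(lta_win * fs)."""
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    csum = np.cumsum(cf, dtype=float)
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # short-term average
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # long-term average
    m = len(lta)                                   # align both at the trace end
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)

# toy example: background noise with a burst of energy starting at sample 3000
fs = 100.0
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, int(60 * fs))
trace[3000:3200] += rng.normal(0.0, 1.0, 200)
ratio = sta_lta(trace ** 2, fs)                    # P-phase CF: vertical energy
triggers = np.where(ratio > 5.0)[0]                # threshold value is an assumption
if triggers.size:
    print("trigger near trace sample:", triggers[0] + int(10.0 * fs))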

  4. An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft

    NASA Technical Reports Server (NTRS)

    Bauer, Jeffrey E.; Teets, Edward

    1997-01-01

    An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
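
    The sketch below illustrates the kind of wind-drift impact estimate described in this abstract: the vehicle is assumed to fall at a constant descent rate while being advected by a layered wind profile. The constant-rate descent, the layer structure, and all parameter values are simplifying assumptions, not the NASA Dryden implementation.

# Minimal sketch of a wind-drift impact estimate. The constant descent rate,
# the layered wind profile, and the numbers below are illustrative assumptions.

def impact_location(north_m, east_m, altitude_agl_m, descent_rate_mps, wind_layers):
    """Predict the (north, east) impact point in a local metric frame.

    wind_layers: list of (layer_thickness_m, wind_east_mps, wind_north_mps),
    ordered from the vehicle's current altitude downward.
    """
    x, y = east_m, north_m
    remaining = altitude_agl_m
    for thickness, w_e, w_n in wind_layers:
        dz = min(thickness, remaining)
        dt = dz / descent_rate_mps          # time spent falling through this layer
        x += w_e * dt                       # drift east with the layer wind
        y += w_n * dt                       # drift north with the layer wind
        remaining -= dz
        if remaining <= 0:
            break
    return y, x                             # predicted (north, east) impact point

# example: termination at 1500 m AGL, 10 m/s descent, two wind layers
print(impact_location(0.0, 0.0, 1500.0, 10.0,
                      [(1000.0, 8.0, 2.0), (500.0, 4.0, 1.0)]))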

  5. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with epicentral distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors, Journal compilation © 2006 RAS.

  6. Earthquake Relocation in the Middle East with Geodetically-Calibrated Events

    NASA Astrophysics Data System (ADS)

    Brengman, C.; Barnhart, W. D.

    2017-12-01

    Regional and global earthquake catalogs in tectonically active regions commonly contain mislocated earthquakes that impede efforts to address first order characteristics of seismogenic strain release and to monitor anthropogenic seismic events through the Comprehensive Nuclear-Test-Ban Treaty. Earthquake mislocations are particularly limiting in the plate boundary zone between the Arabia and Eurasia plates of Iran, Pakistan, and Turkey where earthquakes are commonly mislocated by 20+ kilometers and hypocentral depths are virtually unconstrained. Here, we present preliminary efforts to incorporate calibrated earthquake locations derived from Interferometric Synthetic Aperture Radar (InSAR) observations into a relocated catalog of seismicity in the Middle East. We use InSAR observations of co-seismic deformation to determine the locations, geometries, and slip distributions of small to moderate magnitude (M4.8+) crustal earthquakes. We incorporate this catalog of calibrated event locations, along with other seismologically-calibrated earthquake locations, as "priors" into a fully Bayesian multi-event relocation algorithm that relocates all teleseismically and regionally recorded earthquakes over the time span 1970-2017, including calibrated and uncalibrated events. Our relocations are conducted using cataloged phase picks and BayesLoc. We present a suite of sensitivity tests for the time span of 2003-2014 to explore the impacts of our input parameters (i.e., how a point source is defined from a finite fault inversion) on the behavior of the event relocations, potential improvements to depth estimates, the ability of the relocation to recover locations outside of the time span in which there are InSAR observations, and the degree to which our relocations can recover "known" calibrated earthquake locations that are not explicitly included as a-priori constraints. Additionally, we present a systematic comparison of earthquake relocations derived from phase picks of two

  7. Earthquake Forecasting Methodology Catalogue - A collection and comparison of the state-of-the-art in earthquake forecasting and prediction methodologies

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2015-04-01

    Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The methods have been categorised into time-independent, time-dependent and hybrid methods, the last group comprising methods that use additional data beyond historical earthquake statistics. Such a categorization distinguishes purely statistical approaches, for which historical earthquake data are the only direct data source, from algorithms that incorporate further information, e.g. spatial data on fault distributions, or that incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods which can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g. for the determination of the completeness magnitude, or whether the modified Omori law was used or not. Target temporal scales are identified as well as the publication history. All these different aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state-of-the-art.

  8. Multiple-Threshold Event Detection and Other Enhancements to the Virtual Seismologist (VS) Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.

    2009-12-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes’ theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event to reduce false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks, as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW

  9. Repeating Earthquakes Following an Mw 4.4 Earthquake Near Luther, Oklahoma

    NASA Astrophysics Data System (ADS)

    Clements, T.; Keranen, K. M.; Savage, H. M.

    2015-12-01

    An Mw 4.4 earthquake on April 16, 2013 near Luther, OK was one of the earliest M4+ earthquakes in central Oklahoma, following the Prague sequence in 2011. A network of four local broadband seismometers deployed within a day of the Mw 4.4 event, along with six Oklahoma netquake stations, recorded more than 500 aftershocks in the two weeks following the Luther earthquake. Here we use HypoDD (Waldhauser & Ellsworth, 2000) and waveform cross-correlation to obtain precise aftershock locations. The location uncertainty, calculated using the SVD method in HypoDD, is ~15 m horizontally and ~35 m vertically. The earthquakes define a near vertical, NE-SW striking fault plane. Events occur at depths from 2 km to 3.5 km within the granitic basement, with a small fraction of events shallower, near the sediment-basement interface. Earthquakes occur within a zone of ~200 meters thickness on either side of the best-fitting fault surface. We use an equivalency class algorithm to identify clusters of repeating events, defined as event pairs with median three-component correlation > 0.97 across common stations (Aster & Scott, 1993). Repeating events occur as doublets of only two events in over 50% of cases; overall, 41% of earthquakes recorded occur as repeating events. The recurrence intervals for the repeating events range from minutes to days, with common recurrence intervals of less than two minutes. While clusters occur in tight dimensions, commonly of 80 m x 200 m, aftershocks occur in 3 distinct ~2km x 2km-sized patches along the fault. Our analysis suggests that with rapidly deployed local arrays, the plethora of ~Mw 4 earthquakes occurring in Oklahoma and Southern Kansas can be used to investigate the earthquake rupture process and the role of damage zones.
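
    The sketch below illustrates the equivalence-class grouping of repeating events described above: event pairs whose median three-component correlation exceeds 0.97 are linked, and connected components define the clusters. The pair list is a placeholder for real cross-correlation measurements, and the union-find bookkeeping is just one convenient way to build the equivalence classes.

# Minimal sketch of equivalence-class clustering of repeating events: pairs
# whose median correlation exceeds 0.97 are linked, and connected components
# define the clusters. The example pair list is a placeholder for real
# cross-correlation output.

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def union(parent, i, j):
    parent[find(parent, i)] = find(parent, j)

def repeating_clusters(n_events, pair_correlations, threshold=0.97):
    parent = list(range(n_events))
    for (i, j), cc in pair_correlations.items():
        if cc > threshold:
            union(parent, i, j)
    clusters = {}
    for i in range(n_events):
        clusters.setdefault(find(parent, i), []).append(i)
    # keep only groups with at least two members (doublets or larger)
    return [members for members in clusters.values() if len(members) > 1]

# toy example: 5 events, two of the pairs exceed the threshold
pairs = {(0, 1): 0.99, (1, 2): 0.98, (3, 4): 0.90}
print(repeating_clusters(5, pairs))   # [[0, 1, 2]]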

  10. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of an earthquake's onset time is essential for correctly determining its location and the other parameters used to build seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely detected due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm uses a denoising filter to smooth the background noise: the MLoG mask is employed to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time for micro-earthquakes accurately at SNRs as low as -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
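
    A rough sketch of the two ingredients described above follows: a 1-D Laplacian-of-Gaussian kernel filters the rectified trace, and a dual-threshold comparator walks back from the first strong exceedance to a lower threshold to pick the onset. The kernel width, thresholds, and noise normalization are illustrative assumptions, not the published MLoG parameters.

import numpy as np

def log_kernel(sigma, half_width=None):
    """1-D Laplacian-of-Gaussian (second derivative of a Gaussian) kernel."""
    half_width = half_width or int(4 * sigma)
    t = np.arange(-half_width, half_width + 1, dtype=float)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    return (t ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g

def dual_threshold_onset(trace, sigma=20.0, high=5.0, low=2.0):
    """Pick an onset: filter the rectified trace with the LoG kernel, then find
    the first sample above `high` and walk back to where it drops below `low`."""
    cf = np.abs(np.convolve(np.abs(trace), log_kernel(sigma), mode="same"))
    noise = np.median(cf[:len(cf) // 5]) + 1e-12   # assume the first 20% is noise
    cf /= noise
    above = np.where(cf > high)[0]
    if above.size == 0:
        return None                                # no detection
    k = above[0]
    while k > 0 and cf[k] > low:
        k -= 1
    return k

# toy example: background noise followed by an emergent arrival at sample 3000
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 0.1, 6000)
trace[3000:] += rng.normal(0.0, 1.0, 3000)
print("picked onset near sample:", dual_threshold_onset(trace))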

  11. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake dynamics from ground motion records based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and the elliptical patches; and two friction parameters, the slip weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanisms, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as well as possible. We applied the method to the 2012 Mw6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. Some parameters found for the Zumpango earthquake are: Δτ = 30.2±6.2 MPa, Er = 0.68±0.36×10^15 J, G = 1.74±0.44×10^15 J, η = 0.27±0.11, Vr/Vs = 0.52±0.09 and Mw = 6.64±0.07; for the stress drop

  12. Seismicity in 2010 and major earthquakes recorded and located in Costa Rica from 1983 until 2012, by the local OVSICORI-UNA seismic network

    NASA Astrophysics Data System (ADS)

    Ronnie, Q.; Segura, J.; Burgoa, B.; Jimenez, W.; McNally, K. C.

    2013-05-01

    This work is the result of the analysis of existing information in the earthquake database of the Observatorio Sismológico y Vulcanológico de Costa Rica, Universidad Nacional (OVSICORI-UNA), and seeks disclosure of basic seismological information recorded and processed in 2010. In this year there was a transition between the software used to record, store and locate earthquakes. During the first three months of 2010, we used Earthworm (http://folkworm.ceri.memphis.edu/ew-doc), SEISAN (Haskov and Ottemoller, 1999) and Hypocenter (Lienert and Haskov, 1995) to capture, store and locate the earthquakes, respectively; in April 2010, ANTELOPE (http://www.brtt.com/software.html) started to be used for recording and storing, with GENLOC (Fan et al., 2006) and LOCSAT (Bratt and Bache, 1988) used to locate earthquakes. GENLOC was used for local events and LOCSAT for regional and distant earthquakes. The local earthquakes were located using the 1D velocity model of Quintero and Kissling (2001), and for regional and distant earthquakes IASPEI91 (Kennett and Engdahl, 1991) was used. All the 2010 events shown in this work were rechecked by the authors. We located 3903 earthquakes in and around Costa Rica, and 746 regional and distant seismic events were recorded (see Figure 1). In this work we also give a summary of major earthquakes recorded and located by the OVSICORI-UNA network between 1983 and 2012. Seismicity recorded by OVSICORI-UNA network in 2010

  13. Twitter earthquake detection: Earthquake monitoring in a social world

    USGS Publications Warehouse

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.

  14. hypoDD-A Program to Compute Double-Difference Hypocenter Locations

    USGS Publications Warehouse

    Waldhauser, Felix

    2001-01-01

    HypoDD is a Fortran computer program package for relocating earthquakes with the double-difference algorithm of Waldhauser and Ellsworth (2000). This document provides a brief introduction to how to run and use the programs ph2dt and hypoDD to compute double-difference (DD) hypocenter locations. It gives a short overview of the DD technique, discusses the data preprocessing using ph2dt, and leads through the earthquake relocation process using hypoDD. The appendices include the reference manuals for the two programs and a short description of auxiliary programs and example data. Some minor subroutines are presently in the c language, and future releases will be in c. Earthquake location algorithms are usually based on some form of Geiger’s method, the linearization of the travel time equation in a first order Taylor series that relates the difference between the observed and predicted travel time to unknown adjustments in the hypocentral coordinates through the partial derivatives of travel time with respect to the unknowns. Earthquakes can be located individually with this algorithm, or jointly when other unknowns link together the solutions to individual earthquakes, such as station corrections in the joint hypocenter determination (JHD) method, or the earth model in seismic tomography. The DD technique (described in detail in Waldhauser and Ellsworth, 2000) takes advantage of the fact that if the hypocentral separation between two earthquakes is small compared to the event-station distance and the scale length of velocity heterogeneity, then the ray paths between the source region and a common station are similar along almost the entire ray path (Fréchet, 1985; Got et al., 1994). In this case, the difference in travel times for two events observed at one station can be attributed to the spatial offset between the events with high accuracy. DD equations are built by differencing Geiger’s equation for earthquake location. In this way, the residual between
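
    As a sketch of the core relations (omitting the weighting and damping used in practice), the double-difference residual for an event pair i, j observed at station k, and its linearized dependence on the hypocentral perturbations, can be written in the notation of Waldhauser and Ellsworth (2000) as:

% double-difference residual and its linearization in the hypocentral
% perturbations \Delta\mathbf{m} = (\Delta x, \Delta y, \Delta z, \Delta\tau)
dr_k^{ij} = \left(t_k^{i}-t_k^{j}\right)^{\mathrm{obs}} - \left(t_k^{i}-t_k^{j}\right)^{\mathrm{cal}},
\qquad
\frac{\partial t_k^{i}}{\partial \mathbf{m}^{i}}\,\Delta\mathbf{m}^{i}
 - \frac{\partial t_k^{j}}{\partial \mathbf{m}^{j}}\,\Delta\mathbf{m}^{j} = dr_k^{ij}.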

  15. Genetic algorithm for TEC seismo-ionospheric anomalies detection around the time of the Solomon (Mw = 8.0) earthquake of 06 February 2013

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-08-01

    On 6 February 2013, at 12:12:27 local time (01:12:27 UTC) a seismic event registering Mw 8.0 struck the Solomon Islands, located at the boundaries of the Australian and Pacific tectonic plates. Time series prediction is an important and widely studied topic in the research of earthquake precursors. This paper describes a new computational intelligence approach to detect unusual variations of the total electron content (TEC), i.e. seismo-ionospheric anomalies, induced by the powerful Solomon earthquake using a genetic algorithm (GA). The GA detected a considerable number of anomalous occurrences on the earthquake day and also 7 and 8 days prior to the earthquake, in a period of high geomagnetic activity. In this study, the TEC anomalies detected using the proposed method are also compared to those obtained by applying the mean, median, wavelet, Kalman filter, ARIMA, neural network and support vector machine methods. The agreement among the final results of all eight methods is a convincing indication of the efficiency of the GA method. It indicates that GA can be an appropriate non-parametric tool for anomaly detection in nonlinear time series showing seismo-ionospheric precursor variations.

  16. Determining on-fault earthquake magnitude distributions from integer programming

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2018-02-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.

  17. Determining on-fault earthquake magnitude distributions from integer programming

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2018-01-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
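
    The sketch below mimics the binary-integer formulation described in these two records on a toy problem, using the open-source PuLP modeler with its default CBC solver: binary variables place each synthetic earthquake on at most one fault, slip-rate bounds act as hard constraints, and the objective minimizes the total slip-rate misfit. The fault table, catalog size, and the magnitude-to-slip and magnitude-to-length scalings are illustrative placeholders, not the values used by Geist and Parsons.

import math
import random
import pulp

# Toy fault table: name -> (length_km, target_slip_rate_mm_yr, rate_uncertainty_mm_yr).
# All scalings and values below are illustrative placeholders.
FAULTS = {"A": (300.0, 20.0, 5.0), "B": (80.0, 5.0, 2.0)}
DURATION_YR = 4000.0
random.seed(1)

def gr_magnitude(mmin=6.0, mmax=8.0, b=1.0):
    """Sample one magnitude from a truncated Gutenberg-Richter distribution."""
    while True:
        m = mmin - math.log10(1.0 - random.random()) / b
        if m <= mmax:
            return m

catalog = [gr_magnitude() for _ in range(150)]
slip_m = lambda m: 10.0 ** (0.5 * m - 3.0)    # placeholder slip-from-magnitude scaling (m)
len_km = lambda m: 10.0 ** (0.6 * m - 2.9)    # placeholder rupture-length scaling (km)

prob = pulp.LpProblem("on_fault_magnitude_distributions", pulp.LpMinimize)

# binary placement variables; an event may only sit on a fault long enough to hold it
x = {(e, f): pulp.LpVariable(f"x_{e}_{f}", cat="Binary")
     for e, m in enumerate(catalog) for f in FAULTS if len_km(m) <= FAULTS[f][0]}

for e in range(len(catalog)):                 # each event is placed on at most one fault
    evars = [x[e, f] for f in FAULTS if (e, f) in x]
    if evars:
        prob += pulp.lpSum(evars) <= 1

slack = {}
for f, (length, target, unc) in FAULTS.items():
    rate = pulp.lpSum(slip_m(catalog[e]) * 1000.0 / DURATION_YR * x[e, f]
                      for e in range(len(catalog)) if (e, f) in x)
    prob += rate <= target + unc              # explicit bounds from slip-rate uncertainty
    prob += rate >= target - unc
    slack[f] = pulp.LpVariable(f"miss_{f}", lowBound=0.0)
    prob += slack[f] >= rate - target         # slack measures |rate - target|
    prob += slack[f] >= target - rate

prob += pulp.lpSum(slack.values())            # objective: minimize total slip-rate misfit
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for f in FAULTS:
    mags = [catalog[e] for e in range(len(catalog)) if (e, f) in x and x[e, f].value() > 0.5]
    print(f"fault {f}: {len(mags)} events, misfit {slack[f].value():.2f} mm/yr")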

  18. An improved data integration algorithm to constrain the 3D displacement field induced by fast deformation phenomena tested on the Napa Valley earthquake

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore

    2017-12-01

    In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem based on Bayes' theory. We exploit the outcomes from SAR Interferometry (InSAR), Global Navigation Satellite System (GNSS) and Multiple Aperture Interferometry (MAI) to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, since the algorithm is computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and then improve the GNSS data interpolation. This approach makes the method independent of other information such as in-situ investigations, tectonic studies or knowledge of the data covariance matrix. We applied the method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.

  19. Robust method to detect and locate local earthquakes by means of amplitude measurements.

    NASA Astrophysics Data System (ADS)

    del Puy Papí Isaba, María; Brückl, Ewald

    2016-04-01

    In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic
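
    A rough sketch of the back-projection idea follows: maximum resultant ground velocities at each station are back-projected to every grid point with an empirical amplitude-distance relation, the minimum pseudo-magnitude over stations is kept at each grid point, and its spatial maximum marks the likely epicenter. The attenuation coefficients and station amplitudes below are illustrative placeholders, not the empirical model fitted by the authors.

import numpy as np

# Minimal sketch: back-project peak ground velocities to a grid with a
# placeholder amplitude-distance relation, keep the MINIMUM pseudo-magnitude
# over stations at each node, and take its spatial maximum as the epicenter.
A_COEF, B_COEF, C_COEF = 1.0, 1.5, 0.002   # placeholder: log10(v) = a*M - b*log10(r) - c*r

def pseudo_magnitude(peak_velocity, hypo_dist_km):
    """Invert the placeholder amplitude-distance relation for magnitude."""
    return (np.log10(peak_velocity) + B_COEF * np.log10(hypo_dist_km)
            + C_COEF * hypo_dist_km) / A_COEF

def min_pseudo_magnitude_map(stations_xy, peak_velocities, grid_x, grid_y, depth_km=5.0):
    """Minimum pseudo-magnitude over all stations at every grid point."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pm_min = np.full(gx.shape, np.inf)
    for (sx, sy), v in zip(stations_xy, peak_velocities):
        r = np.sqrt((gx - sx) ** 2 + (gy - sy) ** 2 + depth_km ** 2)
        pm_min = np.minimum(pm_min, pseudo_magnitude(v, r))
    return pm_min

# toy example: four stations around an event near the middle of the network
stations = [(0.0, 0.0), (0.0, 40.0), (40.0, 0.0), (40.0, 40.0)]   # km
peaks = [3e-4, 2.5e-4, 2.8e-4, 2.6e-4]                            # m/s, placeholders
grid = np.linspace(0.0, 40.0, 41)
pm = min_pseudo_magnitude_map(stations, peaks, grid, grid)
j, i = np.unravel_index(np.argmax(pm), pm.shape)
print("epicenter estimate (x, y) in km:", grid[i], grid[j])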

  20. Prioritizing earthquake and tsunami alerting efforts

    NASA Astrophysics Data System (ADS)

    Allen, R. M.; Allen, S.; Aranha, M. A.; Chung, A. I.; Hellweg, M.; Henson, I. H.; Melgar, D.; Neuhauser, D. S.; Nof, R. N.; Strauss, J. A.

    2015-12-01

    The timeline of hazards associated with earthquakes ranges from seconds for the strong shaking at the epicenter, to minutes for strong shaking at more distant locations in big quakes, to tens of minutes for a local tsunami. Earthquake and tsunami warning systems must therefore include very fast initial alerts, while also taking advantage of available time in bigger and tsunami-generating quakes. At the UC Berkeley Seismological Laboratory we are developing a suite of algorithms to provide the fullest possible information about earthquake shaking and tsunami inundation from seconds to minutes after a quake. The E-larmS algorithm uses the P-wave to rapidly detect an earthquake and issue a warning. It is currently issuing alerts to test users in as little as 3 sec after the origin time. Development of a new waveform detector may lead to even faster alerts. G-larmS uses permanent deformation estimates from GNSS stations to estimate the geometry and extent of rupture underway, providing more accurate ground shaking estimates in big (M>~7) earthquakes. It performed well in the M6.0 2014 Napa earthquake. T-larmS is a new algorithm designed to extend alert capabilities to tsunami inundation. Rapid estimates of source characteristics for subduction zone events can be used to warn not only of the shaking hazard, but also of the local tsunami inundation hazard. These algorithms are being developed, implemented and tested with a focus on the western US, but are also now being tested in other parts of the world including Israel, Turkey, Korea and Chile. Beta users in the Bay Area are receiving the alerts and beginning to implement automated actions. They also provide feedback on user needs, which has led to the development of the MyEEW smartphone app. This app allows beta users to receive the alerts on their cell phones. All these efforts feed into our ongoing assessment of directions and priorities for future development and implementation efforts.

  1. Earthquakes along the Azores-Iberia plate boundary revisited

    NASA Astrophysics Data System (ADS)

    Batlló, Josep; Matos, Catarina; Torres, Ricardo; Cruz, Jorge; Custódio, Susana

    2017-04-01

    been consigned in the resulting catalogue. Earthquakes were re-located using both a 1D velocity structure and a linear inversion procedure (Hypocenter) and using a 3D structure developed for the region and a non-linear inversion algorithm (NonLinLoc). The results are interpreted in light of the most recent knowledge of geological structures, precise earthquake locations obtained for the most recent decades, which identify belts of preferential clustering of earthquakes, focal mechanisms and gravity anomalies.

  2. Offline Performance of the Filter Bank EEW Algorithm in the 2014 M6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2014-12-01

    Medium-size events like the M6.0 South Napa earthquake are very challenging for EEW: the damage such events produce can be severe, but it is generally confined to relatively small zones around the epicenter and the shaking duration is short. This leaves a very short window for timely EEW alerts. Algorithms that wait for several stations to trigger before sending out EEW alerts are typically not fast enough for these kinds of events because their blind zone (the zone where strong ground motions start before the warnings arrive) typically covers all or most of the area that experiences strong ground motions. At the same time, single station algorithms are often too unreliable to provide useful alerts. The filter bank EEW algorithm is a new algorithm that is designed to provide maximally accurate and precise earthquake parameter estimates with minimum data input, with the goal of producing reliable EEW alerts when only a very small number of stations have been reached by the P-wave. It combines the strengths of single station and network based algorithms in that it starts parameter estimates as soon as 0.5 seconds of data are available from the first station, but then perpetually incorporates additional data from the same or from any number of other stations. The algorithm analyzes the time dependent frequency content of real time waveforms with a filter bank. It then uses an extensive training data set to find earthquake records from the past that have had similar frequency content at a given time since the P-wave onset. The source parameters of the most similar events are used to parameterize a likelihood function for the source parameters of the ongoing event, which can then be maximized to find the most likely parameter estimates. Our preliminary results show that the filter bank EEW algorithm correctly estimated the magnitude of the South Napa earthquake to be ~M6 with only 1 second's worth of data at the nearest station to the epicenter. This estimate is then
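
    The sketch below illustrates the filter-bank idea on a toy problem: the first second of a P-wave is summarized by its peak amplitude in several frequency bands, and the magnitude is estimated from the most similar records in a training set. A simple nearest-neighbour average stands in for the likelihood parameterization described in the abstract, and the bands, window length, and synthetic training data are illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfilt

# Minimal sketch: band-limited peak amplitudes of the first second of data
# form a feature vector; the magnitudes of the k most similar training events
# give the estimate. Bands, k, and the synthetic training set are assumptions.
FS = 100.0
BANDS = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0), (8.0, 16.0)]

def filter_bank_features(trace, fs=FS):
    """Log peak absolute amplitude of the trace in each frequency band."""
    feats = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        feats.append(np.log10(np.max(np.abs(sosfilt(sos, trace))) + 1e-12))
    return np.array(feats)

def knn_magnitude(features, train_features, train_magnitudes, k=5):
    """Average magnitude of the k training events with the most similar features."""
    d = np.linalg.norm(train_features - features, axis=1)
    return float(np.mean(train_magnitudes[np.argsort(d)[:k]]))

# toy training set: random feature vectors with magnitudes (placeholders)
rng = np.random.default_rng(0)
train_X = rng.normal(size=(200, len(BANDS)))
train_M = rng.uniform(3.0, 7.0, size=200)

onset_window = rng.normal(size=int(1.0 * FS))     # first 1 s after the P onset
print("estimated magnitude:", knn_magnitude(filter_bank_features(onset_window),
                                            train_X, train_M))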

  3. Precise hypocenter locations of midcrustal low-frequency earthquakes beneath Mt. Fuji, Japan

    USGS Publications Warehouse

    Nakamichi, H.; Ukawa, M.; Sakai, S.

    2004-01-01

    Midcrustal low-frequency earthquakes (MLFs) have been observed at seismic stations around Mt. Fuji, Japan. In September - December 2000 and April - May 2001, abnormally high numbers of MLFs occurred. We located hypocenters for the 80 MLFs during 1998-2003 by using the hypoDD earthquake location program (Waldhauser and Ellsworth, 2000). The MLF hypocenters define an ellipsoidal volume some 5 km in diameter ranging from 11 to 16 km in focal depth. This volume is centered 3 km northeast of the summit and its long axis is directed NW-SE. The direction of the axis coincides with the major axis of tectonic compression around Mt. Fuji. The center of the MLF epicenters gradually migrated upward and 2-3 km from southeast to northwest during 1998-2001. We interpret that the hypocentral migration of MLFs reflects magma movement associated with a NW-SE oriented dike beneath Mt. Fuji. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.

  4. Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations

    NASA Astrophysics Data System (ADS)

    Dokuz, A. S.; Celik, M.

    2017-11-01

    Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensionality, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.

  5. An efficient algorithm for double-difference tomography and location in heterogeneous media, with an application to the Kilauea volcano

    USGS Publications Warehouse

    Monteiller, V.; Got, J.-L.; Virieux, J.; Okubo, P.

    2005-01-01

    Improving our understanding of crustal processes requires a better knowledge of the geometry and the position of geological bodies. In this study we have designed a method based upon double-difference relocation and tomography to image, as accurately as possible, a heterogeneous medium containing seismogenic objects. Our approach consisted not only of incorporating double difference in tomography but also partly in revisiting tomographic schemes for choosing accurate and stable numerical strategies, adapted to the use of cross-spectral time delays. We used a finite difference solution to the eikonal equation for travel time computation and a Tarantola-Valette approach for both the classical and double-difference three-dimensional tomographic inversion to find accurate earthquake locations and seismic velocity estimates. We estimated efficiently the square root of the inverse model's covariance matrix in the case of a Gaussian correlation function. It allows the use of correlation length and a priori model variance criteria to determine the optimal solution. Double-difference relocation of similar earthquakes is performed in the optimal velocity model, making absolute and relative locations less biased by the velocity model. Double-difference tomography is achieved by using high-accuracy time delay measurements. These algorithms have been applied to earthquake data recorded in the vicinity of Kilauea and Mauna Loa volcanoes for imaging the volcanic structures. Stable and detailed velocity models are obtained: the regional tomography unambiguously highlights the structure of the island of Hawaii and the double-difference tomography shows a detailed image of the southern Kilauea caldera-upper east rift zone magmatic complex. Copyright 2005 by the American Geophysical Union.

  6. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
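
    The sketch below illustrates the fingerprint-and-hash idea behind this kind of similarity search: short data windows are reduced to compact binary fingerprints, and the fingerprints are hashed band by band so that similar windows collide in at least one bucket. The fingerprint definition, window length, and band count are deliberate simplifications and do not reproduce the FAST implementation.

import numpy as np
from scipy.signal import spectrogram
from collections import defaultdict
from itertools import combinations

# Minimal sketch of fingerprint-and-hash similarity search. The fingerprint
# (sign of adjacent log-spectrogram differences), the window length, and the
# band count are illustrative simplifications of FAST, not its implementation.
FS = 100.0

def fingerprint(window, fs=FS):
    """Binary fingerprint: sign of adjacent time-frequency energy differences."""
    _, _, sxx = spectrogram(window, fs=fs, nperseg=64, noverlap=32)
    feat = np.log10(sxx + 1e-12).ravel()
    return (np.diff(feat) > 0).astype(np.uint8)

def candidate_pairs(windows, n_bands=16):
    """Group windows whose fingerprints share an identical hash band."""
    buckets = defaultdict(list)
    for idx, w in enumerate(windows):
        fp = fingerprint(w)
        for b, band in enumerate(np.array_split(fp, n_bands)):
            buckets[(b, band.tobytes())].append(idx)
    pairs = set()
    for members in buckets.values():
        pairs.update(combinations(sorted(members), 2))
    return sorted(pairs)

# toy example: noise windows, two of which carry the same waveform; for clarity
# the repeated windows are identical here, whereas FAST is built to also match
# similar but noisy repeats.
rng = np.random.default_rng(0)
windows = [rng.normal(0.0, 1.0, int(10 * FS)) for _ in range(20)]
windows[11] = windows[3].copy()
print("candidate similar pairs:", candidate_pairs(windows))   # [(3, 11)]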

  7. Seismic swarm associated with the 2008 eruption of Kasatochi Volcano, Alaska: earthquake locations and source parameters

    USGS Publications Warehouse

    Ruppert, Natalia G.; Prejean, Stephanie G.; Hansen, Roger A.

    2011-01-01

    An energetic seismic swarm accompanied an eruption of Kasatochi Volcano in the central Aleutian volcanic arc in August of 2008. In retrospect, the first earthquakes in the swarm were detected about 1 month prior to the eruption onset. Activity in the swarm quickly intensified less than 48 h prior to the first large explosion and subsequently subsided with decline of eruptive activity. The largest earthquake measured as moment magnitude 5.8, and a dozen additional earthquakes were larger than magnitude 4. The swarm exhibited both tectonic and volcanic characteristics. Its shear failure earthquake features were b value = 0.9, most earthquakes with impulsive P and S arrivals and higher-frequency content, and earthquake faulting parameters consistent with regional tectonic stresses. Its volcanic or fluid-influenced seismicity features were volcanic tremor, large CLVD components in moment tensor solutions, and increasing magnitudes with time. Earthquake location tests suggest that the earthquakes occurred in a distributed volume elongated in the NS direction either directly under the volcano or within 5-10 km south of it. Following the MW 5.8 event, earthquakes occurred in a new crustal volume slightly east and north of the previous earthquakes. The central Aleutian Arc is a tectonically active region with seismicity occurring in the crusts of the Pacific and North American plates in addition to interplate events. We postulate that the Kasatochi seismic swarm was a manifestation of the complex interaction of tectonic and magmatic processes in the Earth's crust. Although magmatic intrusion triggered the earthquakes in the swarm, the earthquakes failed in the context of the regional stress field.

  8. Aftershocks, earthquake effects, and the location of the large 14 December 1872 earthquake near Entiat, central Washington

    USGS Publications Warehouse

    Brocher, Thomas M.; Hopper, Margaret G.; Algermissen, S.T. Ted; Perkins, David M.; Brockman, Stanley R.; Arnold, Edouard P.

    2017-01-01

    Reported aftershock durations, earthquake effects, and other observations from the large 14 December 1872 earthquake in central Washington are consistent with an epicenter near Entiat, Washington. Aftershocks were reported for more than 3 months only near Entiat. Modal intensity data described in this article are consistent with an Entiat area epicenter, where the largest modified Mercalli intensities, VIII, were assigned between Lake Chelan and Wenatchee. Although ground failures and water effects were widespread, there is a concentration of these features along the Columbia River and its tributaries in the Entiat area. Assuming linear ray paths, misfits from 23 reports of the directions of horizontal shaking have a local minimum at Entiat, assuming the reports are describing surface waves, but the region having comparable misfit is large. Broadband seismograms recorded for comparable ray paths provide insight into the reasons why possible S–P times estimated from felt reports at two locations are several seconds too small to be consistent with an Entiat area epicenter.

  9. Comparison of optimized algorithms in facility location allocation problems with different distance measures

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Chandrawat, Rajesh Kumar; Garg, B. P.; Joshi, Varun

    2017-07-01

    Opening a new firm or branch with the desired performance is closely related to the facility location problem. For example, when locating new ambulances and firehouses, the government seeks to minimize the average emergency response time for all residents of a city, so finding the best location is a major practical challenge. Problems of this type are known as facility location problems, and many algorithms have been developed to handle them. In this paper, we review five algorithms that have been applied to facility location problems. The significance of clustering in facility location problems is also presented. First we compare the Fuzzy c-means (FCM) clustering algorithm with the alternating heuristic (AH) algorithm, and then with Particle Swarm Optimization (PSO) algorithms, using different types of distance function. The data were clustered with the help of FCM, and the median model and the min-max problem model were then applied to the clustered data. After finding optimized locations with these algorithms, we compute the distance from each optimized location to the demand points using different distance measures and compare the results. Finally, we design a general example to validate the feasibility of the five algorithms for facility location optimization and to assess their advantages and drawbacks.

  10. Seismic swarm associated with the 2008 eruption of Kasatochi Volcano, Alaska: Earthquake locations and source parameters

    USGS Publications Warehouse

    Ruppert, N.A.; Prejean, S.; Hansen, R.A.

    2011-01-01

    An energetic seismic swarm accompanied an eruption of Kasatochi Volcano in the central Aleutian volcanic arc in August of 2008. In retrospect, the first earthquakes in the swarm were detected about 1 month prior to the eruption onset. Activity in the swarm quickly intensified less than 48 h prior to the first large explosion and subsequently subsided with decline of eruptive activity. The largest earthquake measured as moment magnitude 5.8, and a dozen additional earthquakes were larger than magnitude 4. The swarm exhibited both tectonic and volcanic characteristics. Its shear failure earthquake features were b value = 0.9, most earthquakes with impulsive P and S arrivals and higher-frequency content, and earthquake faulting parameters consistent with regional tectonic stresses. Its volcanic or fluid-influenced seismicity features were volcanic tremor, large CLVD components in moment tensor solutions, and increasing magnitudes with time. Earthquake location tests suggest that the earthquakes occurred in a distributed volume elongated in the NS direction either directly under the volcano or within 5-10 km south of it. Following the MW 5.8 event, earthquakes occurred in a new crustal volume slightly east and north of the previous earthquakes. The central Aleutian Arc is a tectonically active region with seismicity occurring in the crusts of the Pacific and North American plates in addition to interplate events. We postulate that the Kasatochi seismic swarm was a manifestation of the complex interaction of tectonic and magmatic processes in the Earth's crust. Although magmatic intrusion triggered the earthquakes in the swarm, the earthquakes failed in the context of the regional stress field. Copyright © 2011 by the American Geophysical Union.

  11. A new algorithm to detect earthquakes outside the seismic network: preliminary results

    NASA Astrophysics Data System (ADS)

    Giudicepietro, Flora; Esposito, Antonietta Maria; Ricciolino, Patrizia

    2017-04-01

    In this work we present a new technique for detecting earthquakes outside the seismic network, which are often the cause of failures in automatic analysis systems. Our goal is to develop a robust method that provides the discrimination result as quickly as possible. We discriminate local earthquakes from regional earthquakes, both recorded at the SGG station, equipped with short-period sensors and operated by the Osservatorio Vesuviano (INGV) in the Southern Apennines (Italy). The technique uses a Multi Layer Perceptron (MLP) neural network with an architecture composed of an input layer, a hidden layer and a single-node output layer. We pre-processed the data using the Linear Predictive Coding (LPC) technique to extract the spectral features of the signals in a compact form. We performed several experiments by shortening the signal window length. In particular, we used windows of 4, 2 and 1 seconds containing the onset of the local and regional earthquakes. We used a dataset of 103 local earthquakes and 79 regional earthquakes, most of which occurred in Greece, Albania and Crete. We split the dataset into a training set, for the network training, and a testing set to evaluate the network's discrimination capacity. In order to assess the network stability, we repeated this procedure six times, randomly changing the data composition of the training and testing set and the initial weights of the net. We estimated the performance of this method by calculating the average of the correct detection percentages obtained for each of the six permutations. The average performances are 99.02%, 98.04% and 98.53%, which refer respectively to the experiments carried out on 4-, 2- and 1-second signal windows. The results show that our method is able to recognize the earthquakes outside the seismic network using only the first second of the seismic records, with a suitable percentage of correct detection. Therefore, this algorithm can be profitably used to make earthquake automatic
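
    The sketch below illustrates the processing chain described above on synthetic data: LPC coefficients computed by the autocorrelation method compress a short onset window, and a small MLP classifies local versus regional events. The scikit-learn classifier, the toy waveforms, and the LPC order stand in for the authors' network and the SGG-station dataset; they are illustrative assumptions only.

import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Minimal sketch: LPC features from 1-s windows feed a one-hidden-layer MLP.
# The synthetic waveforms, LPC order, and network size are assumptions.

def lpc_features(window, order=10):
    """Linear Predictive Coding coefficients via the autocorrelation method."""
    w = window - np.mean(window)
    r = np.correlate(w, w, mode="full")[len(w) - 1:len(w) + order]
    return solve_toeplitz(r[:order], r[1:order + 1])

def synth_event(dominant_hz, fs=100.0, seconds=1.0, rng=None):
    """Toy stand-in for a 1-s onset window with a given dominant frequency."""
    t = np.arange(0.0, seconds, 1.0 / fs)
    return np.sin(2 * np.pi * dominant_hz * t) + 0.3 * rng.normal(size=t.size)

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(100):
    X.append(lpc_features(synth_event(8.0, rng=rng)))   # higher-frequency "local" onset
    y.append("local")
    X.append(lpc_features(synth_event(2.0, rng=rng)))   # lower-frequency "regional" onset
    y.append("regional")

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))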

  12. Temporal Variation of Tectonic Tremor Activity Associated with Nearby Earthquakes

    NASA Astrophysics Data System (ADS)

    Chao, K.; Van der Lee, S.; Hsu, Y. J.; Pu, H. C.

    2017-12-01

    Tectonic tremor and slow slip events, located downdip from the seismogenic zone, hold the key to recurring patterns of typical earthquakes. Several findings of slow aseismic slip during the prenucleation processes of nearby earthquakes have provided new insight into the study of stress transfer from slow earthquakes in fault zones prior to megathrust earthquakes. However, how tectonic tremor is associated with the occurrence of nearby earthquakes remains unclear. To enhance our understanding of the stress interaction between tremor and earthquakes, we developed an algorithm for the automatic detection and location of tectonic tremor in the collisional tectonic environment in Taiwan. Our analysis of a three-year data set indicates a short-term increase in the tremor rate starting 19 days before the 2010 ML6.4 Jiashian main shock (Chao et al., JGR, 2017). Around the time when the tremor rate began to rise, one GPS station recorded a flip in its direction of motion. We hypothesize that tremor is driven by a slow-slip event that preceded the occurrence of the shallower nearby main shock, even though the inferred slip is too small to be observed by all GPS stations. To better quantify the necessary conditions for tremor to respond to nearby earthquakes, we obtained a 13-year ambient tremor catalog from 2004 to 2016 in the same region. We examine the spatiotemporal relationship between tremor and 37 nearby ML>=5.0 earthquakes (seven events with ML>=6.0) located within 0.5 degrees of the active tremor sources. The findings from this study can enhance our understanding of the interaction among tremor, slow slip, and nearby earthquakes in high seismic hazard regions.

  13. Algorithms for System Identification and Source Location.

    NASA Astrophysics Data System (ADS)

    Nehorai, Arye

    This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and input-output impulse response by matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOA's) of waves between the receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOA's. Results from simulations illustrate the performance of the

  14. Propagation of the velocity model uncertainties to the seismic event location

    NASA Astrophysics Data System (ADS)

    Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.

    2015-01-01

    Earthquake hypocentre locations are crucial in many domains of application (academic and industrial) as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location and uncertainty errors is due to the fact that velocity model errors are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the formulation of the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This enables more reliable hypocentre locations to be obtained, together with associated uncertainties that account for both picking and velocity model uncertainties. We illustrate the tomography results and the gain in accuracy of earthquake location for two synthetic examples and one real data case study in the context of induced microseismicity.
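
    A rough sketch of the propagation step follows: velocity models sampled from an assumed tomography posterior are each used to relocate the event (here by a simple epicentral grid search in a homogeneous medium), and the scatter of the resulting locations combines picking and velocity-model uncertainty. The homogeneous velocity, Gaussian posterior, and grid search are illustrative simplifications of the Bayesian workflow described above.

import numpy as np

# Minimal sketch: relocate one event for many velocity models sampled from an
# assumed posterior, and use the scatter of locations as the propagated
# uncertainty. Homogeneous velocity and Gaussian posterior are assumptions.
rng = np.random.default_rng(0)
stations = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])   # km
true_xy, true_v = np.array([12.0, 18.0]), 6.0                               # km, km/s
t_obs = np.linalg.norm(stations - true_xy, axis=1) / true_v
t_obs += rng.normal(0.0, 0.02, size=t_obs.size)        # picking errors (s)

grid = np.linspace(0.0, 30.0, 121)
gx, gy = np.meshgrid(grid, grid)

def locate(velocity):
    """Grid search for the epicenter minimizing origin-time-free RMS residuals."""
    d = np.sqrt((gx[..., None] - stations[:, 0]) ** 2
                + (gy[..., None] - stations[:, 1]) ** 2)
    tt = d / velocity
    res = (t_obs - tt) - np.mean(t_obs - tt, axis=-1, keepdims=True)
    rms = np.sqrt(np.mean(res ** 2, axis=-1))
    j, i = np.unravel_index(np.argmin(rms), rms.shape)
    return gx[j, i], gy[j, i]

# sample velocity models from an assumed Gaussian posterior and relocate
samples = rng.normal(6.0, 0.3, size=100)                # km/s
locations = np.array([locate(v) for v in samples])
print("mean epicenter (km):", locations.mean(axis=0))
print("1-sigma location spread (km):", locations.std(axis=0))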

  15. Inverting the parameters of an earthquake-ruptured fault with a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Ting-To; Fernàndez, Josè; Rundle, John B.

    1998-03-01

    Natural selection is the guiding principle of the genetic algorithm (GA): good genes are kept in the current generation and thereby produce better offspring during evolution. The crossover operation ensures that good genes are inherited from parents by offspring, while mutation creates new genes whose characteristics do not exist in the parent generation. A genetic-algorithm program written in C is constructed to invert the parameters of an earthquake-ruptured fault. Verification and application of this code demonstrate its capabilities. The code is able to find the global extremum and can be used to solve practical problems with constraints gathered from other sources. GA is shown to be superior to other inversion schemes in many respects. This easy-to-use yet powerful algorithm should have many suitable applications in the field of geosciences.
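
    A minimal real-coded GA sketch in Python illustrating the selection/crossover/mutation loop described above; the paper's C implementation and its dislocation forward model are not reproduced, so a placeholder misfit function and invented parameter bounds stand in for the fault-parameter inversion.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def misfit(params):
        """Placeholder objective standing in for the fault forward model; in the
        paper this would measure the fit between observed and modelled deformation
        for fault parameters such as strike, dip and slip."""
        target = np.array([30.0, 60.0, 1.5])           # hypothetical "true" parameters
        return np.sum((params - target) ** 2)

    def genetic_algorithm(bounds, pop_size=60, generations=200, pmut=0.1):
        lo, hi = bounds[:, 0], bounds[:, 1]
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        for _ in range(generations):
            fitness = np.array([misfit(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]   # selection: keep the best half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                w = rng.random()
                child = w * a + (1 - w) * b                        # arithmetic crossover
                if rng.random() < pmut:                            # mutation: random gene reset
                    k = rng.integers(len(child))
                    child[k] = rng.uniform(lo[k], hi[k])
                children.append(child)
            pop = np.vstack([parents, np.array(children)])
        return pop[np.argmin([misfit(ind) for ind in pop])]

    bounds = np.array([[0.0, 90.0], [0.0, 90.0], [0.0, 5.0]])      # e.g. strike, dip, slip (m)
    print(genetic_algorithm(bounds))                               # converges near the target
    ```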

  16. Aftershock Distribution of the Mw=7.8 April 16, 2016 Pedernales Ecuador Subduction Earthquake: Constraints from 3D Earthquake Locations

    NASA Astrophysics Data System (ADS)

    Font, Y.; Agurto-Detzel, H.; Alvarado, A. P.; Regnier, M. M.; Rolandone, F.; Charvis, P.; Mothes, P. A.; Nocquet, J. M.; Jarrin, P.; Ambrois, D.; Maron, C.; Deschamps, A.; Cheze, J.; Peix, F., Sr.; Ruiz, M. C.; Gabriela, P.; Acero, W.; Singaucho, J. C.; Viracucha, C.; Vasconez, F.; De Barros, L.; Mercerat, D.; Courboulex, F.; Galve, A.; Godano, M.; Monfret, T.; Ramos, C.; Martin, X.; Rietbrock, A.; Beck, S. L.; Metlzer, A.

    2017-12-01

    The Mw7.8 Pedernales earthquake is associated with the subduction of the Nazca Plate beneath the South American Plate. The mainshock caused many casualties and widespread damage across the Manabi province. The 150 km-long coseismic rupture area extends beneath the coastline, near 25 km depth. The rupture propagated southward and involved the successive rupture of two discrete asperities, with a maximum slip (~5 m) on the southern patch. The rupture area is consistent with the highly locked regions observed on interseismic coupling models, overlaps the Mw 7.2 rupture zone, and terminates near where the 1906 Mw 8.8 megathrust earthquake rupture zone is estimated to have ended. Two neighboring highly coupled patches remain locked: (A) south and updip of the coseismic rupture zone and (B) north and downdip. In this study, we locate the first month of aftershocks and compare the seismicity distribution with the interseismic coupling, the rupture area and the early afterslip. We use continuous seismic traces recorded on the permanent network partly installed in the framework of the collaboration between l'Institut de Recherche pour le Développement (France) and the Instituto Geofísico, Escuela Politécnica Nacional (IGEPN), Quito, Ecuador. Detections are conducted using Seiscomp in playback mode and arrival times are picked manually. To improve earthquake locations, we use the MAXI technique and a heterogeneous a priori P-wave velocity model that approximates the large velocity variations of the Ecuadorian subduction system. Aftershocks align along 3 to 4 main clusters that strike perpendicular to the trench, and mostly updip of the co-seismic rupture. Seismicity develops over portions of the plate interface that are known to be strongly locked or almost uncoupled. The seismicity pattern is similar to the one observed during a decade of interseismic observation, with swarms such as the Galera alignment, Jama and Cabo

  17. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  18. Automatic Earthquake Detection by Active Learning

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
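
    A minimal uncertainty-sampling sketch of the human-in-the-loop idea, assuming scikit-learn is available; synthetic features stand in for waveform-derived features, and the known labels play the role of the human analyst answering the algorithm's queries.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # Synthetic two-class data standing in for waveform features (earthquake vs noise).
    X = rng.normal(size=(2000, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

    labeled = list(rng.choice(len(X), size=20, replace=False))  # small initial training set
    pool = [i for i in range(len(X)) if i not in labeled]

    clf = LogisticRegression()
    for _ in range(10):                        # ten rounds of querying the "analyst"
        clf.fit(X[labeled], y[labeled])
        proba = clf.predict_proba(X[pool])[:, 1]
        uncertainty = np.abs(proba - 0.5)      # least-confident samples are queried
        query = [pool[i] for i in np.argsort(uncertainty)[:10]]
        labeled.extend(query)                  # human-in-the-loop labels (here: the known y)
        pool = [i for i in pool if i not in query]

    print("accuracy on the remaining pool:", clf.score(X[pool], y[pool]))
    ```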

  19. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    NASA Astrophysics Data System (ADS)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
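
    The published MRT algorithm is not reproduced here; the sketch below only illustrates the underlying ingredient, namely converting Gutenberg-Richter parameters estimated in a moving window (b-value via Aki's maximum-likelihood formula) into a mean recurrence time for a target magnitude. The catalogue values are synthetic and the function name is an invention.

    ```python
    import numpy as np

    def mean_recurrence_time(mags, times, m_c, m_target, dm=0.1):
        """Gutenberg-Richter based mean recurrence time of events >= m_target,
        estimated from the events (magnitudes `mags`, times `times` in days)
        falling in one moving window. A sketch of the idea only; the published
        MRT algorithm adds further ingredients."""
        sel = mags >= m_c
        m, t = mags[sel], times[sel]
        b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))       # Aki (1965) ML estimate
        rate_mc = len(m) / (t.max() - t.min())                    # events per day above m_c
        rate_target = rate_mc * 10.0 ** (-b * (m_target - m_c))   # G-R extrapolation
        return 1.0 / rate_target, b

    # Illustrative synthetic swarm: 300 events over 30 days with b close to 1.
    rng = np.random.default_rng(3)
    mags = 1.5 + rng.exponential(scale=np.log10(np.e), size=300)
    times = np.sort(rng.uniform(0.0, 30.0, size=300))
    mrt, b = mean_recurrence_time(mags, times, m_c=1.5, m_target=4.0)
    print(f"b = {b:.2f}, mean recurrence time of M>=4: {mrt:.1f} days")
    ```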

  20. Statistical validation of earthquake related observations

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2011-12-01

    The confirmed fractal nature of earthquakes and their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term ones) are usually based on erroneous assumptions of easily tractable or, conversely, delicately designed models. The widespread practice of deceptive modeling considered a "reasonable proxy" of the natural seismic process leads to seismic hazard assessments of unknown quality, whose errors propagate non-linearly into the resulting estimates of risk and, eventually, into unexpected societal losses of unacceptable magnitude. Studies aimed at earthquake forecast/prediction must include validation in retrospective (at least) and, eventually, prospective tests. In the absence of such control a suggested "precursor/signal" remains a "candidate" whose link to the target seismic event is a model assumption. Prediction in advance is the only decisive test of forecasts/predictions; therefore, the scorecard of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must demonstrate statistical significance by rejecting the null hypothesis of random coincidence with the target earthquakes. We reiterate the so-called "Seismic Roulette" null hypothesis as the most adequate undisturbed random alternative accounting for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as there are earthquake locations in a sample catalog representing the seismic locus, one sector per location; (ii) make your bet according to the prediction (i.e., determine which locations are inside the area of alarm and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate statistics of wins and losses along with the number of chips spent. If a precursor in charge of prediction exposes an imperfection of Seismic Roulette then, having in mind
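
    A minimal Monte Carlo realization of the Seismic Roulette idea under the stated null hypothesis: target events fall at random on the catalogued locations, and the significance of a prediction is the probability of doing at least as well by chance. The counts and the alarm fraction in the example are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def seismic_roulette_pvalue(n_catalog, alarm_fraction, n_targets, n_hits, n_trials=100000):
        """Probability, under the Seismic Roulette null hypothesis, of at least
        n_hits of n_targets target earthquakes falling inside an alarm set that
        covers alarm_fraction of the catalogued locations."""
        n_alarm = int(round(alarm_fraction * n_catalog))
        # Each target event lands on a random catalogued location (a sector of the
        # wheel); the first n_alarm sectors represent the alarm area.
        sectors = rng.integers(0, n_catalog, size=(n_trials, n_targets))
        hits = np.count_nonzero(sectors < n_alarm, axis=1)
        return np.mean(hits >= n_hits)

    # Example: alarms cover 20% of the seismic locus and 8 of 10 targets were predicted.
    print(seismic_roulette_pvalue(n_catalog=1000, alarm_fraction=0.2, n_targets=10, n_hits=8))
    ```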

  1. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    NASA Astrophysics Data System (ADS)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and the primary algorithms of a real-time locating system and SLAM technology on a ROS-based robot. It proposes an improved particle-filter locating algorithm that effectively reduces the time needed to match laser-radar scans against the map; the addition of ultra-wideband positioning directly improves the overall efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, resampling is reduced by about 5/6, which largely removes the map-matching step from the robot's algorithm.

  2. A Bayesian Approach to Real-Time Earthquake Phase Association

    NASA Astrophysics Data System (ADS)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity has been in use now for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel, Bayesian association algorithm, which looks at the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one to many relations (one earthquake, many phases), during the association process the situation is quite different. Both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.

  3. Outward-dipping ring-fault structure at rabaul caldera as shown by earthquake locations.

    PubMed

    Mori, J; McKee, C

    1987-01-09

    The locations of a large number of earthquakes recorded at Rabaul caldera in Papua New Guinea from late 1983 to mid-1985 have produced a picture of this active caldera's structural boundary. The earthquake epicenters form an elliptical annulus about 10 kilometers long by 4 kilometers wide, centered in the southern part of the Rabaul volcanic complex. A set of events with well-constrained depth determinations shows a ring-fault structure that extends from the surface to a depth of about 4 kilometers and slopes steeply outward from the center of the caldera. This is the first geophysical data set that clearly outlines the orientation of an active caldera's bounding faults. This orientation, however, conflicts with the configuration of many other calderas and is not in keeping with currently preferred models of caldera formation.

  4. Estimating the Locations of Past and Future Large Earthquake Ruptures using Recent M4 and Greater Events

    NASA Astrophysics Data System (ADS)

    Ebel, J.; Chambers, D. W.

    2017-12-01

    Although most aftershock activity dies away within months or a few years of a mainshock, there is evidence that aftershocks still occur decades or even centuries after mainshocks, particularly in areas of low background seismicity such as stable continental regions. There also is evidence of long-lasting aftershock sequences in California. New work studying the occurrence of recent M≥4 earthquakes in California shows that these events occur preferentially at the edges of past major ruptures, with the effect lessening with decreasing magnitude below M4. Prior to several California mainshocks, the M≥4 seismicity was uniformly spread along the future fault ruptures without concentrations at the fault ends. On these faults, the rates of the M≥4 earthquakes prior to the mainshocks were much greater than the rates of the recent M≥4 earthquakes. These results suggest that the spatial patterns and rates of M≥4 earthquakes may help identify which faults are most prone to rupturing in the near future. Using this idea, speculation on which faults in California may be the next ones to experience major earthquakes is presented. Some Japanese earthquakes were also tested for the patterns of M≥4 earthquakes seen in California. The 2000 Mw6.6 Western Tottori earthquake shows a premonitory pattern similar to the patterns seen in California, and there have not been any M≥4 earthquakes in the fault vicinity since 2010. The 1995 Mw6.9 Kobe earthquake had little M≥4 seismicity in the years prior to the mainshock, and the M≥4 seismicity since 2000 has been scattered along the fault rupture. Both the 2016 M7.3 Kumamoto, Kyushu earthquake and the 2016 Mw6.2 Central Tottori earthquake had some M≥4 earthquakes along the fault in the two decades before the mainshocks. The results of these analyses suggest that the locations of recent M≥4 earthquakes may be useful for determining the spatial extents of past earthquake ruptures and also may help indicate which faults may have strong

  5. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. The initial solution and objective-function curvature information that gradient methods require are not needed in our approach. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid search methods.

  6. Earthquake prediction in California using regression algorithms and cloud-based big data infrastructure

    NASA Astrophysics Data System (ADS)

    Asencio-Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez-Álvarez, F.

    2018-06-01

    Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results. In recent years, powerful computational techniques to analyze big data have emerged, making the analysis of massive datasets possible. These new methods make use of physical resources such as cloud-based architectures. California is known for being one of the regions with the highest seismic activity in the world, and abundant data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in the context of big data (a 1 GB catalog is used), in order to predict earthquake magnitudes within the next seven days. The Apache Spark framework, the H2O library in the R language and Amazon cloud infrastructure were used, reporting very promising results.

  7. Disease and injury trends among evacuees in a shelter located at the epicenter of the 2016 Kumamoto earthquakes, Japan.

    PubMed

    Yorifuji, Takashi; Sato, Takushi; Yoneda, Toru; Kishida, Yoshiomi; Yamamoto, Sumie; Sakai, Taro; Sashiyama, Hiroshi; Takahashi, Shuko; Orui, Hayato; Kato, Daisuke; Hasegawa, Taro; Suzuki, Yoshihiro; Okamoto, Maki; Hayashi, Hideki; Suganami, Shigeru

    2017-06-16

    Two huge earthquakes struck Kumamoto, Japan, in April 2016, forcing residents to evacuate. Few studies have reported early-phase disease and injury trends among evacuees following major inland earthquakes. We evaluated the trends among evacuees who visited a medical clinic in a shelter located at the epicenter of the 2016 Kumamoto earthquakes. The clinic opened on April 15, the day after the foreshock, and closed 3 weeks later. We reviewed medical charts related to 929 outpatient visits and conducted descriptive analyses. The evacuees experienced mild injuries and common diseases. The types of diseases changed weekly. Elderly people needed medical support for longer than other age groups. Future earthquakes may be inevitable, but establishing arrangements for medical needs or making precautions for infectious diseases in shelters could reduce the effects of earthquake-related health problems.

  8. Constraining Source Locations of Shallow Subduction Megathrust Earthquakes in 1-D and 3-D Velocity Models - A Case Study of the 2002 Mw=6.4 Osa Earthquake, Costa Rica

    NASA Astrophysics Data System (ADS)

    Grevemeyer, I.; Arroyo, I. G.

    2015-12-01

    Earthquake source locations are generally constrained routinely using a global 1-D Earth model. However, the source location might be associated with large uncertainties. This is definitively the case for earthquakes occurring at active continental margins, where thin oceanic crust subducts below thick continental crust and hence large lateral changes in crustal thickness occur as a function of distance to the deep-sea trench. Here, we conducted a case study of the 2002 Mw 6.4 Osa thrust earthquake in Costa Rica that was followed by an aftershock sequence. Initial relocations indicated that the main shock occurred farther trenchward than most large earthquakes along the Middle America Trench off central Costa Rica. The earthquake sequence occurred while a temporary network of ocean-bottom hydrophones and land stations 80 km to the northwest was deployed. By adding readings from permanent Costa Rican stations, we obtain uncommon P wave coverage of a large subduction zone earthquake. We relocated this catalog with a nonlinear probabilistic approach using a 1-D and two 3-D P-wave velocity models. The 3-D models were either derived from 3-D tomography based on onshore stations or built a priori from seismic refraction data. All epicentres occurred close to the trench axis, but depth estimates vary by several tens of kilometres. Based on the epicentres and constraints from seismic reflection data, the main shock occurred 25 km from the trench and probably along the plate interface at 5-10 km depth. The source location that agreed best with the geology was based on the 3-D velocity model derived from a priori data. Aftershocks propagated downdip to the area of a 1999 Mw 6.9 sequence and partially overlapped it. The results indicate that underthrusting of the young and buoyant Cocos Ridge has created conditions for interplate seismogenesis shallower and closer to the trench axis than elsewhere along the central Costa Rica margin.

  9. Study on Earthquake Emergency Evacuation Drill Trainer Development

    NASA Astrophysics Data System (ADS)

    ChangJiang, L.

    2016-12-01

    As China's urbanization advances, ensuring that people survive earthquakes requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we built simulation software for earthquake emergency evacuation drills. The software performs the simulation by building a spatial structural model and selecting people's location information according to the actual conditions of the buildings. Based on the simulation data, drills can then be conducted in the same building. RFID technology can be used here for drill data collection; it reads personal information and sends it to the evacuation simulation software via WIFI. The simulation software then compares the simulated data with the information from the actual evacuation process, such as evacuation time, evacuation paths, congestion nodes and so on. In the end, it provides a comparative analysis report with the assessment results and an optimization proposal. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process management concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable and scientific, increasing the capacity of cities to cope with earthquake hazards.

  10. Seismicity, faulting, and structure of the Koyna-Warna seismic region, Western India from local earthquake tomography and hypocenter locations

    NASA Astrophysics Data System (ADS)

    Dixit, Madan M.; Kumar, Sanjay; Catchings, R. D.; Suman, K.; Sarkar, Dipankar; Sen, M. K.

    2014-08-01

    Although seismicity near Koyna Reservoir (India) has persisted for ~50 years and includes the largest induced earthquake (M 6.3) reported worldwide, the seismotectonic framework of the area is not well understood. We recorded ~1800 earthquakes from 6 January 2010 to 28 May 2010 and located a subset of 343 of the highest-quality earthquakes using the tomoDD code of Zhang and Thurber (2003) to better understand the framework. We also inverted first arrivals for 3-D Vp, Vs, and Vp/Vs and Poisson's ratio tomography models of the upper 12 km of the crust. Epicenters for the recorded earthquakes are located south of the Koyna River, including a high-density cluster that coincides with a shallow depth (<1.5 km) zone of relatively high Vp and low Vs (also high Vp/Vs and Poisson's ratios) near Warna Reservoir. This anomalous zone, which extends near vertically to at least 8 km depth and laterally northward at least 15 km, is likely a water-saturated zone of faults under high pore pressures. Because many of the earthquakes occur on the periphery of the fault zone, rather than near its center, the observed seismicity-velocity correlations are consistent with the concept that many of the earthquakes nucleate in fractures adjacent to the main fault zone due to high pore pressure. We interpret our velocity images as showing a series of northwest trending faults locally near the central part of Warna Reservoir and a major northward trending fault zone north of Warna Reservoir.

  11. The Development of Several Electromagnetic Monitoring Strategies and Algorithms for Validating Pre-Earthquake Electromagnetic Signals

    NASA Astrophysics Data System (ADS)

    Bleier, T. E.; Dunson, J. C.; Roth, S.; Mueller, S.; Lindholm, C.; Heraud, J. A.

    2012-12-01

    QuakeFinder, a private research group in California, reports on the development of a 100+ station network consisting of 3-axis induction magnetometers and air conductivity sensors to collect and characterize pre-seismic electromagnetic (EM) signals. These signals are combined with daily infrared (IR) signals collected from the GOES weather satellite IR instrument to compare and correlate with the ground EM signals, both from actual earthquakes and from boulder stressing experiments. This presentation describes the efforts QuakeFinder has undertaken to automatically detect these pulse patterns using their historical data as a reference, and to develop other discriminative algorithms that can be used with the air conductivity sensors and the IR instruments on the GOES satellites. The overall big-picture results of the QuakeFinder experiment are presented. In 2007, QuakeFinder discovered the occurrence of strong uni-polar pulses in their magnetometer coil data that increased in tempo dramatically prior to the M5.1 earthquake at Alum Rock, California. Suggestions that these pulses might have been lightning or power-line arcing did not fit the data actually recorded, as reported in Bleier [2009]. A second earthquake then occurred near the same site on January 7, 2010, as reported in Dunson [2011], and the pattern of pulse-count increases before the earthquake was similar to that of the 2007 event. There were fewer pulses, and their amplitudes were smaller, both consistent with the earthquake being smaller (M4.0 vs M5.4) and farther away (7 km vs 2 km). At the same time, similar effects were observed at the QuakeFinder Tacna, Peru site before the May 5th, 2010 M6.2 earthquake and a cluster of several M4-5 earthquakes.

  12. Bayesian historical earthquake relocation: an example from the 1909 Taipei earthquake

    USGS Publications Warehouse

    Minson, Sarah E.; Lee, William H.K.

    2014-01-01

    Locating earthquakes from the beginning of the modern instrumental period is complicated by the fact that there are few good-quality seismograms and what traveltimes do exist may be corrupted by both large phase-pick errors and clock errors. Here, we outline a Bayesian approach to simultaneous inference of not only the hypocentre location but also the clock errors at each station and the origin time of the earthquake. This methodology improves the solution for the source location and also provides an uncertainty analysis on all of the parameters included in the inversion. As an example, we applied this Bayesian approach to the well-studied 1909 Mw 7 Taipei earthquake. While our epicentre location and origin time for the 1909 Taipei earthquake are consistent with earlier studies, our focal depth is significantly shallower suggesting a higher seismic hazard to the populous Taipei metropolitan area than previously supposed.
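
    A minimal sketch of the joint inference idea, under strong simplifying assumptions (epicentre only, a homogeneous half-space velocity, Gaussian pick errors, and Gaussian priors on the station clock errors): a Metropolis sampler explores location, origin time and clock errors simultaneously. This is not the paper's parameterization; all numbers are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    stations = np.array([[0.0, 0.0], [80.0, 5.0], [40.0, 70.0], [-30.0, 50.0]])  # km
    v, pick_sigma, clock_sigma = 6.0, 1.0, 5.0   # P velocity (km/s), pick and clock-prior std (s)

    def predict(theta):
        """Arrival times for parameters theta = (x, y, t0, clock errors...)."""
        x, y, t0 = theta[:3]
        dist = np.linalg.norm(stations - np.array([x, y]), axis=1)
        return t0 + dist / v + theta[3:]

    def log_post(theta, t_obs):
        res = t_obs - predict(theta)
        return -0.5 * np.sum((res / pick_sigma) ** 2) - 0.5 * np.sum((theta[3:] / clock_sigma) ** 2)

    truth = np.array([25.0, 30.0, 10.0, 2.0, -3.0, 0.5, 4.0])      # synthetic "historical" event
    t_obs = predict(truth) + rng.normal(scale=0.5, size=len(stations))

    theta = np.zeros(7)
    lp, samples = log_post(theta, t_obs), []
    for _ in range(20000):                                          # Metropolis random walk
        prop = theta + rng.normal(scale=[2.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0])
        lp_prop = log_post(prop, t_obs)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples[5000:])                              # drop burn-in
    print("posterior mean (x, y, t0):", samples[:, :3].mean(axis=0))
    ```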

  13. Earthquake source parameters from GPS-measured static displacements with potential for real-time application

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2013-01-01

    We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location, which minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real-time on a single processor without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.

  14. Application of the region-time-length algorithm to study of earthquake precursors in the Thailand-Laos-Myanmar borders

    NASA Astrophysics Data System (ADS)

    Puangjaktha, P.; Pailoplee, S.

    2018-04-01

    In order to examine the precursory seismic quiescence of upcoming hazardous earthquakes, the seismicity data available in the vicinity of the Thailand-Laos-Myanmar borders were analyzed using a statistical technique based on the Region-Time-Length (RTL) algorithm. The earthquake data were obtained from the International Seismological Centre. Thereafter, the homogeneity and completeness of the catalogue were improved. After iterative tests with different values of the r0 and t0 parameters, values of r0 = 120 km and t0 = 2 yr yielded reasonable estimates of anomalous RTL scores, in both temporal variation and spatial distribution, a few years prior to five out of eight recognized strong-to-major earthquakes. Statistical evaluation of both the correlation coefficient and the stochastic behaviour of the RTL showed that the RTL scores obtained here are not artificial or random phenomena. Therefore, the prospective earthquake sources identified here should be recognized and effective mitigation plans should be provided.
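
    A simplified Python sketch of the RTL score (the epicentral-distance, time and rupture-length weighted sums with the usual 2·r0 and 2·t0 cut-offs); the detrending and normalisation steps of the published algorithm, and the actual ISC catalogue, are omitted, and the magnitude-to-rupture-length relation is only an illustrative proxy.

    ```python
    import numpy as np

    def rtl(x0, y0, t_eval, eqs, r0=120.0, t0=2.0):
        """Simplified Region-Time-Length score at location (x0, y0) and time t_eval.
        `eqs` holds rows (x, y, t, mag) with distances in km and times in years;
        detrending and normalisation of the published algorithm are omitted."""
        x, y, t, mag = eqs.T
        r = np.hypot(x - x0, y - y0)
        dt = t_eval - t
        sel = (dt > 0) & (dt <= 2 * t0) & (r <= 2 * r0)   # standard cut-offs
        r, dt, mag = r[sel], dt[sel], mag[sel]
        if r.size == 0:
            return 0.0
        l = 10.0 ** (0.5 * mag - 1.8)                     # rupture-length proxy (km)
        R = np.sum(np.exp(-r / r0))                       # epicentral-distance term
        T = np.sum(np.exp(-dt / t0))                      # time term
        L = np.sum(l / r)                                 # rupture-length term
        return R * T * L

    # Toy catalogue of (x, y, t, mag); anomalously low scores would flag quiescence.
    rng = np.random.default_rng(6)
    cat = np.column_stack([rng.uniform(0, 500, 200), rng.uniform(0, 500, 200),
                           np.sort(rng.uniform(2000, 2010, 200)), rng.uniform(3, 5, 200)])
    print(rtl(250.0, 250.0, 2010.0, cat))
    ```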

  15. A novel tree-based algorithm to discover seismic patterns in earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Florido, E.; Asencio-Cortés, G.; Aznarte, J. L.; Rubio-Escudero, C.; Martínez-Álvarez, F.

    2018-06-01

    A novel methodology is introduced in this research study to detect seismic precursors. Based on an existing approach, the new methodology searches for patterns in the historical data. Such patterns may contain statistical or soil dynamics information. It improves the original version in several aspects. First, new seismicity indicators have been used to characterize earthquakes. Second, a machine learning clustering algorithm has been applied in a very flexible way, thus allowing the discovery of new data groupings. Third, a novel search strategy is proposed in order to obtain non-overlapped patterns. Fourth, patterns of arbitrary length are searched for, thus discovering long- and short-term behaviors that may influence the occurrence of medium-large earthquakes. The methodology has been applied to seven different datasets, from three different regions, namely the Iberian Peninsula, Chile and Japan. Reported results show a remarkable improvement with respect to the former version, in terms of all evaluated quality measures. In particular, the number of false positives has decreased and the positive predictive values increased, both of them in a very remarkable manner.

  16. NASA Spacecraft Image Shows Location of Iranian Earthquake

    NASA Image and Video Library

    2017-12-08

    On April 9, 2013 at 11:52 GMT, a magnitude 6.3 earthquake hit southwestern Iran's Bushehr province near the town of Kaki. Preliminary information is that several villages have been destroyed and many people have died, as reported by BBC News. This perspective view of the region was acquired Nov. 17, 2012, by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA's Terra spacecraft. The location of the earthquake's epicenter is marked with a yellow star. Vegetation is displayed in red; the vertical exaggeration of the topography is 2X. The image is centered near 28.5 degrees north latitude, 51.6 degrees east longitude. With its 14 spectral bands from the visible to the thermal infrared wavelength region and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER images Earth to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched Dec. 18, 1999, on Terra. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and data products. The broad spectral coverage and high spectral resolution of ASTER provides scientists in numerous disciplines with critical information for surface mapping and monitoring of dynamic conditions and temporal change. Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining cloud morphology and physical properties; wetlands evaluation; thermal pollution monitoring; coral reef degradation; surface temperature mapping of soils and geology; and measuring surface heat balance. The U.S. science team is located at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The Terra mission is part of NASA's Science Mission Directorate, Washington, D.C. More information about ASTER is available at asterweb.jpl.nasa.gov/. Image Credit: NASA

  17. Optimizing the real-time automatic location of the events produced in Romania using an advanced processing system

    NASA Astrophysics Data System (ADS)

    Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu

    2016-04-01

    The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on the Romanian territory, which is dominated by intermediate-depth earthquakes (60-200 km) from the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the existence of a large number of high-quality observational data. The development of the network in recent years and advanced seismic acquisition are crucial to achieving this objective. The software package used to perform the automatic real-time locations is Seiscomp3. An accurate choice of the Seiscomp3 setting parameters is necessary to ensure the best performance of the real-time system, i.e., the most accurate locations for the earthquakes while avoiding false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate the earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA, filters applied to the waveforms) on a data set of representative earthquakes of the local seismicity. The results are compared with the locations from the Romanian Catalogue ROMPLUS.
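
    The STA/LTA windows and the trigger threshold are exactly the kind of parameters being tuned here; the sketch below is a generic windowed STA/LTA characteristic function on a synthetic trace, not SeisComp3 code, and the window lengths are illustrative.

    ```python
    import numpy as np

    def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
        """Classic STA/LTA characteristic function for a trace sampled at fs Hz;
        both averages end at the current sample and the ratio is zero until a
        full LTA window is available."""
        nsta, nlta = int(sta_win * fs), int(lta_win * fs)
        energy = np.asarray(trace, dtype=float) ** 2
        csum = np.concatenate(([0.0], np.cumsum(energy)))
        ratio = np.zeros(len(trace))
        for i in range(nlta, len(trace) + 1):
            sta = (csum[i] - csum[i - nsta]) / nsta
            lta = (csum[i] - csum[i - nlta]) / nlta
            ratio[i - 1] = sta / max(lta, 1e-12)
        return ratio

    # Noise with a burst of "signal"; a trigger would fire where the ratio exceeds ~4.
    rng = np.random.default_rng(7)
    fs = 100.0
    trace = rng.normal(scale=1.0, size=int(60 * fs))
    trace[3000:3200] += rng.normal(scale=8.0, size=200)   # simulated P arrival at 30 s
    cf = sta_lta(trace, fs)
    print("max STA/LTA:", round(cf.max(), 1), "at t =", cf.argmax() / fs, "s")
    ```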

  18. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    NASA Astrophysics Data System (ADS)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

    The spatial distribution of aftershocks is closely linked to the rupture extent of the mainshock that preceded them and a rapid analysis of aftershock patterns therefore has potential for use in near real-time estimates of earthquake impact. The correlation between aftershocks and slip distribution has frequently been used to estimate the fault dimensions of large historic earthquakes for which no, or insufficient, waveform data is available. With the advent of earthquake inversions that use seismic waveforms and geodetic data to constrain the slip distribution, the study of aftershocks has recently been largely focused on enhancing our understanding of the underlying mechanisms in a broader earthquake mechanics/dynamics framework. However, in a near real-time earthquake monitoring environment, in which aftershocks of large earthquakes are routinely detected and located, these data may also be effective in determining a fast estimate of the mainshock rupture area, which would aid in the rapid assessment of the impact of the earthquake. We have analyzed a considerable number of large recent earthquakes and their aftershock sequences and have developed an effective algorithm that determines the rupture extent of a mainshock from its aftershock distribution, in a fully automatic manner. The algorithm automatically removes outliers by spatial binning, and subsequently determines the best fitting “strike” of the rupture and its length by projecting the aftershock epicenters onto a set of lines that cross the mainshock epicenter with incremental azimuths. For strike-slip or large dip-slip events, for which the surface projection of the rupture is recti-linear, the calculated strike correlates well with the strike of the fault and the corresponding length, determined from the distribution of aftershocks projected onto the line, agrees well with the rupture length. In the case of a smaller dip-slip rupture with an aspect ratio closer to 1, the procedure gives a measure

  19. Seismicity, faulting, and structure of the Koyna-Warna seismic region, Western India from local earthquake tomography and hypocenter locations

    USGS Publications Warehouse

    Dixit, Madan M.; Kumar, Sanjay; Catchings, Rufus D.; Suman, K.; Sarkar, Dipankar; Sen, M.K.

    2014-01-01

    Although seismicity near Koyna Reservoir (India) has persisted for ~50 years and includes the largest induced earthquake (M 6.3) reported worldwide, the seismotectonic framework of the area is not well understood. We recorded ~1800 earthquakes from 6 January 2010 to 28 May 2010 and located a subset of 343 of the highest-quality earthquakes using the tomoDD code of Zhang and Thurber (2003) to better understand the framework. We also inverted first arrivals for 3-D Vp, Vs, and Vp/Vs and Poisson's ratio tomography models of the upper 12 km of the crust. Epicenters for the recorded earthquakes are located south of the Koyna River, including a high-density cluster that coincides with a shallow depth (<1.5 km) zone of relatively high Vp and low Vs (also high Vp/Vs and Poisson's ratios) near Warna Reservoir. This anomalous zone, which extends near vertically to at least 8 km depth and laterally northward at least 15 km, is likely a water-saturated zone of faults under high pore pressures. Because many of the earthquakes occur on the periphery of the fault zone, rather than near its center, the observed seismicity-velocity correlations are consistent with the concept that many of the earthquakes nucleate in fractures adjacent to the main fault zone due to high pore pressure. We interpret our velocity images as showing a series of northwest trending faults locally near the central part of Warna Reservoir and a major northward trending fault zone north of Warna Reservoir.

  20. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman,, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm’s efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.

  1. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.

  2. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

    A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.

  3. G-FAST Early Warning Potential for Great Earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Schmidt, D. A.; Baker, B. I.; Bodin, P.; Vidale, J. E.

    2016-12-01

    The importance of GNSS-based earthquake early warning for modeling large earthquakes has been studied extensively over the past decade and several such systems are currently under development. In the Pacific Northwest, we have developed the G-FAST GNSS-based earthquake early warning module for eventual inclusion in the US West-Coast wide ShakeAlert system. We have also created a test system that allows us to replay past and synthetic earthquakes to identify problems with both the network architecture and the algorithms. Between 2010 and 2016, there have been seven M > 8 earthquakes across the globe, of which three struck offshore Chile; the 27 February 2010 Mw 8.8 Maule, the 1 April 2014 Mw 8.2 Iquique, and the 16 September 2015 Mw 8.3 Illapel. Subsequent to these events, the Chilean national GNSS network operated by the Centro Sismologico Nacional (http://www.sismologia.cl/) greatly expanded to over 150 continuous GNSS stations, providing the best recordings of great earthquakes with GNSS outside of Japan. Here we report on retrospective G-FAST performance for those three great earthquakes in Chile. We discuss the interplay of location errors, latency, and data completeness with respect to the precision and timing of G-FAST earthquake source alerts as well as the computational demands of the system.

  4. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
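
    A minimal sketch of the stepping-plus-bracketing idea, assuming SciPy's brentq for the bracketed root finding: a coarse time step detects sign changes of a continuous event function, and each bracket is then refined. The sinusoidal event function below is purely illustrative and has no orbital meaning.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def find_events(g, t0, t1, step):
        """Locate roots of a continuous event function g(t) on [t0, t1]:
        coarse stepping brackets each sign change, then Brent's method refines it."""
        roots, t = [], t0
        while t < t1:
            a, b = t, min(t + step, t1)
            if np.sign(g(a)) != np.sign(g(b)):
                roots.append(brentq(g, a, b))
            t = b
        return roots

    # Toy event function: a threshold crossing of a 90-minute sinusoid, standing in
    # for, e.g., an eclipse entry/exit condition along a trajectory.
    g = lambda t: np.sin(2 * np.pi * t / 5400.0) - 0.3
    print(find_events(g, 0.0, 10800.0, step=60.0))   # four crossings in two "orbits"
    ```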

  5. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.

  6. Delineation of Rupture Propagation of Large Earthquakes Using Source-Scanning Algorithm: A Control Study

    NASA Astrophysics Data System (ADS)

    Kao, H.; Shan, S.

    2004-12-01

    Determination of the rupture propagation of large earthquakes is important and of wide interest to the seismological research community. The conventional inversion method determines the distribution of slip at a grid of subfaults whose orientations are predefined. As a result, different choices of fault geometry and dimensions often result in different solutions. In this study, we try to reconstruct the rupture history of an earthquake using the newly developed Source-Scanning Algorithm (SSA) without imposing any a priori constraints on the fault's orientation and dimension. The SSA identifies the distribution of seismic sources in two steps. First, it calculates the theoretical arrival times from all grid points inside the model space to all seismic stations by assuming an origin time. Then, the absolute amplitudes of the observed waveforms at the predicted arrival times are added to give the "brightness" of each time-space pair, and the brightest spots mark the locations of sources. The propagation of the rupture is depicted by the migration of the brightest spots throughout a prescribed time window. A series of experiments are conducted to test the resolution of the SSA inversion. Contrary to the conventional wisdom that seismometers should be placed as close as possible to the fault trace to give the best resolution in delineating rupture details, we found that the best results are obtained if the seismograms are recorded at a distance about half of the total rupture length away from the fault trace. This is especially true when the rupture duration is longer than ~10 s. A possible explanation is that the geometric spreading effects for waveforms from different segments of the rupture are about the same if the stations are sufficiently away from the fault trace, thus giving a uniform resolution to the entire rupture history.
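
    A minimal brightness-function sketch of the two-step idea described above, under strong simplifying assumptions (2-D geometry, a homogeneous velocity, a single phase, impulsive synthetic arrivals): amplitudes are stacked at the predicted arrival times for every trial origin time and grid point, and the brightest pair marks the source. All numbers are invented.

    ```python
    import numpy as np

    def brightness(waveforms, fs, stations, grid, origin_times, v=3.5):
        """Source-Scanning-style brightness: stack absolute amplitudes at the
        predicted arrival times for each (grid point, origin time) pair."""
        br = np.zeros((len(grid), len(origin_times)))
        for j, src in enumerate(grid):
            tt = np.linalg.norm(stations - src, axis=1) / v          # travel times (s)
            for k, t0 in enumerate(origin_times):
                idx = np.round((t0 + tt) * fs).astype(int)
                ok = idx < waveforms.shape[1]
                br[j, k] = np.sum(np.abs(waveforms[ok, idx[ok]]))
        return br

    # Hypothetical test: 5 stations, 50 Hz traces, impulsive arrivals from (22, 16) km at t0 = 4 s.
    rng = np.random.default_rng(8)
    fs, v = 50.0, 3.5
    stations = rng.uniform(0, 40, size=(5, 2))
    true_src, true_t0 = np.array([22.0, 16.0]), 4.0
    waveforms = rng.normal(scale=0.1, size=(5, int(30 * fs)))
    arrivals = true_t0 + np.linalg.norm(stations - true_src, axis=1) / v
    for i, ta in enumerate(arrivals):
        waveforms[i, int(round(ta * fs))] += 3.0                     # plant the arrivals
    grid = np.array([[x, y] for x in np.arange(0, 40, 2.0) for y in np.arange(0, 40, 2.0)])
    origin_times = np.arange(0, 10, 0.5)
    br = brightness(waveforms, fs, stations, grid, origin_times)
    j, k = np.unravel_index(np.argmax(br), br.shape)
    print("brightest grid point:", grid[j], "origin time:", origin_times[k])
    ```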

  7. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  8. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    NASA Astrophysics Data System (ADS)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real-time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare predicted peak ground acceleration (PGA) from first-alert solutions with that recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.

  9. Links Between Earthquake Characteristics and Subducting Plate Heterogeneity in the 2016 Pedernales Ecuador Earthquake Rupture Zone

    NASA Astrophysics Data System (ADS)

    Bai, L.; Mori, J. J.

    2016-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxis. Here we investigate the focal depths, fault plane solutions, and source rupture process for three earthquake sequences, which are located at the western, central and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-deep earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes for the Himalaya orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma arc subduction zone is a typical oblique plate convergence zone. The eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depth less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  10. An improvement of the Earthworm Based Earthquake Alarm Reporting system in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, D. Y.; Hsiao, N. C.; Yih-Min, W.

    2017-12-01

    The Central Weather Bureau of Taiwan (CWB) operates the Earthworm Based Earthquake Alarm Reporting (eBEAR) system for the purpose of earthquake early warning (EEW). Since 2016, the system has been used to deliver EEW messages to the general public through text messages on mobile phones and through television programs. For inland earthquakes the system provides accurate and fast warnings: the average epicenter error is about 5 km and the processing time is about 15 seconds. The epicenter error is defined as the distance between the epicenter estimated by the EEW system and the manually determined epicenter. The processing time is defined as the time difference between the origin time of the earthquake and the time the system issued the warning. The CWB seismic network consists of about 200 seismic stations. In some areas of Taiwan the distance between neighboring seismic stations is about 10 km. This means that when an earthquake occurs, the P wave can reach 6 stations, the minimum number required by the EEW system, within about 20 km of propagation. If the data transmission latency is about 1 s, the P-wave velocity is about 6 km/s, and a 3-s time window is used to estimate the earthquake magnitude, then the processing time should be around 8 s. In practice, however, the average processing time is larger than this figure. Because outliers among the P-wave onset picks may exist at the beginning of an earthquake, the Geiger method used in the EEW system for earthquake location is not stable, and it usually takes additional time to wait for a sufficient number of good picks. In this study we used a grid-search method to improve the earthquake location estimates. The MAXEL algorithm (Sheen et al., 2015, 2016) was tested in the EEW system by replaying historical earthquakes that occurred in Taiwan. The results show that the processing time can be reduced and that the location accuracy is acceptable for EEW purposes.
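
    A generic grid-search sketch of why such an approach tolerates bad picks better than an iterative Geiger-type inversion: an L1 (absolute-residual) misfit with the origin time removed by the median is minimized over a grid of trial epicentres. This is not the MAXEL likelihood itself, and the station geometry, velocity and outlier pick are invented.

    ```python
    import numpy as np

    def grid_locate_l1(stations, t_obs, v=6.0, extent=100.0, dx=1.0):
        """Grid-search epicentre with an L1 misfit, which tolerates a single bad
        pick better than a least-squares (Geiger-type) inversion."""
        xs = np.arange(0.0, extent + dx, dx)
        best, best_cost = None, np.inf
        for x in xs:
            for y in xs:
                tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
                res = t_obs - tt
                res -= np.median(res)            # origin time removed via the median (robust)
                cost = np.sum(np.abs(res))
                if cost < best_cost:
                    best, best_cost = (x, y), cost
        return best, best_cost

    # Six stations, one badly picked arrival (+3 s outlier); true epicentre at (40, 55) km.
    rng = np.random.default_rng(9)
    stations = rng.uniform(0, 100, size=(6, 2))
    t_obs = 5.0 + np.hypot(*(stations - np.array([40.0, 55.0])).T) / 6.0
    t_obs[2] += 3.0
    print(grid_locate_l1(stations, t_obs))
    ```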

  11. Active accommodation of plate convergence in Southern Iran: Earthquake locations, triggered aseismic slip, and regional strain rates

    NASA Astrophysics Data System (ADS)

    Barnhart, William D.; Lohman, Rowena B.; Mellors, Robert J.

    2013-10-01

    We present a catalog of interferometric synthetic aperture radar (InSAR) constraints on deformation that occurred during earthquake sequences in southern Iran between 1992 and 2011, and explore the implications for the accommodation of large-scale continental convergence between Saudi Arabia and Eurasia within the Zagros Mountains. The Zagros Mountains, a salt-laden fold-and-thrust belt involving ~10 km of sedimentary rocks overlying Precambrian basement rocks, have formed as a result of ongoing continental collision since 10-20 Ma that is currently occurring at a rate of ~3 cm/yr. We first demonstrate that there is a systematic bias in earthquake locations in global catalogs that likely results from the neglect of 3-D velocity structure. Previous work involving two M ~ 6 earthquakes with well-recorded aftershocks has shown that the deformation observed with InSAR may represent triggered slip on faults much shallower than the primary earthquake, which likely occurred within the basement rocks (>10 km depth). We explore the hypothesis that most of the deformation observed with InSAR spanning earthquake sequences is also due to shallow, triggered slip above a deeper earthquake, effectively doubling the moment release for each event. We quantify the effects that this extra moment release would have on the discrepancy between seismically and geodetically constrained moment rates in the region, finding that even with the extra triggered fault slip, significant aseismic deformation during the interseismic period is necessary to fully explain the convergence between Eurasia and Saudi Arabia.

  12. Earthquake triggering in southeast Africa following the 2012 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Neves, Miguel; Custódio, Susana; Peng, Zhigang; Ayorinde, Adebayo

    2018-02-01

    In this paper we present evidence of earthquake dynamic triggering in southeast Africa. We analysed seismic waveforms recorded at 53 broad-band and short-period stations in order to identify possible increases in the rate of microearthquakes and tremor due to the passage of teleseismic waves generated by the Mw8.6 2012 Indian Ocean earthquake. We found evidence of triggered local earthquakes and no evidence of triggered tremor in the region. We assessed the statistical significance of the increase in the number of local earthquakes using β-statistics. Statistically significant dynamic triggering of local earthquakes was observed at 7 out of the 53 analysed stations. Two of these stations are located on the northeast coast of Madagascar and the other five stations are located in the Kaapvaal Craton, southern Africa. We found no evidence of dynamically triggered seismic activity at stations located near the structures of the East African Rift System. Hydrothermal activity exists close to the stations that recorded dynamic triggering; however, it also exists near the East African Rift System structures where no triggering was observed. Our results suggest that factors other than solely tectonic regime and geothermalism are needed to explain the mechanisms that underlie earthquake triggering.
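
    For reference, a commonly used form of the β-statistic compares the number of events observed in a test window with the count expected from the long-term rate. The short routine below follows that standard definition; the catalog times, window length, and the significance level mentioned in the comments are illustrative assumptions.

```python
import numpy as np

def beta_statistic(event_times, window_start, window_end, t_start, t_end):
    """Beta statistic for a seismicity-rate change in a test window.

    event_times : times of all catalogued events within [t_start, t_end]
    The test window is typically the interval following the arrival of the
    teleseismic surface waves; values of roughly beta > 2 are often read
    as a statistically significant rate increase.
    """
    event_times = np.asarray(event_times, dtype=float)
    n_total = event_times.size
    frac = (window_end - window_start) / (t_end - t_start)
    n_window = np.sum((event_times >= window_start) & (event_times < window_end))
    expected = n_total * frac
    variance = n_total * frac * (1.0 - frac)
    return (n_window - expected) / np.sqrt(variance)

# Hypothetical catalog: 40 background events over 10 days plus a burst of
# 6 events in the hour after the surface waves arrive at t = 5 days.
rng = np.random.default_rng(0)
times = np.concatenate([rng.uniform(0, 10, 40), 5 + rng.uniform(0, 1 / 24, 6)])
print(beta_statistic(times, 5.0, 5.0 + 1 / 24, 0.0, 10.0))
```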

  13. Characterizing the structural maturity of fault zones using high-resolution earthquake locations.

    NASA Astrophysics Data System (ADS)

    Perrin, C.; Waldhauser, F.; Scholz, C. H.

    2017-12-01

    We use high-resolution earthquake locations to characterize the three-dimensional structure of active faults in California and how it evolves with fault structural maturity. We investigate the distribution of aftershocks of several recent large earthquakes that occurred on immature faults (i.e., slow moving and small cumulative displacement), such as the 1992 (Mw7.3) Landers and 1999 (Mw7.1) Hector Mine events, and earthquakes that occurred on mature faults, such as the 1984 (Mw6.2) Morgan Hill and 2004 (Mw6.0) Parkfield events. Unlike previous studies, which typically estimated the width of fault zones from the distribution of earthquakes perpendicular to the surface fault trace, we resolve fault zone widths with respect to the 3D fault surface estimated from principal component analysis of local seismicity. We find that the zone of brittle deformation around the fault core is narrower along mature faults than along immature faults. We observe a rapid fall-off in the number of events at distances of 70-100 m from the main fault surface of mature faults (140-200 m fault zone width), and 200-300 m from the fault surface of immature faults (400-600 m fault zone width). These observations are in good agreement with fault zone widths estimated from guided waves trapped in low-velocity damage zones. The total width of the active zone of deformation surrounding the main fault plane reaches about 1.2 km for mature faults and 2-4 km for immature faults. The wider zone of deformation presumably reflects the increased heterogeneity in the stress field along the complex and discontinuous fault strands that make up immature faults. In contrast, narrower deformation zones tend to align with the well-defined fault planes of mature faults, where most of the deformation is concentrated. Our results are in line with previous studies suggesting that surface fault traces become smoother, and thus fault zones simpler, as cumulative fault slip increases.
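
    A minimal sketch of the underlying idea, assuming a simple synthetic hypocenter cloud: fit a plane to relocated hypocenters by principal component analysis (SVD of the centered cloud) and measure each event's perpendicular distance from that plane, from which a fault zone width could then be read off.

```python
import numpy as np

def fault_plane_from_seismicity(hypocenters):
    """Fit a plane to a hypocenter cloud via PCA (SVD of the centered cloud).

    hypocenters : (N, 3) array of x, y, z coordinates in km.
    Returns the centroid, the unit normal of the best-fit plane, and the
    signed perpendicular distance of each event from that plane.
    """
    centroid = hypocenters.mean(axis=0)
    centered = hypocenters - centroid
    # Rows of vt are the principal axes; the last one (smallest variance)
    # is the normal of the best-fit fault plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    distances = centered @ normal
    return centroid, normal, distances

# Synthetic aftershock cloud scattered ~100 m about a steeply dipping plane
rng = np.random.default_rng(1)
along_strike = np.array([1.0, 0.0, 0.0])
down_dip = np.array([0.0, 0.3, -1.0]) / np.linalg.norm([0.0, 0.3, -1.0])
pts = (rng.uniform(-5, 5, (500, 1)) * along_strike
       + rng.uniform(0, 8, (500, 1)) * down_dip
       + rng.normal(0.0, 0.1, (500, 1)) * np.cross(along_strike, down_dip))
centroid, normal, d = fault_plane_from_seismicity(pts)
print(normal, d.std())   # fault-normal scatter of about 0.1 km
```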

  14. Simulating Earthquake Early Warning Systems in the Classroom as a New Approach to Teaching Earthquakes

    NASA Astrophysics Data System (ADS)

    D'Alessio, M. A.

    2010-12-01

    A discussion of P- and S-waves seems a ubiquitous part of studying earthquakes in the classroom. Textbooks from middle school through university level typically define the differences between the waves and illustrate the sense of motion. While many students successfully memorize the differences between wave types (often utilizing the first letter as a memory aid), textbooks rarely give tangible examples of how the two waves would "feel" to a person sitting on the ground. One reason for introducing the wave types is to explain how to calculate earthquake epicenters using seismograms and travel time charts -- very abstract representations of earthquakes. Even when the skill is mastered using paper-and-pencil activities or one of the excellent online interactive versions, locating an epicenter simply does not excite many of our students because it evokes little emotional impact, even in students located in earthquake-prone areas. Despite these limitations, huge numbers of students are mandated to complete the task. At the K-12 level, California requires that all students be able to locate earthquake epicenters in Grade 6; in New York, the skill is a required part of the Regents Examination. Recent innovations in earthquake early warning systems around the globe give us the opportunity to address the same content standard, but with substantially more emotional impact on students. I outline a lesson about earthquakes focused on earthquake early warning systems. The introductory activities include video clips of actual earthquakes and emphasize the differences between the way P- and S-waves feel when they arrive (P arrives first, but is weaker). I include an introduction to the principle behind earthquake early warning (including a summary of possible uses of a few seconds of warning about strong shaking) and show examples from Japan. Students go outdoors to simulate P-waves, S-waves, and occupants of two different cities who are talking to one another on cell phones

  15. Accuracy of algorithms to predict accessory pathway location in children with Wolff-Parkinson-White syndrome.

    PubMed

    Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric

    2012-02-01

    The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.

  16. Delineating Concealed Faults within Cogdell Oil Field via Earthquake Detection

    NASA Astrophysics Data System (ADS)

    Aiken, C.; Walter, J. I.; Brudzinski, M.; Skoumal, R.; Savvaidis, A.; Frohlich, C.; Borgfeldt, T.; Dotray, P.

    2016-12-01

    Cogdell oil field, located within the Permian Basin of western Texas, has experienced several earthquakes ranging from magnitude 1.7 to 4.6, most of which were recorded since 2006. Using the EarthScope USArray, Gan and Frohlich [2013] relocated some of these events and found a positive correlation between the timing of increased earthquake activity and increased CO2 injection volume. However, focal depths of these earthquakes are unknown due to the ~70 km station spacing of the USArray. Accurate focal depths as well as new detections can delineate subsurface faults and establish whether earthquakes are occurring in the shallow sediments or in the deeper basement. To delineate subsurface faults in this region, we first detect earthquakes not currently listed in the USGS catalog by applying continuous waveform template-matching algorithms to multiple seismic data sets. We utilize seismic data spanning the time frame of 2006 to 2016, which includes data from the U.S. Geological Survey Global Seismographic Network, the USArray, and the Sweetwater, TX broadband and nodal array located 20-40 km away. The catalog of earthquakes enhanced by template matching reveals events that were well recorded by the large-N Sweetwater array, so we are experimenting with strategies for optimizing template matching using different configurations of many stations. Since earthquake activity in the Cogdell oil field is ongoing (a magnitude 2.6 event occurred on May 29, 2016), a temporary deployment of TexNet seismometers has been planned for the immediate vicinity of Cogdell oil field in August 2016. Results on focal depths and detection of small-magnitude events are pending this small local network deployment.
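
    The core of a waveform template-matching detector can be sketched in a few lines: slide a normalized cross-correlation of a short template along the continuous data and declare detections where the correlation exceeds a threshold, often expressed in multiples of the median absolute deviation. The sampling, threshold, and synthetic traces below are placeholders, not the configuration used in this study.

```python
import numpy as np

def match_template(continuous, template, threshold_mads=9.0):
    """Normalized cross-correlation of a template against continuous data.

    Returns the correlation trace and the sample indices where it exceeds
    threshold_mads times the median absolute deviation of the trace.
    """
    n = template.size
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(continuous.size - n + 1)
    for i in range(cc.size):
        window = continuous[i:i + n]
        std = window.std()
        cc[i] = 0.0 if std == 0 else np.dot(t, window - window.mean()) / std
    mad = np.median(np.abs(cc - np.median(cc)))
    detections = np.flatnonzero(cc > threshold_mads * mad)
    return cc, detections

# Hypothetical usage: embed the template twice in noise and recover both
rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)
data = rng.normal(0, 0.5, 20000)
data[3000:3200] += template
data[12000:12200] += 0.7 * template
cc, hits = match_template(data, template)
print(hits)   # clusters of samples near 3000 and 12000
```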

  17. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite-fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem enables improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that our method can capture the major features of large-earthquake rupture processes and provide information for more detailed rupture history analysis.
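
    A generic random-walk Metropolis sampler of the kind that could drive such a Bayesian sub-event inversion is sketched below. The log-posterior is supplied by the caller (here a stand-in Gaussian misfit), and the parameter layout (sub-event x, y, onset time, rupture velocity), step sizes, and priors are placeholder assumptions rather than the actual MHS implementation.

```python
import numpy as np

def metropolis(log_posterior, theta0, step_sizes, n_samples=20000, seed=0):
    """Random-walk Metropolis sampling of an arbitrary log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + rng.normal(0.0, step_sizes)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept / reject
            theta, logp = proposal, logp_prop
        samples[i] = theta
    return samples

# Stand-in posterior: Gaussian misfit around a "true" sub-event
# (x, y, onset time, rupture velocity) -- purely illustrative.
true_theta = np.array([10.0, -5.0, 3.0, 2.5])
widths = np.array([2.0, 2.0, 0.5, 0.3])
log_post = lambda th: -0.5 * np.sum(((th - true_theta) / widths) ** 2)
chain = metropolis(log_post, np.zeros(4), step_sizes=[1.0, 1.0, 0.2, 0.1])
print(chain[5000:].mean(axis=0))   # posterior mean after burn-in
```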

  18. OMG Earthquake! Can Twitter improve earthquake response?

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information

  19. USGS Tweet Earthquake Dispatch (@USGSted): Using Twitter for Earthquake Detection and Characterization

    NASA Astrophysics Data System (ADS)

    Liu, S. B.; Bouchard, B.; Bowden, D. C.; Guy, M.; Earle, P.

    2012-12-01

    The U.S. Geological Survey (USGS) is investigating how online social networking services like Twitter—a microblogging service for sending and reading public text-based messages of up to 140 characters—can augment USGS earthquake response products and the delivery of hazard information. The USGS Tweet Earthquake Dispatch (TED) system is using Twitter not only to broadcast seismically-verified earthquake alerts via the @USGSted and @USGSbigquakes Twitter accounts, but also to rapidly detect widely felt seismic events through a real-time detection system. The detector algorithm scans for significant increases in tweets containing the word "earthquake" or its equivalent in other languages and sends internal alerts with the detection time, tweet text, and the location of the city where most of the tweets originated. It has been running in real-time for 7 months and finds, on average, two or three felt events per day with a false detection rate of less than 10%. The detections have reasonable coverage of populated areas globally. The number of detections is small compared to the number of earthquakes detected seismically, and only a rough location and qualitative assessment of shaking can be determined based on Tweet data alone. However, the Twitter detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The main benefit of the tweet-based detections is speed, with most detections occurring between 19 seconds and 2 minutes from the origin time. This is considerably faster than seismic detections in poorly instrumented regions of the world. Going beyond the initial detection, the USGS is developing data mining techniques to continuously archive and analyze relevant tweets for additional details about the detected events. The information generated about an event is displayed on a web-based map designed using HTML5 for the mobile environment, which can be valuable when the user is not able to access a
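
    A toy version of such a keyword-rate detector is sketched below: count "earthquake"-keyword tweet times in a short sliding window and flag a detection when that count greatly exceeds the long-term background rate. The window lengths, factor, minimum count, and tweet stream are invented for illustration and are not the TED implementation.

```python
from collections import deque
from datetime import datetime, timedelta

def detect_spikes(tweet_times, short_window=timedelta(minutes=1),
                  long_window=timedelta(hours=1), factor=10.0, min_count=20):
    """Flag times at which the short-window keyword-tweet count exceeds the
    rate expected from the long window by `factor` (and a minimum count)."""
    short_q, long_q, detections = deque(), deque(), []
    for t in sorted(tweet_times):
        short_q.append(t)
        long_q.append(t)
        while t - short_q[0] > short_window:
            short_q.popleft()
        while t - long_q[0] > long_window:
            long_q.popleft()
        background = len(long_q) * (short_window / long_window)
        if len(short_q) >= min_count and len(short_q) > factor * max(background, 1.0):
            detections.append(t)
    return detections

# Hypothetical stream: sparse background chatter, then a burst after an event
base = datetime(2012, 1, 1, 12, 0, 0)
times = [base + timedelta(minutes=7 * i) for i in range(60)]               # background
times += [base + timedelta(hours=4, seconds=2 * i) for i in range(150)]    # burst
print(detect_spikes(times)[0])   # first detection, shortly after the burst begins
```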

  20. Location of early aftershocks of the 2004 Mid-Niigata Prefecture Earthquake (M = 6.8) in central Japan using seismogram envelopes as templates

    NASA Astrophysics Data System (ADS)

    Kosuga, M.

    2013-12-01

    The location of early aftershocks is very important for obtaining information on the mainshock fault; however, it is often difficult due to the long-lasting coda waves of the mainshock and the successive occurrence of aftershocks. To overcome this difficulty, we developed a location method that uses seismogram envelopes as templates, and applied it to the early aftershock sequence of the 2004 Mid-Niigata Prefecture (Chuetsu) Earthquake (M = 6.8) in central Japan. The location method consists of three steps. The first step is the calculation of cross-correlation coefficients between the continuous (target) envelope and the template envelopes. We prepare envelopes by taking the logarithm of the root-mean-squared amplitude of band-pass filtered seismograms. We perform the calculation by shifting the time window to obtain a set of cross-correlation values for each template. The second step is event detection (template selection) and magnitude estimation. We search for events in descending order of cross-correlation in a time window, excluding the dead times around previously detected events. Magnitude is calculated from the amplitude ratio of the target and template envelopes. The third step is locating the event relative to the selected template. We applied this method to the Chuetsu earthquake, a large inland earthquake with extensive aftershock activity. The number of detected events depends on the number of templates, the frequency range, and the threshold value of cross-correlation. We set the threshold to 0.5 by referring to the histogram of cross-correlation values. During the first hour after the mainshock, we could detect more events than are listed in the JMA catalog. The locations of the detected events are generally close to the catalog locations. Though the methods of relative location and magnitude estimation should be improved, we conclude that the proposed method works adequately even immediately after the mainshock of a large inland earthquake. Acknowledgement: We thank JMA, NIED, and the University of Tokyo for
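
    A compact sketch of the envelope-correlation step is given below, assuming placeholder choices for the filter band, bin length, and synthetic data: band-pass filter the trace, form the log RMS envelope in one-second bins, and slide a template envelope along the continuous envelope to obtain a correlation series whose peaks mark candidate detections.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def log_rms_envelope(trace, fs, fmin=4.0, fmax=16.0, bin_sec=1.0):
    """Band-pass filter a trace and return the log10 RMS amplitude in
    consecutive non-overlapping bins of bin_sec seconds."""
    sos = butter(4, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trace)
    n_bin = int(bin_sec * fs)
    n = filtered.size // n_bin
    rms = np.sqrt(np.mean(filtered[:n * n_bin].reshape(n, n_bin) ** 2, axis=1))
    return np.log10(rms + 1e-12)

def envelope_correlation(cont_env, tmpl_env):
    """Correlation coefficient of a template envelope at every lag along a
    continuous envelope (the 'scanning' step of the detection)."""
    m = tmpl_env.size
    t = (tmpl_env - tmpl_env.mean()) / tmpl_env.std()
    cc = np.empty(cont_env.size - m + 1)
    for i in range(cc.size):
        w = cont_env[i:i + m]
        cc[i] = np.dot(t, w - w.mean()) / (w.std() * m)
    return cc

# Hypothetical usage with a synthetic 100 Hz trace: one hour of noise with a
# 20 s decaying template signal buried at t = 1800 s.
fs = 100.0
rng = np.random.default_rng(7)
cont = rng.normal(size=int(3600 * fs))
tmpl_sig = rng.normal(size=int(20 * fs)) * np.exp(-np.arange(int(20 * fs)) / 400.0)
cont[180000:182000] += 3.0 * tmpl_sig
cc = envelope_correlation(log_rms_envelope(cont, fs), log_rms_envelope(tmpl_sig, fs))
print(np.argmax(cc))   # ~1800 (lag measured in 1 s envelope bins)
```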

  1. A revised “earthquake report” questionnaire

    USGS Publications Warehouse

    Stover, C.; Reagor, G.; Simon, R.

    1976-01-01

    The U.S. Geological Survey is responsible for conducting intensity and damage surveys following felt or destructive earthquakes in the United States. Shortly after a felt or damaging earthquake occurs, a canvass of the affected area is made. Specially developed questionnaires are mailed to volunteer observers located within the estimated felt area. These questionnaires, "Earthquake Reports," are filled out by the observers and returned to the Survey's National Earthquake Information Service, which is located in Colorado. They are then evaluated and, based on answers to questions about physical effects seen or felt, each canvassed location is assigned an intensity. After intensities have been assigned to the various locations, they are plotted on an intensity distribution map. When all of the intensity data have been plotted, isoseismals can then be contoured through places where equal intensity was experienced. The completed isoseismal map yields a detailed picture of the earthquake, its effects, and its felt area. All of the data and maps are published quarterly in a U.S. Geological Survey Circular series entitled "Earthquakes in the United States".

  2. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters, causing heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has therefore become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the present state of the process is controlled by the past events themselves (self-exciting) and by external factors in the past (mutually exciting). In essence, the conditional intensity λ(t) of a time-varying Poisson process is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
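
    In generic notation (the exact kernels in Ogata and Utsu's formulation may differ), such a combined conditional intensity can be written as shown below, where μ is the background rate, g is the self-exciting kernel acting on past earthquake times t_i, and h is the external-excitation kernel acting on past non-seismic observation times τ_j.

```latex
\lambda(t) \;=\; \mu \;+\; \sum_{i:\, t_i < t} g\left(t - t_i\right) \;+\; \sum_{j:\, \tau_j < t} h\left(t - \tau_j\right)
```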

  3. High resolution strain sensor for earthquake precursor observation and earthquake monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Wentao; Huang, Wenzhu; Li, Li; Liu, Wenyi; Li, Fang

    2016-05-01

    We propose a high-resolution static-strain sensor based on an FBG Fabry-Perot interferometer (FBG-FP) and a wavelet-domain cross-correlation algorithm. This sensor is used for crustal deformation measurement, which plays an important role in earthquake precursor observation. The Pound-Drever-Hall (PDH) technique, based on a narrow-linewidth tunable fiber laser, is used to interrogate the FBG-FPs. A demodulation algorithm based on wavelet-domain cross-correlation is used to calculate the wavelength difference. The FBG-FP sensor head is fixed on two steel alloy rods installed in the bedrock. A reference FBG-FP is placed nearby in a strain-free state to compensate for environmental temperature fluctuations. A static-strain resolution of 1.6 nε can be achieved. As a result, clear solid-tide signals and seismic signals can be recorded, which suggests that the proposed strain sensor can be applied to earthquake precursor observation and earthquake monitoring.

  4. Prompt identification of tsunamigenic earthquakes from 3-component seismic data

    NASA Astrophysics Data System (ADS)

    Kundu, Ajit; Bhadauria, Y. S.; Basu, S.; Mukhopadhyay, S.

    2016-10-01

    An Artificial Neural Network (ANN) based algorithm for prompt identification of shallow-focus (depth < 70 km) tsunamigenic earthquakes at regional distances is proposed in this paper. Promptness here refers to decision making as fast as 5 min after the arrival of the LR phase in the seismogram. The root-mean-square amplitudes of seismic phases recorded by a single 3-component station are used as inputs, in addition to location and magnitude. The trained ANN has been found to categorize 100% of the new earthquakes successfully as tsunamigenic or non-tsunamigenic. The proposed method has been corroborated by an alternative technique for estimating the earthquake category. This second method involves computation of focal parameters, estimation of the water volume displaced at the source, and finally deciding the category of the earthquake. It has been found to identify 95% of the new earthquakes successfully. Both methods have been tested using three-component broad-band seismic data recorded at the PALK (Pallekele, Sri Lanka) station, provided by IRIS, for earthquakes of magnitude 6 and above originating from the Sumatra region. The fair agreement between the methods indicates that a prompt alert system could be developed based on the proposed approach. The method would be extremely useful for regions that are not adequately instrumented for azimuthal coverage.
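
    A minimal stand-in for such a classifier, using scikit-learn's MLPClassifier on a feature vector of phase amplitudes plus source parameters, is sketched below; the feature layout, network size, labels, and training data are all invented for illustration and do not reproduce the trained network described in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature vector per event:
# [rms_P, rms_S, rms_LR, depth_km, magnitude, epicentral_distance_deg]
rng = np.random.default_rng(3)
n = 400
depth = rng.uniform(5, 150, n)
mag = rng.uniform(6.0, 9.0, n)
amp = np.exp(rng.normal(mag - 6.0, 0.3))            # crude amplitude proxy
X = np.column_stack([amp, 0.8 * amp, 1.5 * amp, depth, mag, rng.uniform(20, 60, n)])
# Toy labelling rule: shallow and large events are called "tsunamigenic"
y = ((depth < 70) & (mag > 7.0)).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X[:300], y[:300])
print("hold-out accuracy:", clf.score(X[300:], y[300:]))
```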

  5. Earthquake activity along the Himalayan orogenic belt

    NASA Astrophysics Data System (ADS)

    Bai, L.; Mori, J. J.

    2017-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture process for three earthquake sequences, which are located at the western, central and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes of the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary changes abruptly from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma Arc subduction zone is a typical oblique plate convergence zone. Its eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  6. Widespread Triggering of Earthquakes in the Central US by the 2011 M9.0 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Rubinstein, J. L.; Savage, H. M.

    2011-12-01

    The strong shaking of the 2011 M9.0 Tohoku-Oki earthquake triggered tectonic tremor and earthquakes in many locations around the world. We analyze broadband records from the USArray to identify triggered seismicity in more than 10 different locations in the central United States. We identify triggered events in many states, including Kansas, Nebraska, Arkansas, Minnesota, and Iowa. The locally triggered earthquakes are obscured in broadband records by the Tohoku-Oki mainshock but can be revealed with high-pass filtering. With the exception of one location (central Arkansas), the triggered seismicity occurred in regions that are seismically quiet. The coincidence of this seismicity with the Tohoku-Oki event suggests that these earthquakes were triggered. The triggered seismicity in Arkansas occurred in a region where there has been an active swarm of seismicity since August 2010. Two lines of evidence indicate that the seismicity in Arkansas was triggered rather than simply being part of the swarm: (1) we observe two earthquakes that initiate coincident with the arrivals of the shear and Love waves; (2) the seismicity rate increased dramatically following the Tohoku-Oki mainshock. Our observations of widespread earthquake triggering in regions thought to be seismically quiet remind us that earthquakes can occur in almost any location. Studying additional teleseismic events has the potential to reveal regions with a propensity for earthquake triggering.
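
    The high-pass filtering step that reveals local triggered events beneath the long-period teleseismic wavetrain can be reproduced with a few lines of SciPy; the 5 Hz corner, 100 Hz sampling rate, and synthetic trace below are illustrative choices, not the processing parameters used in this study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def highpass(trace, fs, corner=5.0, order=4):
    """Zero-phase Butterworth high-pass filter."""
    sos = butter(order, corner, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

# Synthetic example: a large 0.05 Hz teleseismic surface wave hiding a small
# 10 Hz burst (the "locally triggered" event) starting at t = 60 s.
fs = 100.0
t = np.arange(0.0, 120.0, 1.0 / fs)
trace = 1000.0 * np.sin(2 * np.pi * 0.05 * t)
trace[6000:6100] += 5.0 * np.sin(2 * np.pi * 10.0 * t[:100])
filtered = highpass(trace, fs)
# The burst dominates the filtered record while the surface wave is suppressed
print(np.abs(filtered[6000:6100]).max(), np.abs(filtered[:5000]).max())
```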

  7. Characterization of the Virginia earthquake effects and source parameters from website traffic analysis

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Lefebvre, S.; Mazet-Roux, G.; Roussel, F.

    2012-12-01

    This paper presents an after-the-fact study of the 2011 August 23 Virginia earthquake using only the traffic observed on the EMSC website within minutes of its occurrence. Although the EMSC real-time information services remain poorly known in the US, a traffic surge was observed immediately after the earthquake's occurrence. Such surges, known as flashcrowds and commonly observed on our website after felt events within the Euro-Med region, are caused by eyewitnesses looking for information about the shaking they have just felt. EMSC developed an approach named flashsourcing to map the felt area and, in some circumstances, the regions affected by severe damage or network disruption. The felt area is mapped simply by locating the Internet Protocol (IP) addresses of the visitors to the website during these surges, while the existence of network disruption is detected through the instantaneous loss, at the time of the earthquake's occurrence, of existing Internet sessions originating from the impacted area. For the Virginia earthquake, which was felt at large distances, the effects of wave propagation are clearly observed. We show that the visits to our website are triggered by the P-wave arrival: the first visitors from a given locality reach our website 90 s after their location was shaken by the P waves. From a processing point of view, eyewitnesses can then be considered as ground motion detectors. By doing so, the epicentral location is determined through a simple dedicated location algorithm within 2 min of the earthquake's occurrence and with an accuracy of about 30 km. The magnitude can be estimated in a similar time frame by using existing empirical relationships between the surface area of the felt region and the magnitude. Concerning the effects of the earthquake, we check whether one can discriminate localities affected by strong shaking from web traffic analysis. This is actually the case. Localities affected by strong levels of shaking exhibit a higher ratio of visitors to the number

  8. Earthquake classification, location, and error analysis in a volcanic environment: implications for the magmatic system of the 1989-1990 eruptions at redoubt volcano, Alaska

    USGS Publications Warehouse

    Lahr, J.C.; Chouet, B.A.; Stephens, C.D.; Power, J.A.; Page, R.A.

    1994-01-01

    Determination of the precise locations of seismic events associated with the 1989-1990 eruptions of Redoubt Volcano posed a number of problems, including poorly known crustal velocities, a sparse station distribution, and an abundance of events with emergent phase onsets. In addition, the high relief of the volcano could not be incorporated into the hypoellipse earthquake location algorithm. This algorithm was modified to allow hypocenters to be located above the elevation of the seismic stations. The velocity model was calibrated on the basis of a posteruptive seismic survey, in which four chemical explosions were recorded by eight stations of the permanent network supplemented with 20 temporary seismographs deployed on and around the volcanic edifice. The model consists of a stack of homogeneous horizontal layers; setting the top of the model at the summit allows events to be located anywhere within the volcanic edifice. Detailed analysis of hypocentral errors shows that the long-period (LP) events constituting the vigorous 23-hour swarm that preceded the initial eruption on December 14 could have originated from a point 1.4 km below the crater floor. A similar analysis of LP events in the swarm preceding the major eruption on January 2 shows they also could have originated from a point, the location of which is shifted 0.8 km northwest and 0.7 km deeper than the source of the initial swarm. We suggest this shift in LP activity reflects a northward jump in the pathway for magmatic gases caused by the sealing of the initial pathway by magma extrusion during the last half of December. Volcano-tectonic (VT) earthquakes did not occur until after the initial 23-hour-long swarm. They began slowly just below the LP source and their rate of occurrence increased after the eruption of 01:52 AST on December 15, when they shifted to depths of 6 to 10 km. After January 2 the VT activity migrated gradually northward; this migration suggests northward propagating withdrawal of

  9. A firefly algorithm for solving competitive location-design problem: a case study

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Ashtiani, Milad Gorji; Ramezanian, Reza; Makui, Ahmad

    2016-12-01

    This paper aims to determine the optimal number of new facilities, as well as their optimal locations and design levels, under a budget constraint in a competitive environment, using a novel hybrid continuous and discrete firefly algorithm. A real-world application of locating new chain stores in the city of Tehran, Iran, is used and the results are analyzed. In addition, several examples have been solved to evaluate the efficiency of the proposed model and algorithm. The results demonstrate that the proposed method provides good-quality solutions for the test problems.
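
    For readers unfamiliar with the metaheuristic, a bare-bones continuous firefly algorithm is sketched below, minimizing a simple test function; the attractiveness and randomization parameters are generic textbook-style values, not those tuned for the competitive location-design model in the paper.

```python
import numpy as np

def firefly_minimize(objective, bounds, n_fireflies=25, n_iter=200,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal continuous firefly algorithm (Yang-style update rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_fireflies, dim))
    f = np.array([objective(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                      # firefly j is brighter (better)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = objective(x[i])
        alpha *= 0.98                                # slowly damp the random walk
    best = np.argmin(f)
    return x[best], f[best]

# Toy usage: minimize a 2-D quadratic with optimum at (3, -2)
obj = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2
print(firefly_minimize(obj, bounds=[(-10, 10), (-10, 10)]))
```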

  10. Rapid Tsunami Inundation Forecast from Near-field or Far-field Earthquakes using Pre-computed Tsunami Database: Pelabuhan Ratu, Indonesia

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Setiyono, U.; Satake, K.; Fujii, Y.

    2017-12-01

    We built a pre-computed tsunami inundation database for Pelabuhan Ratu, one of the tsunami-prone areas on the southern coast of Java, Indonesia. The database can be employed for a rapid estimation of tsunami inundation during an event. The pre-computed tsunami waveforms and inundations come from a total of 340 scenarios ranging from Mw 7.5 to 9.2, including simple fault models of 208 thrust faults and 44 tsunami earthquakes on the plate interface, as well as 44 normal faults and 44 reverse faults in the outer-rise region. Using our tsunami inundation forecasting algorithm (NearTIF), we could rapidly estimate the tsunami inundation at Pelabuhan Ratu for three different hypothetical earthquakes. The first hypothetical earthquake is a megathrust-type earthquake (Mw 9.0) offshore Sumatra, about 600 km from Pelabuhan Ratu, representing a worst-case far-field event. The second hypothetical earthquake (Mw 8.5) is based on a slip-deficit rate estimated from geodetic measurements and represents the most likely large event near Pelabuhan Ratu. The third hypothetical earthquake is a tsunami-earthquake type (Mw 8.1), which often occurs south of Java. We compared the tsunami inundation maps produced by the NearTIF algorithm with the results of direct forward inundation modeling for the hypothetical earthquakes. The tsunami inundation maps produced by the two methods are similar for all three cases. However, the inundation map from the pre-computed database can be obtained in a much shorter time (about 1 min) than one from forward inundation modeling (about 40 min). These results indicate that the NearTIF algorithm based on a pre-computed inundation database is reliable and useful for tsunami warning purposes. This study also demonstrates that the NearTIF algorithm can work well even when the earthquake source is located outside the area of the fault-model database, because it uses a time-shifting procedure in the search for the best-fit scenario.
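
    The scenario-selection step can be illustrated with a small brute-force search: compare an observed waveform against each pre-computed scenario waveform over a range of time shifts and keep the combination with the smallest RMS misfit. The waveforms, shift range, and sampling below are placeholders, not the NearTIF implementation.

```python
import numpy as np

def shift_zero_pad(y, s):
    """Shift y by s samples, padding with zeros (positive s delays the waveform)."""
    out = np.zeros_like(y)
    if s >= 0:
        out[s:] = y[:y.size - s]
    else:
        out[:s] = y[-s:]
    return out

def best_fit_scenario(observed, scenario_waveforms, max_shift):
    """Return (scenario index, time shift in samples, rms misfit) of the
    pre-computed scenario that best matches the observed waveform."""
    best = (None, 0, np.inf)
    for k, syn in enumerate(scenario_waveforms):
        for s in range(-max_shift, max_shift + 1):
            rms = np.sqrt(np.mean((observed - shift_zero_pad(syn, s)) ** 2))
            if rms < best[2]:
                best = (k, s, rms)
    return best

# Hypothetical database of three scenario waveforms; the "observation" is
# scenario 1 delayed by 12 samples plus noise.
rng = np.random.default_rng(8)
t = np.linspace(0, 60, 600)
scenarios = [np.sin(2 * np.pi * f * t) * np.exp(-t / 30) for f in (0.05, 0.08, 0.12)]
observed = shift_zero_pad(scenarios[1], 12) + 0.05 * rng.normal(size=t.size)
print(best_fit_scenario(observed, scenarios, max_shift=30))   # -> (1, 12, ...)
```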

  11. Discussion of New Approaches to Medium-Short-Term Earthquake Forecast in Practice of The Earthquake Prediction in Yunnan

    NASA Astrophysics Data System (ADS)

    Hong, F.

    2017-12-01

    After years of practice of earthquake prediction in the Yunnan area, it is widely considered that fixed-point precursory anomalies mainly reflect information about the field. An increase in the amplitude and number of precursory anomalies can help determine the origin time of earthquakes; however, it is difficult to establish a spatial relationship between earthquakes and precursory anomalies, so precursory anomalies can hardly be used to predict the locations of earthquakes. Past practice has shown that seismic activity is superior to precursory anomalies in predicting earthquake locations, since increased seismicity was observed before 80% of the M ≥ 6.0 earthquakes in the Yunnan area. Mobile geomagnetic anomalies have also turned out to be helpful in predicting earthquake locations in recent years; for instance, the occurrence time and area forecast from the 1-year-scale geomagnetic anomalies before the 2014 M6.5 Ludian earthquake were shorter and smaller than those derived from the seismicity enhancement region. Based on this past work, the author believes that the level of medium-short-term earthquake forecasting, as well as the objective understanding of seismogenic mechanisms, could be substantially improved by densely deploying observation arrays and capturing the dynamic process of physical-property changes in the enhancement regions of medium to small earthquakes.

  12. A PC-based computer package for automatic detection and location of earthquakes: Application to a seismic network in eastern Sicily (Italy)

    NASA Astrophysics Data System (ADS)

    Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano

    A few automated data acquisition and processing systems operate on mainframes, some run on UNIX-based workstations, and others run on personal computers equipped with either DOS/WINDOWS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and its real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure, MSA (multi-station analysis), for signal detection, phase grouping, and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the telemetered analog seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded by the present network at Mt. Etna during the last eruption (July 2001). For the former data set, a comparison of the

  13. Efficient Location Uncertainty Treatment for Probabilistic Modelling of Portfolio Loss from Earthquake Events

    NASA Astrophysics Data System (ADS)

    Scheingraber, Christoph; Käser, Martin; Allmann, Alexander

    2017-04-01

    Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention. However, so far, epistemic location uncertainty has not been the focus of much research. We propose a new framework for efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios are systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or the proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte-Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.

  14. One dimensional P wave velocity structure of the crust beneath west Java and accurate hypocentre locations from local earthquake inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supardiyono; Santosa, Bagus Jaya; Physics Department, Faculty of Mathematics and Natural Sciences, Sepuluh Nopember Institute of Technology, Surabaya

    A one-dimensional (1-D) velocity model and station corrections for the West Java zone were computed by inverting P-wave arrival times recorded on a local seismic network of 14 stations. A total of 61 local events with a minimum of 6 P-phases, an rms of 0.56 s, and a maximum gap of 299° were selected. Comparison with previous earthquake locations shows an improvement for the relocated earthquakes. Tests were carried out to verify the robustness of the inversion results in order to corroborate the conclusions drawn from our research. The obtained minimum 1-D velocity model can be used to improve routine earthquake locations and represents a further step toward more detailed seismotectonic studies in this area of West Java.

  15. An efficient repeating signal detector to investigate earthquake swarms

    NASA Astrophysics Data System (ADS)

    Skoumal, Robert J.; Brudzinski, Michael R.; Currie, Brian S.

    2016-08-01

    Repetitive earthquake swarms have been recognized as key signatures in fluid-injection-induced seismicity, precursors to volcanic eruptions, and slow slip events preceding megathrust earthquakes. We investigate earthquake swarms by developing a Repeating Signal Detector (RSD), a computationally efficient algorithm utilizing agglomerative clustering to identify similar waveforms buried in years of seismic recordings using a single seismometer. Instead of relying on existing earthquake catalogs of larger earthquakes, RSD identifies characteristic repetitive waveforms by rapidly identifying signals of interest above a low signal-to-noise ratio and then grouping them based on spectral and time-domain characteristics, resulting in dramatically shorter processing time than more exhaustive autocorrelation approaches. We investigate seismicity in four regions using RSD: (1) volcanic seismicity at Mammoth Mountain, California, (2) subduction-related seismicity in Oaxaca, Mexico, (3) induced seismicity in Central Alberta, Canada, and (4) induced seismicity in Harrison County, Ohio. In each case, RSD detects a similar or larger number of earthquakes than existing catalogs created using more time-intensive methods. In Harrison County, RSD identifies 18 seismic sequences that correlate temporally and spatially with separate hydraulic fracturing operations, 15 of which were previously unreported. RSD utilizes a single seismometer for earthquake detection, which enables seismicity to be quickly identified in poorly instrumented regions, at the expense of relying on another method to locate the new detections. Due to its small computational overhead and success at distances up to ~50 km, RSD is well suited for real-time detection of low-magnitude earthquake swarms with permanent regional networks.
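
    The clustering stage can be mimicked with SciPy's hierarchical clustering on a correlation-derived distance matrix, as sketched below; the synthetic waveform families, the "average" linkage, and the distance cutoff are illustrative choices rather than RSD's actual detection parameters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_waveforms(waveforms, max_distance=0.3):
    """Group waveforms whose correlation distance (1 - |cc|) is small.

    waveforms : (N, M) array of N equal-length detections.
    Returns an array of N integer cluster labels."""
    w = waveforms - waveforms.mean(axis=1, keepdims=True)
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    cc = np.clip(w @ w.T, -1.0, 1.0)          # pairwise correlation coefficients
    dist = 1.0 - np.abs(cc)
    np.fill_diagonal(dist, 0.0)
    links = linkage(squareform(dist, checks=False), method="average")
    return fcluster(links, t=max_distance, criterion="distance")

# Two synthetic repeating families plus a few uncorrelated noise traces
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 400)
fam1 = np.sin(2 * np.pi * 7 * t) * np.hanning(400)
fam2 = np.sin(2 * np.pi * 13 * t + 1.0) * np.hanning(400)
traces = np.vstack([fam1 + 0.1 * rng.normal(size=(5, 400)),
                    fam2 + 0.1 * rng.normal(size=(5, 400)),
                    rng.normal(size=(3, 400))])
print(cluster_waveforms(traces))   # two repeating families, noise left separate
```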

  16. The 7.9 Denali Fault Earthquake: Aftershock Locations, Moment Tensors and Focal Mechanisms from the Regional Seismic Network Data

    NASA Astrophysics Data System (ADS)

    Ratchkovski, N. A.; Hansen, R. A.; Christensen, D.; Kore, K.

    2002-12-01

    The largest earthquake ever recorded on the Denali fault system (magnitude 7.9) struck central Alaska on November 3, 2002. It was preceded by a magnitude 6.7 foreshock on October 23. This earlier earthquake and its zone of aftershocks were located slightly to the west of the 7.9 quake. Aftershock locations and surface slip observations from the 7.9 quake indicate that the rupture was predominantly unilateral in the eastward direction. Near Mentasta Lake, a village that experienced some of the worst damage in the quake, the surface rupture turns from the Denali fault onto the adjacent Totschunda fault, which trends more southeasterly toward the Canadian border. Overall, geologists found that measurable scarps indicate that the north side of the Denali fault moved to the east and vertically up relative to the south side. Maximum offsets were 8.8 meters on the Denali fault at the Tok Highway cutoff and 2.2 meters on the Totschunda fault. The Alaska regional seismic network consists of over 250 station sites operated by the Alaska Earthquake Information Center (AEIC), the Alaska Volcano Observatory (AVO), and the Pacific Tsunami Warning Center (PTWC). Over 25 sites are equipped with broad-band sensors, some of which also have strong-motion sensors. The rest of the stations are either 1- or 3-component short-period instruments. The data from these stations are collected, processed, and archived at the AEIC. The AEIC staff installed a temporary network of over 20 instruments following the 6.7 Nenana Mountain and the 7.9 events. Prior to the M 7.9 Denali Fault event, the automatic earthquake detection system at AEIC was locating between 15 and 30 events per day. After the event, the system produced 200-400 automatic locations per day for at least 10 days following the 7.9 event. Processing of the data is ongoing, with priority given to the larger events. The cumulative length of the 6.7 and 7.9 aftershock locations along the Denali

  17. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the M = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
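
    In this setting each spatial cell either does or does not receive a forecast and either does or does not host a subsequent large earthquake, so verification reduces to a 2x2 contingency table; a minimal helper that builds the table and the corresponding ROC point (hit rate versus false-alarm rate) is sketched below using made-up forecast scores and observations.

```python
import numpy as np

def contingency(forecast_mask, observed_mask):
    """2x2 contingency counts and the ROC point for binary forecasts.

    forecast_mask : boolean array, True where an event is forecast
    observed_mask : boolean array, True where an event actually occurred"""
    a = np.sum(forecast_mask & observed_mask)        # hits
    b = np.sum(forecast_mask & ~observed_mask)       # false alarms
    c = np.sum(~forecast_mask & observed_mask)       # misses
    d = np.sum(~forecast_mask & ~observed_mask)      # correct negatives
    hit_rate = a / (a + c) if a + c else np.nan
    false_alarm_rate = b / (b + d) if b + d else np.nan
    return (a, b, c, d), hit_rate, false_alarm_rate

# Hypothetical 50x50 grid of forecast cells
rng = np.random.default_rng(5)
scores = rng.random((50, 50))                        # e.g. PI or RI hotspot scores
observed = rng.random((50, 50)) < 0.02               # cells with later large events
table, hr, far = contingency(scores > 0.9, observed)
print(table, hr, far)
# Sweeping the score threshold traces out the full ROC curve.
```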

  18. Earthquakes of Garhwal Himalaya region of NW Himalaya, India: A study of relocated earthquakes and their seismogenic source and stress

    NASA Astrophysics Data System (ADS)

    R, A. P.; Paul, A.; Singh, S.

    2017-12-01

    Since the continent-continent collision began at ~55 Ma, the Himalaya has accommodated about 2000 km of convergence along its arc. Strain energy accumulates as the plates converge at 37-44 mm/yr and is released from time to time as earthquakes. The Garhwal Himalaya is located on the western side of a seismic gap where a great earthquake has been overdue for at least 200 years. This seismic gap (the Central Seismic Gap, CSG), with a 52% probability of a future great earthquake, lies between the rupture zones of two significant/great earthquakes, viz. the 1905 Kangra earthquake of M 7.8 and the 1934 Bihar-Nepal earthquake of M 8.0; the most recent large event, the 2015 Gorkha earthquake of M 7.8, occurred on the eastern side of this seismic gap (CSG). The Garhwal Himalaya is one of the ideal locations in the Himalaya where all the major Himalayan structures and the Himalayan Seismicity Belt (HSB) can be described and studied. In the present study, we present a spatio-temporal analysis of relocated local micro- to moderate earthquakes recorded by a seismicity monitoring network that has been operational since 2007. The earthquakes are relocated using the HypoDD (double-difference hypocenter relocation) program. The dataset from July 2007 to September 2015 has been used in this study to estimate spatio-temporal relationships, moment tensor (MT) solutions for earthquakes of M > 3.0, stress tensors, and their interactions. We have also used composite focal mechanism solutions for small earthquakes. The majority of the MT solutions show thrust-type mechanisms and are located near the mid-crustal ramp (MCR) structure of the detachment surface at 8-15 km depth beneath the outer Lesser Himalaya and Higher Himalaya regions. The prevailing stress is compressional, oriented NNE-SSW, which is the direction of relative plate motion between the Indian and Eurasian continental plates. The low friction coefficient estimated along with the stress inversions
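
    For reference, the double-difference residual minimized in HypoDD-style relocation can be written in standard notation as shown below, where t_k^i and t_k^j are the arrival times of events i and j at station k; minimizing these residuals over many event pairs constrains the relative locations of nearby hypocenters while largely cancelling common path and velocity-model errors.

```latex
dr_k^{ij} \;=\; \left(t_k^i - t_k^j\right)^{\mathrm{obs}} \;-\; \left(t_k^i - t_k^j\right)^{\mathrm{cal}}
```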

  19. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the
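
    At its core the CMT step is a linear least-squares problem, d = G m, relating the observed very long period waveforms d to the moment tensor elements m through precomputed Green's functions G. A toy numpy version of that step is sketched below with random stand-in Green's functions; it is meant only to show the algebra, not the W phase processing chain.

```python
import numpy as np

def invert_moment_tensor(d, G):
    """Least-squares moment tensor from stacked waveform samples d (n,)
    and Green's functions G (n, 6), one column per moment tensor element."""
    m, _, _, _ = np.linalg.lstsq(G, d, rcond=None)
    return m

# Stand-in problem: six "true" MT elements, random Green's functions, noisy data
rng = np.random.default_rng(6)
m_true = np.array([1.2, -0.8, -0.4, 0.3, 0.0, 0.5])   # Mrr, Mtt, Mpp, Mrt, Mrp, Mtp
G = rng.normal(size=(5000, 6))
d = G @ m_true + 0.05 * rng.normal(size=5000)
print(invert_moment_tensor(d, G))                      # recovers m_true closely
```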

  20. Constraining the source location of the 30 May 2015 (Mw 7.9) Bonin deep-focus earthquake using seismogram envelopes of high-frequency P waveforms: Occurrence of deep-focus earthquake at the bottom of a subducting slab

    NASA Astrophysics Data System (ADS)

    Takemura, Shunsuke; Maeda, Takuto; Furumura, Takashi; Obara, Kazushige

    2016-05-01

    In this study, the source location of the 30 May 2015 (Mw 7.9) deep-focus Bonin earthquake was constrained using P wave seismograms recorded across Japan. We focus on the propagation characteristics of high-frequency P waves. Deep-focus intraslab earthquakes typically show spindle-shaped seismogram envelopes with peak delays of several seconds and subsequent long-duration coda waves; however, both the mainshock and aftershock of the 2015 Bonin event exhibited pulse-like P wave propagation with high apparent velocities (~12.2 km/s). Such P wave propagation features were reproduced by finite-difference method simulations of seismic wave propagation in the case of a slab-bottom source. The pulse-like P wave seismogram envelopes observed from the 2015 Bonin earthquake show that its source was located at the bottom of the Pacific slab at a depth of ~680 km, rather than within its middle or upper regions.

  1. Intraslab Earthquakes: Dehydration of the Cascadia Slab

    USGS Publications Warehouse

    Preston, L.A.; Creager, K.C.; Crosson, R.S.; Brocher, T.M.; Trehu, A.M.

    2003-01-01

    We simultaneously invert travel times of refracted and wide-angle reflected waves for three-dimensional compressional-wave velocity structure, earthquake locations, and reflector geometry in northwest Washington state. The reflector, interpreted to be the crust-mantle boundary (Moho) of the subducting Juan de Fuca plate, separates intraslab earthquakes into two groups, permitting a new understanding of the origins of intraslab earthquakes in Cascadia. Earthquakes up-dip of the Moho's 45-kilometer depth contour occur below the reflector, in the subducted oceanic mantle, consistent with serpentinite dehydration; earthquakes located down-dip occur primarily within the subducted crust, consistent with the basalt-to-eclogite transformation.

  2. Foreshocks and aftershocks of the Great 1857 California earthquake

    USGS Publications Warehouse

    Meltzner, A.J.; Wald, D.J.

    1999-01-01

    The San Andreas fault is the longest fault in California and one of the longest strike-slip faults anywhere in the world, yet we know little about many aspects of its behavior before, during, and after large earthquakes. We conducted a study to locate and to estimate magnitudes for the largest foreshocks and aftershocks of the 1857 M 7.9 Fort Tejon earthquake on the central and southern segments of the fault. We began by searching archived first-hand accounts from 1857 through 1862, by grouping felt reports temporally, and by assigning modified Mercalli intensities to each site. We then used a modified form of the grid-search algorithm of Bakun and Wentworth, derived from empirical analysis of modern earthquakes, to find the location and magnitude most consistent with the assigned intensities for each of the largest events. The result confirms a conclusion of Sieh that at least two foreshocks ('dawn' and 'sunrise') located on or near the Parkfield segment of the San Andreas fault preceded the mainshock. We estimate their magnitudes to be M ~ 6.1 and M ~ 5.6, respectively. The aftershock rate was below average but within one standard deviation of the number of aftershocks expected based on statistics of modern southern California mainshock-aftershock sequences. The aftershocks included two significant events during the first eight days of the sequence, with magnitudes M ~ 6.25 and M ~ 6.7, near the southern half of the rupture; later aftershocks included a M ~ 6 event near San Bernardino in December 1858 and a M ~ 6.3 event near the Parkfield segment in April 1860. From earthquake logs at Fort Tejon, we conclude that the aftershock sequence lasted a minimum of 3.75 years.

  3. Earthquakes in Arkansas and vicinity 1699-2010

    USGS Publications Warehouse

    Dart, Richard L.; Ausbrooks, Scott M.

    2011-01-01

    This map summarizes approximately 300 years of earthquake activity in Arkansas. It is one in a series of similar State earthquake history maps. Work on the Arkansas map was done in collaboration with the Arkansas Geological Survey. The earthquake data plotted on the map are from several sources: the Arkansas Geological Survey, the Center for Earthquake Research and Information, the National Center for Earthquake Engineering Research, and the Mississippi Department of Environmental Quality. In addition to earthquake locations, other materials presented include seismic hazard and isoseismal maps and related text. Earthquakes are a legitimate concern in Arkansas and parts of adjacent states. Arkansas has undergone a number of significant felt earthquakes since 1811. At least two of these events caused property damage: a magnitude 4.7 earthquake in 1931, and a magnitude 4.3 earthquake in 1967. The map shows all historical and instrumentally located earthquakes in Arkansas and vicinity between 1811 and 2010. The largest historic earthquake in the vicinity of the State was an intensity XI event, on December 16, 1811; the first earthquake in the New Madrid sequence. This violent event and the earthquakes that followed caused considerable damage to the then sparsely settled region.

  4. Logistics Distribution Center Location Evaluation Based on Genetic Algorithm and Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Shao, Yuxiang; Chen, Qing; Wei, Zhenhua

    Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear system, which makes it difficult to evaluate the distribution center location with traditional analysis methods. The paper proposes a distribution center location evaluation system which uses a fuzzy neural network combined with a genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. By using the genetic algorithm, the parameters of the neural network are optimized and trained so as to improve the fuzzy system’s self-learning and self-adaptation abilities. Finally, the sampled data are trained and tested with Matlab software. The simulation results indicate that the proposed identification model has very small errors.
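
    As a rough illustration of the idea of tuning a fuzzy system with a genetic algorithm, the toy sketch below evolves the membership centres, widths, and rule weights of a tiny Takagi-Sugeno-style fuzzy system to fit synthetic location-evaluation scores. The architecture and data are hypothetical and much simpler than the model described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 3 location criteria scores -> expert rating
X = rng.random((40, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]

def fuzzy_net(params, x):
    """Tiny fuzzy system: Gaussian memberships for 3 rules, weighted outputs."""
    centers = params[:9].reshape(3, 3)                 # 3 rules x 3 inputs
    widths = np.abs(params[9:18]).reshape(3, 3) + 0.1
    weights = params[18:21]                            # rule output weights
    mu = np.exp(-((x[:, None, :] - centers) ** 2 / widths ** 2).sum(axis=2))
    return (mu * weights).sum(axis=1) / (mu.sum(axis=1) + 1e-9)

def fitness(params):
    return -np.mean((fuzzy_net(params, X) - y) ** 2)   # negative MSE

# Plain generational GA: rank selection, uniform crossover, Gaussian mutation
pop = rng.normal(size=(60, 21))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(21) < 0.5
        children.append(np.where(mask, a, b) + rng.normal(scale=0.05, size=21))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("training MSE:", -fitness(best))
```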

  5. MyShake: A smartphone seismic network for earthquake early warning and beyond.

    PubMed

    Kong, Qingkai; Allen, Richard M; Schreier, Louis; Kwon, Young-Woo

    2016-02-01

    Large magnitude earthquakes in urban environments continue to kill and injure tens to hundreds of thousands of people, inflicting lasting societal and economic disasters. Earthquake early warning (EEW) provides seconds to minutes of warning, allowing people to move to safe zones and automated slowdown and shutdown of transit and other machinery. The handful of EEW systems operating around the world use traditional seismic and geodetic networks that exist only in a few nations. Smartphones are much more prevalent than traditional networks and contain accelerometers that can also be used to detect earthquakes. We report on the development of a new type of seismic system, MyShake, that harnesses personal/private smartphone sensors to collect data and analyze earthquakes. We show that smartphones can record magnitude 5 earthquakes at distances of 10 km or less and develop an on-phone detection capability to separate earthquakes from other everyday shakes. Our proof-of-concept system then collects earthquake data at a central site where a network detection algorithm confirms that an earthquake is under way and estimates the location and magnitude in real time. This information can then be used to issue an alert of forthcoming ground shaking. MyShake could be used to enhance EEW in regions with traditional networks and could provide the only EEW capability in regions without. In addition, the seismic waveforms recorded could be used to deliver rapid microseism maps, study impacts on buildings, and possibly image shallow earth structure and earthquake rupture kinematics.

  6. Locating and Modeling Regional Earthquakes with Broadband Waveform Data

    NASA Astrophysics Data System (ADS)

    Tan, Y.; Zhu, L.; Helmberger, D.

    2003-12-01

    Retrieving source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory level in some places with a dense seismic network, such as TriNet in Southern California, revisiting historical events in these places, or investigating small events effectively and in real time in the many other places where normally only a few local waveforms plus some short-period recordings are available, remains a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 sec for Pnl and 8-100 sec for surface waves, except for a few anomalous paths involving greater structural complexity; meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for the available waveform data, which could come from as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The detailed method requires a Green's function library constructed from a regionalized 1-D model together with necessary calibration information, and adopts a grid search strategy for both hypocenter and

  7. Study of earthquakes using a borehole seismic network at Koyna, India

    NASA Astrophysics Data System (ADS)

    Gupta, Harsh; Satyanarayana, Hari VS; Shashidhar, Dodla; Mallika, Kothamasu; Ranjan Mahato, Chitta; Shankar Maity, Bhavani

    2017-04-01

    Koyna, located near the west coast of India, is a classical site of artificial water reservoir triggered earthquakes. Triggered earthquakes started soon after the impoundment of the Koyna Dam in 1962. The activity has continued to the present, including the largest triggered earthquake of M 6.3 in 1967, 22 earthquakes of M ≥ 5, and several thousand smaller earthquakes. The latest significant earthquake of ML 3.7 occurred on 24th November 2016. In spite of having a network of 23 broadband 3-component seismic stations in the near vicinity of the Koyna earthquake zone, locations of earthquakes had errors of 1 km. The main reason was the presence of a 1 km thick, very heterogeneous Deccan Traps cover that introduced noise, so locations could not be improved. To improve the accuracy of earthquake locations, a unique network of eight borehole seismic stations surrounding the seismicity was designed. Six of these were installed at depths varying from 981 m to 1522 m during 2015 and 2016, well below the Deccan Traps cover. During 2016 a total of 2100 earthquakes were located. There has been a significant improvement in the location of earthquakes, and the absolute errors of location have come down to ± 300 m. All earthquakes of ML ≥ 0.5 are now located, compared to ML ≥ 1.0 earlier. Based on seismicity and logistics, a 2 km x 2 km block has been chosen for the 3 km deep pilot borehole. The installation of the borehole seismic network has further elucidated the correspondence between the rate of water loading/unloading of the reservoir and triggered seismicity.

  8. Earthquakes in Mississippi and vicinity 1811-2010

    USGS Publications Warehouse

    Dart, Richard L.; Bograd, Michael B.E.

    2011-01-01

    This map summarizes two centuries of earthquake activity in Mississippi. Work on the Mississippi map was done in collaboration with the Mississippi Department of Environmental Quality, Office of Geology. The earthquake data plotted on the map are from several sources: the Mississippi Department of Environmental Quality, the Center for Earthquake Research and Information, the National Center for Earthquake Engineering Research, and the Arkansas Geological Survey. In addition to earthquake locations, other materials include seismic hazard and isoseismal maps and related text. Earthquakes are a legitimate concern in Mississippi and parts of adjacent States. Mississippi has undergone a number of felt earthquakes since 1811. At least two of these events caused property damage: a magnitude 4.7 earthquake in 1931, and a magnitude 4.3 earthquake in 1967. The map shows all historical and instrumentally located earthquakes in Mississippi and vicinity between 1811 and 2010. The largest historic earthquake in the vicinity of the State was an intensity XI event, on December 16, 1811; the first earthquake in the New Madrid sequence. This violent event and the earthquakes that followed caused considerable damage to the then sparsely settled region.

  9. The 2011 Eruption of Nabro Volcano (Eritrea): Earthquake Locations from a Temporary Broadband Network

    NASA Astrophysics Data System (ADS)

    Hamlyn, J.; Keir, D.; Hammond, J.; Wright, T.; Neuberg, J.; Kibreab, A.; Ogubazghi, G.; Goitom, B.

    2012-04-01

    Nabro volcano dominates the central part of the Nabro Volcanic Range (NVR), which trends SSW-NNE covering a stretch of 110 km from the SEE margin of the Afar depression to the Red Sea. Regionally, the NVR sits within the Afar triangle, the triple junction of the Somalian, Arabian and African plates. On 12th June 2011 Nabro volcano suddenly erupted after being inactive for 10,000 years. In response, a network of 8 seismometers was deployed around the active vent. The seismic signals detected by this array and those arriving at a regional seismic station (located to the north-west) were processed to provide accurate earthquake locations for the period August-October. Transects of the volcano were used to create cross sections to aid the interpretation. The majority of the seismic events are located at the active vent and on the flanks of Nabro, with fewer events dispersed around the surrounding area. However, there appears to be a smaller hub of events to the south-west of Nabro beneath the neighbouring Mallahle volcanic caldera (located on the Ethiopian side of the international border). This may imply some form of co-dependent relationship within the plumbing of the magma system beneath both calderas.

  10. Structures vibration control via Tuned Mass Dampers using a co-evolution Coral Reefs Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Camacho-Gómez, C.; Magdaleno, A.; Pereira, E.; Lorenzana, A.

    2017-04-01

    In this paper we tackle a problem of optimal design and location of Tuned Mass Dampers (TMDs) for structures subjected to earthquake ground motions, using a novel meta-heuristic algorithm. Specifically, the Coral Reefs Optimization (CRO) with Substrate Layer (CRO-SL) is proposed as a competitive co-evolution algorithm with different exploration procedures within a single population of solutions. The proposed approach is able to solve the TMD design and location problem, by exploiting the combination of different types of searching mechanisms. This promotes a powerful evolutionary-like algorithm for optimization problems, which is shown to be very effective in this particular problem of TMDs tuning. The proposed algorithm's performance has been evaluated and compared with several reference algorithms in two building models with two and four floors, respectively.

  11. Recognition of strong earthquake-prone areas with a single learning class

    NASA Astrophysics Data System (ADS)

    Gvishiani, A. D.; Agayan, S. M.; Dzeboev, B. A.; Belov, I. O.

    2017-05-01

    This article presents a new Barrier recognition algorithm with learning, designed for recognition of earthquake-prone areas. In comparison to the Crust (Kora) algorithm used by the classical EPA approach, the Barrier algorithm learns on just one "pure" high-seismicity class. The new algorithm operates in the space of absolute values of the geological-geophysical parameters of the objects. The algorithm is used for recognition of earthquake-prone areas with M ≥ 6.0 in the Caucasus region. Comparative analysis of the Crust and Barrier algorithms justifies their productive coherence.

  12. Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Schumacher, Thomas; Straub, Daniel; Higgins, Christopher

    2012-09-01

    Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
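
    The Bayesian idea can be sketched compactly: arrival times enter a Gaussian likelihood, and a Markov Chain Monte Carlo sampler returns posterior distributions for the source coordinates rather than a single best-fit point. The sketch below uses a plain Metropolis-Hastings sampler with hypothetical sensor positions, wave speed, and picking noise; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor layout (m) and AE arrival times (s); values are illustrative.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.5], [0.0, 0.5]])
v = 4000.0                                   # assumed wave speed [m/s]
true_src, true_t0 = np.array([0.6, 0.2]), 0.0
arrivals = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / v
arrivals += rng.normal(scale=5e-6, size=4)   # picking noise

sigma = 5e-6                                 # assumed arrival-time uncertainty [s]

def log_post(theta):
    """Log posterior for (x, y, t0) with flat priors inside the specimen."""
    x, y, t0 = theta
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 0.5):
        return -np.inf
    pred = t0 + np.linalg.norm(sensors - np.array([x, y]), axis=1) / v
    return -0.5 * np.sum((arrivals - pred) ** 2) / sigma ** 2

# Metropolis-Hastings sampling of the source-location posterior
theta, samples = np.array([0.5, 0.25, 0.0]), []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.02, 0.02, 2e-6])
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])           # discard burn-in
print("posterior mean x,y:", samples[:, :2].mean(axis=0))
print("posterior std  x,y:", samples[:, :2].std(axis=0))
```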

  13. Earthquake clustering in modern seismicity and its relationship with strong historical earthquakes around Beijing, China

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Main, Ian G.; Musson, Roger M. W.

    2017-11-01

    Beijing, China's capital city, is located in a typical intraplate seismic belt, with relatively high-quality instrumental catalogue data available since 1970. The Chinese historical earthquake catalogue contains six strong historical earthquakes of Ms ≥ 6 around Beijing, the earliest in 294 AD. This poses a significant potential hazard to one of the most densely populated and economically active parts of China. In some intraplate areas, persistent clusters of events associated with historical events can occur over centuries, for example, the ongoing sequence in the New Madrid zone of the eastern US. Here we examine the evidence for such persistent clusters around Beijing. We introduce a metric known as the `seismic density index' that quantifies the degree of clustering of seismic energy release. For a given map location, this multi-dimensional index depends on the number of events, their magnitudes, and the distances to the locations of the surrounding population of earthquakes. We apply the index to modern instrumental catalogue data between 1970 and 2014, and identify six clear candidate zones. We then compare these locations to earthquake epicentre and seismic intensity data for the six largest historical earthquakes. Each candidate zone contains one of the six historical events, and the location of peak intensity is within 5 km or so of the reported epicentre in five of these cases. In one case—the great Ms 8 earthquake of 1679—the peak is closer to the area of strongest shaking (Intensity XI or more) than the reported epicentre. The present-day event rates are similar to those predicted by the modified Omori law but there is no evidence of ongoing decay in event rates. Accordingly, the index is more likely to be picking out the location of persistent weaknesses in the lithosphere. Our results imply zones of high seismic density index could be used in principle to indicate the location of unrecorded historical or palaeoseismic events, in China and
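
    The abstract does not give the exact form of the seismic density index, so the sketch below should be read only as one plausible toy version: each event contributes its radiated energy, damped by its distance to the map location. The catalogue values and the damping constant are hypothetical.

```python
import numpy as np

# Hypothetical catalogue: (x_km, y_km, magnitude)
catalog = np.array([[10.0, 12.0, 3.1],
                    [11.0, 13.5, 4.0],
                    [45.0, 60.0, 2.5],
                    [12.0, 11.0, 3.6]])

def density_index(x, y, events, r0=5.0):
    """Toy 'seismic density index': energy-weighted, distance-damped sum.
    The index of Wang et al. is multi-dimensional; this is only a sketch."""
    d = np.hypot(events[:, 0] - x, events[:, 1] - y)
    energy = 10.0 ** (1.5 * events[:, 2])        # proportional to radiated energy
    return np.log10(np.sum(energy / (d + r0)))

print(density_index(11.0, 12.5, catalog))        # near the cluster: high index
print(density_index(80.0, 80.0, catalog))        # far from any events: low index
```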

  14. A Report Of The December 6, 2016 Mw 6.5 Pidie Jaya, Aceh Earthquake

    NASA Astrophysics Data System (ADS)

    Muzli, M.; Daniarsyad, G.; Nugraha, A. D.; Muksin, U.; Widiyantoro, S.; Bradley, K.; Wang, T.; Jousset, P. G.; Erbas, K.; Nurdin, I.; Wei, S.

    2017-12-01

    The December 6, 2016 Mw 6.5 earthquake in Pidie Jaya, Aceh was one of the most devastating inland earthquakes in Sumatra, claiming more than 100 lives. Here we present our seismological analysis of the earthquake sequence. The earthquake focal mechanism inversions using regional BMKG broadband data and teleseismic waveform data all indicate a strike-slip focal mechanism with a centroid depth of 15 km. Preliminary finite fault inversion using teleseismic body waves prefers the fault plane with a strike of 45 degrees and a dip of 50 degrees, in agreement with the surface geology and USGS aftershock distributions. Nine broadband seismic stations were installed in the source region along the coast one week after the earthquake and collected data for one month. The data have been used to locate aftershocks with grid-search and double-difference algorithms, which reveal an alignment of the seismicity in the NE-SW direction, in agreement with the fault inversion and geology results. Using the M4.0 calibration earthquake that was recorded by the temporary network, we relocated the mainshock epicenter, which is also consistent with the fault geometry defined by the well-located aftershocks. In addition, a portion of the seismicity shows a lineation in the E-W direction, indicating a secondary fault that has not been identified before. Aftershock focal mechanisms determined from first motions reveal solutions similar to that of the mainshock. The observed macroseismic intensity data show that most of the damaged buildings are distributed along the coast, approximately perpendicular to the preferred fault strike rather than parallel to it. It appears that the distribution of damage is strongly related to the site conditions, since the strong shaking/damage regions are mainly located on the coastal sedimentary soils.

  15. Improved phase arrival estimate and location for local earthquakes in South Korea

    NASA Astrophysics Data System (ADS)

    Morton, E. A.; Rowe, C. A.; Begnaud, M. L.

    2012-12-01

    The Korean Institute of Geoscience and Mineral Resources (KIGAM) and the Korean Meteorological Agency (KMA) regularly report local (distance < ~1200 km) seismicity recorded with their networks; we obtain preliminary event location estimates as well as waveform data, but no phase arrivals are reported, so the data are not immediately useful for earthquake location. Our goal is to identify seismic events that are sufficiently well located to provide accurate seismic travel-time information for events within the KIGAM and KMA networks that are also recorded by some regional stations. Toward that end, we use a combination of manual phase identification and arrival-time picking with waveform cross-correlation to cluster events that have occurred in close proximity to one another, which allows for improved phase identification by comparing the highly correlating waveforms. We cross-correlate the known events with one another at 5 seismic stations and cluster events that correlate above a correlation coefficient threshold of 0.7, which reveals only a few clusters, each containing a small number of events. The small number of repeating events suggests that the online catalogs have had mining and quarry blasts removed before publication, as these can contribute significantly to repeating seismic sources in relatively aseismic regions such as South Korea. The dispersed source locations in our catalog, however, are ideal for seismic velocity modeling because they provide superior sampling through the dense seismic station arrangement, which produces favorable event-to-station ray path coverage. Following careful manual phase picking on 104 events chosen to provide adequate ray coverage, we re-locate the events to obtain improved source coordinates. The re-located events are used with Thurber's Simul2000 pseudo-bending local tomography code to estimate the crustal structure on the Korean Peninsula, which is an important contribution to ongoing calibration for events of interest in the region.
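
    The clustering step can be sketched as follows: compute the maximum normalized cross-correlation between every pair of event waveforms, link pairs that exceed the 0.7 threshold, and take connected components of the links as event families. The waveforms below are synthetic single-station traces, not the Korean data.

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(2)

# Hypothetical single-station waveforms: events 0 and 1 are near-repeats.
base = rng.normal(size=500)
waveforms = np.vstack([base + 0.1 * rng.normal(size=500),
                       np.roll(base, 3) + 0.1 * rng.normal(size=500),
                       rng.normal(size=500)])

def max_norm_cc(a, b):
    """Maximum normalized cross-correlation over all lags."""
    cc = correlate(a - a.mean(), b - b.mean(), mode="full")
    return cc.max() / (np.linalg.norm(a - a.mean()) * np.linalg.norm(b - b.mean()))

n = len(waveforms)
linked = np.eye(n, dtype=bool)
for i in range(n):
    for j in range(i + 1, n):
        linked[i, j] = linked[j, i] = max_norm_cc(waveforms[i], waveforms[j]) >= 0.7

# Group events into families via connected components of the >=0.7 links
clusters, unassigned = [], set(range(n))
while unassigned:
    stack, group = [unassigned.pop()], set()
    while stack:
        k = stack.pop()
        group.add(k)
        new = {m for m in unassigned if linked[k, m]}
        unassigned -= new
        stack.extend(new)
    clusters.append(sorted(group))

print(clusters)   # expected: [[0, 1], [2]]
```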

  16. Contribution of the Surface and Down-Hole Seismic Networks to the Location of Earthquakes at the Soultz-sous-Forêts Geothermal Site (France)

    NASA Astrophysics Data System (ADS)

    Kinnaert, X.; Gaucher, E.; Kohl, T.; Achauer, U.

    2018-03-01

    Seismicity induced in geo-reservoirs can provide valuable observations for imaging fractured reservoirs, characterizing hydrological properties, or mitigating seismic hazard. However, this requires accurate location of the seismicity, which is nowadays an important seismological task in reservoir engineering. The earthquake location (determination of the hypocentres) depends on the model used to represent the medium in which the seismic waves propagate and on the seismic monitoring network. In this work, location uncertainties and location inaccuracies are modeled to investigate the impact of several parameters on the determination of the hypocentres: the picking uncertainty, the numerical precision of picked arrival times, a velocity perturbation and the seismic network configuration. The method is applied to the geothermal site of Soultz-sous-Forêts, which is located in the Upper Rhine Graben (France) and which was subject to detailed scientific investigations. We focus on a massive water injection performed in the year 2000 to enhance the productivity of the well GPK2 in the granitic basement, at approximately 5 km depth, which induced more than 7000 earthquakes recorded by down-hole and surface seismic networks. We compare the location errors obtained from the joint or the separate use of the down-hole and surface networks. Besides the quantification of location uncertainties caused by picking uncertainties, the impact of the numerical precision of the picked arrival times as provided in a reference catalogue is investigated. The velocity model is also modified to mimic possible effects of a massive water injection and to evaluate its impact on earthquake hypocentres. It is shown that the use of the down-hole network in addition to the surface network provides smaller location uncertainties but can also lead to larger inaccuracies. Hence, location uncertainties would not be well representative of the location errors and interpretation of the seismicity

  17. Joint inversion of teleseismic body-waves and geodetic data for the Mw6.8 aftershock of the Balochistan earthquake with refined epicenter location

    NASA Astrophysics Data System (ADS)

    Wei, S.; Wang, T.; Jonsson, S.; Avouac, J. P.; Helmberger, D. V.

    2014-12-01

    Aftershocks of the 2013 Balochistan earthquake are mainly concentrated along the northeastern end of the mainshock rupture, despite much larger coseismic slip to the southwest. The largest event among them is an Mw6.8 earthquake which occurred three days after the mainshock. A kinematic slip model of the mainshock was obtained by joint inversion of the teleseismic body-waves and the horizontal static deformation field derived from remote sensing optical and SAR data, and is composed of seven fault segments with gradually changing strikes and dips [Avouac et al., 2014]. The remote sensing data provide good constraints on the fault geometry and spatial distribution of slip but no timing information. Meanwhile, the initiation of the teleseismic waveform is very sensitive to the fault geometry of the epicentral segment (strike and dip) and the spatial slip distribution, but much less sensitive to the absolute location of the epicenter. The combination of the two data sets allows a much better determination of the absolute epicenter location, which is about 25 km to the southwest of the NEIC epicenter location. The well-located mainshock epicenter is used to establish path calibrations for teleseismic P-waves, which are essential for relocating the Mw6.8 aftershock. Our grid search shows that the refined epicenter is located right at the northeastern end of the mainshock rupture. This is confirmed by the SAR offsets calculated from images acquired after the mainshock. The azimuth and range offsets display a discontinuity across the rupture trace of the mainshock. Teleseismic-only and static-only, as well as joint, inversions all indicate that the aftershock ruptured an asperity extending 25 km along strike and from 8 km to 20 km in depth. The earthquake originated in a region of positive Coulomb stress change due to the mainshock and has a slip distribution complementary to the mainshock rupture at the northeastern end, suggesting that the entire seismogenic zone in the crust was

  18. The 7.9 Denali Fault, Alaska Earthquake of November 3, 2002: Aftershock Locations, Moment Tensors and Focal Mechanisms from the Regional Seismic Network Data

    NASA Astrophysics Data System (ADS)

    Ratchkovski, N. A.; Hansen, R. A.; Kore, K. R.

    2003-04-01

    The largest earthquake ever recorded on the Denali fault system (magnitude 7.9) struck central Alaska on November 3, 2002. It was preceded by a magnitude 6.7 earthquake on October 23. This earlier earthquake and its zone of aftershocks were located ~20 km to the west of the 7.9 quake. Aftershock locations and surface slip observations from the 7.9 quake indicate that the rupture was predominantly unilateral in the eastward direction. Geologists mapped a ~300-km-long rupture and measured maximum offsets of 8.8 meters. The 7.9 event ruptured three different faults. The rupture began on the northeast-trending Susitna Glacier Thrust fault, a splay fault south of the Denali fault. Then the rupture transferred to the Denali fault and propagated eastward for 220 km. At about 143°W the rupture moved onto the adjacent southeast-trending Totschunda fault and propagated for another 55 km. The cumulative length of the 6.7 and 7.9 aftershock zones along the Denali and Totschunda faults is about 380 km. The earthquakes were recorded and processed by the Alaska Earthquake Information Center (AEIC). The AEIC acquires and processes data from the Alaska Seismic Network, consisting of over 350 seismograph stations. Nearly 40 of these sites are equipped with broadband sensors, some of which also have strong-motion sensors. The rest of the stations are either 1- or 3-component short-period instruments. The data from these stations are collected, processed and archived at the AEIC. The AEIC staff installed a temporary seismic network of 6 instruments following the 6.7 earthquake and an additional 20 stations following the 7.9 earthquake. Prior to the 7.9 Denali Fault event, the AEIC was locating 35 to 50 events per day. After the event, the processing load increased to over 300 events per day during the first week following the event. In this presentation, we will present and interpret the aftershock location patterns, first-motion focal mechanism solutions, and regional seismic

  19. Earthquake effects at nuclear reactor facilities: San Fernando earthquake of February 9th, 1971

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, G.; Ibanez, P.; Matthiesen, F.

    1972-02-01

    The effects of the San Fernando earthquake of February 9, 1971 on 26 reactor facilities located in California, Arizona, and Nevada are reported. The safety performance of the facilities during the earthquake is discussed. (JWR)

  20. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    NASA Astrophysics Data System (ADS)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots and other noise, and the gray-level transitions in the images are weak, so general iris localization methods fail on these data. We therefore propose an iris localization method based on the Canny operator and a gradient Hough transform. First, the images are pre-processed; then, using the gradient information of the images, the inner and outer edges of the iris are coarsely located with the Canny operator; finally, a gradient Hough transform is applied to precisely localize the inner and outer edges of the iris. The experimental results show that our algorithm localizes the inner and outer edges of the iris well, has strong anti-interference ability, greatly reduces the localization time, and has higher accuracy and stability.
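
    A minimal OpenCV sketch of this two-stage scheme is shown below: a Canny edge map gives a coarse view of the boundaries, and a gradient-based circular Hough transform then fits the pupil and limbus circles. The input file name, thresholds, and radius ranges are placeholders rather than the paper's tuned values.

```python
import cv2
import numpy as np

# Hypothetical input image of an eye in grayscale.
img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("provide an eye image as eye.png")

img = cv2.medianBlur(img, 5)                 # simple pre-processing

# Coarse edge map; note that HOUGH_GRADIENT below runs its own Canny
# internally, using param1 as the upper hysteresis threshold.
edges = cv2.Canny(img, 50, 150)

# Pupil (inner) boundary: search small radii
pupil = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                         param1=150, param2=30, minRadius=20, maxRadius=60)
# Limbus (outer) boundary: search larger radii
iris = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                        param1=150, param2=30, minRadius=60, maxRadius=140)

if pupil is not None and iris is not None:
    px, py, pr = np.round(pupil[0, 0]).astype(int)
    ix, iy, ir = np.round(iris[0, 0]).astype(int)
    print("pupil  centre/radius:", (px, py), pr)
    print("limbus centre/radius:", (ix, iy), ir)
```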

  1. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information

    PubMed Central

    2010-01-01

    Background: Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used not only for pathway drawing but also for circuit placement and www visualization and so on because of the harmonized appearance of its results. For pathway drawing, location information is essential for its comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. Results: We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, the mapping of nodes to grid points that minimizes a cost function is searched. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. Conclusions: The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways; endothelial cell model, Fas-induced apoptosis model, and C. elegans cell fate simulation model. From the positional constraints, all the results of our algorithm satisfy location information, and hence, more comprehensible layouts are obtained as compared to the spring embedder algorithm. From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the

  2. Improvement of Earthquake Epicentral Locations Using T-Phases: Testing by Comparison With Surface Wave Relative Event Locations

    DTIC Science & Technology

    2001-10-01

    deployment of 51 ocean-bottom seismometers (OBS) on the seafloor spanning 800 km across the East Pacific Rise provides a unique opportunity to test the ... aftershock sequence of earthquakes at the northern end of the Easter microplate. In addition, for the larger earthquakes, we can compare relative ... ocean-bottom seismometers. OBJECTIVES: The objectives of this research are to explore the synergy between hydroacoustic and seismic techniques

  3. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    PubMed

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport planning. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation across scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The coding scheme and the steps of the algorithm are described. Results of an example demonstrate that the model and algorithm are effective. Compared with the expected-value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.
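
    The robust criterion can be sketched as an objective that adds a deviation penalty across demand scenarios to the expected cost. The toy example below simply enumerates two-centre choices (the paper instead searches with the C-ACSA meta-heuristic); the candidate costs, scenario probabilities, and penalty weight are hypothetical.

```python
import itertools
import numpy as np

# Hypothetical scenario data: cost of each candidate freight centre under each
# demand scenario, with scenario probabilities.
scenarios_p = np.array([0.5, 0.3, 0.2])
candidates = ["A", "B", "C", "D"]
cost = np.array([[10.0, 14.0, 30.0],     # cost[i, s]: candidate i, scenario s
                 [12.0, 12.0, 20.0],
                 [15.0, 13.0, 16.0],
                 [11.0, 18.0, 25.0]])

def robust_objective(selection, lam=1.0):
    """Expected cost plus a deviation penalty over scenarios (robust criterion)."""
    sel_cost = cost[list(selection)].sum(axis=0)          # total cost per scenario
    expected = np.dot(scenarios_p, sel_cost)
    deviation = np.sqrt(np.dot(scenarios_p, (sel_cost - expected) ** 2))
    return expected + lam * deviation

# Enumerate all 2-centre choices and pick the most robust one.
best = min(itertools.combinations(range(len(candidates)), 2), key=robust_objective)
print("robust choice:", [candidates[i] for i in best], robust_objective(best))
```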

  4. Increasing critical sensitivity of the Load/Unload Response Ratio before large earthquakes with identified stress accumulation pattern

    NASA Astrophysics Data System (ADS)

    Yu, Huai-zhong; Shen, Zheng-kang; Wan, Yong-ge; Zhu, Qing-yong; Yin, Xiang-chu

    2006-12-01

    The Load/Unload Response Ratio (LURR) method is proposed for short-to-intermediate-term earthquake prediction [Yin, X.C., Chen, X.Z., Song, Z.P., Yin, C., 1995. A New Approach to Earthquake Prediction — The Load/Unload Response Ratio (LURR) Theory, Pure Appl. Geophys., 145, 701-715]. This method is based on measuring the ratio between Benioff strains released during the time periods of loading and unloading, corresponding to the Coulomb Failure Stress change induced by Earth tides on optimally oriented faults. According to the method, the LURR time series usually climbs to an anomalously high peak prior to the occurrence of a large earthquake. Previous studies have indicated that the size of the critical seismogenic region selected for LURR measurements has a great influence on the evaluation of LURR. In this study, we replace the circular region usually adopted in LURR practice with an area within which the tectonic stress change would mostly affect the Coulomb stress on a potential seismogenic fault of a future event. The Coulomb stress change before a hypothetical earthquake is calculated based on a simple back-slip dislocation model of the event. This new algorithm, which combines the LURR method with our choice of an identified area of increased Coulomb stress, is devised to improve the sensitivity of LURR in measuring the criticality of stress accumulation before a large earthquake. Retrospective tests of this algorithm on four large earthquakes that occurred in California over the last two decades show remarkable enhancement of the LURR precursory anomalies. For some strong events of lesser magnitude that occurred in the same neighborhoods and during the same time periods, significant anomalies are found if circular areas are used, but not if increased Coulomb stress areas are used for LURR data selection. The unique feature of this algorithm may provide stronger constraints on forecasts of the size and location of future large events.
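
    The core LURR quantity is the ratio of Benioff strain (the square root of seismic energy) released during tidal loading periods to that released during unloading periods. The sketch below assumes the loading/unloading phase of each event has already been determined from the tidal Coulomb stress calculation, which is outside the sketch; the magnitudes and phase flags are hypothetical.

```python
import numpy as np

# Hypothetical event list: magnitudes and tidal loading flags
# (+1 = event occurred during tidal loading, -1 = during unloading).
mags = np.array([2.1, 3.0, 2.5, 2.2, 3.4, 2.0, 2.8, 2.3])
phase = np.array([+1, +1, -1, +1, +1, -1, +1, -1])

def benioff_strain(m):
    """Benioff strain: square root of seismic energy, with log10 E ~ 1.5*M + 4.8."""
    return np.sqrt(10.0 ** (1.5 * m + 4.8))

def lurr(mags, phase):
    """Load/Unload Response Ratio: loading-phase strain over unloading-phase strain."""
    load = benioff_strain(mags[phase > 0]).sum()
    unload = benioff_strain(mags[phase < 0]).sum()
    return load / unload

print("LURR =", round(lurr(mags, phase), 2))   # values well above 1 are anomalous
```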

  5. Temporal variation of tectonic tremor activity in southern Taiwan around the 2010 ML6.4 Jiashian earthquake

    NASA Astrophysics Data System (ADS)

    Chao, Kevin; Peng, Zhigang; Hsu, Ya-Ju; Obara, Kazushige; Wu, Chunquan; Ching, Kuo-En; van der Lee, Suzan; Pu, Hsin-Chieh; Leu, Peih-Lin; Wech, Aaron

    2017-07-01

    Deep tectonic tremor, which is extremely sensitive to small stress variations, could be used to monitor fault zone processes during large earthquake cycles and aseismic processes before large earthquakes. In this study, we develop an algorithm for the automatic detection and location of tectonic tremor beneath the southern Central Range of Taiwan and examine the spatiotemporal relationship between tremor and the 4 March 2010 ML6.4 Jiashian earthquake, located about 20 km from active tremor sources. We find that tremor in this region has a relatively short duration, short recurrence time, and no consistent correlation with surface GPS data. We find a short-term increase in the tremor rate 19 days before the Jiashian main shock, and around the time the tremor rate began to rise, one GPS station recorded a flip in its direction of motion. We hypothesize that tremor is driven by a slow-slip event that preceded the occurrence of the shallower Jiashian main shock, even though the inferred slip is too small to be observed by all GPS stations. Our study shows that tectonic tremor may reflect stress variation during the prenucleation process of a nearby earthquake.

  6. Structural features of the Pernicana Fault (M. Etna, Sicily, Italy) inferred by high precise location of the microseismicity

    NASA Astrophysics Data System (ADS)

    Alparone, S.; Gambino, S.; Mostaccio, A.; Spampinato, S.; Tuvè, T.; Ursino, A.

    2009-04-01

    The north-eastern flank of Mt. Etna is crossed by an important and active tectonic structure, the Pernicana Fault, which has a mean WNW-ESE strike. It links westward to the active NE Rift and seems to have an important role in controlling instability processes affecting the eastern flank of the volcano. Recent studies suggest that the Pernicana Fault is very active, with sinistral, oblique-slip movements, and is also characterised by frequent shallow seismicity (depth < 2 km bsl) on the uphill western segment and by remarkable creep on the downhill eastern one. The Pernicana Fault earthquakes, which can reach magnitudes up to 4.2, sometimes with coseismic surface faulting, have caused severe damage to tourist resorts and villages along or close to this structure. In recent years, a strong increase in seismicity, also characterized by swarms, was recorded by the INGV-CT permanent local seismic network close to the Pernicana Fault. A three-step procedure was applied to calculate precise hypocentre locations. In the first step, we apply cross-correlation analysis in order to evaluate the similarity of waveforms and identify earthquake families. In the second step, we calculate probabilistic earthquake locations using the software package NONLINLOC, which includes systematic, complete grid search and global, non-linear search methods. Subsequently, we perform relative relocation of correlated event pairs using the double-difference earthquake location algorithm and the program HypoDD. The double-difference algorithm minimizes the residuals between observed and calculated travel-time differences for pairs of earthquakes at common stations by iteratively adjusting the vector difference between the hypocenters. We show the recognized spatial seismic clusters, identifying the most active and hazardous sectors of the structure, their geometry and depth. Finally, in order to clarify the geodynamic framework of the area, we associate these results with calculated focal mechanisms

  7. Twitter Seismology: Earthquake Monitoring and Response in a Social World

    NASA Astrophysics Data System (ADS)

    Bowden, D. C.; Earle, P. S.; Guy, M.; Smoczyk, G.

    2011-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment USGS earthquake response products and the delivery of hazard information. The potential uses of Twitter for earthquake response include broadcasting earthquake alerts, rapidly detecting widely felt events, qualitatively assessing earthquake damage effects, communicating with the public, and participating in post-event collaboration. Several seismic networks and agencies are currently distributing Twitter earthquake alerts including the European-Mediterranean Seismological Centre (@LastQuake), Natural Resources Canada (@CANADAquakes), and the Indonesian meteorological agency (@infogempabmg); the USGS will soon distribute alerts via the @USGSted and @USGSbigquakes Twitter accounts. Beyond broadcasting alerts, the USGS is investigating how to use tweets that originate near the epicenter to detect and characterize shaking events. This is possible because people begin tweeting immediately after feeling an earthquake, and their short narratives and exclamations are available for analysis within tens of seconds of the origin time. Using five months of tweets that contain the word "earthquake" and its equivalent in other languages, we generate a tweet-frequency time series. The time series clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a simple Short-Term-Average / Long-Term-Average algorithm similar to that commonly used to detect seismic phases. As with most auto-detection algorithms, the parameters can be tuned to catch more or fewer events at the cost of more or fewer false triggers. When tuned to a moderate sensitivity, the detector found 48 globally-distributed, confirmed seismic events with only 2 false triggers. A space-shuttle landing and "The Great California ShakeOut" caused the false triggers. This number of
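
    The detection step can be sketched directly on a tweet-count time series: a short-term average is divided by a long-term average, and a detection is declared when the ratio exceeds a trigger threshold. The counts, window lengths, and threshold below are hypothetical, not the values tuned by the USGS.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-second counts of tweets containing the word "earthquake":
# background chatter plus a burst starting at t = 600 s.
counts = rng.poisson(2.0, size=1200).astype(float)
counts[600:660] += np.linspace(40, 5, 60)

def sta_lta(x, n_sta=10, n_lta=300):
    """Classic short-term-average / long-term-average ratio on a count series."""
    sta = np.convolve(x, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(x, np.ones(n_lta) / n_lta, mode="same")
    return sta / np.maximum(lta, 1e-3)

ratio = sta_lta(counts)
trigger_on = 8.0           # raising/lowering this trades missed events for false triggers
detections = np.flatnonzero(ratio > trigger_on)
print("first detection at t = %d s" % detections[0] if detections.size else "no detection")
```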

  8. MyShake: A smartphone seismic network for earthquake early warning and beyond

    PubMed Central

    Kong, Qingkai; Allen, Richard M.; Schreier, Louis; Kwon, Young-Woo

    2016-01-01

    Large magnitude earthquakes in urban environments continue to kill and injure tens to hundreds of thousands of people, inflicting lasting societal and economic disasters. Earthquake early warning (EEW) provides seconds to minutes of warning, allowing people to move to safe zones and automated slowdown and shutdown of transit and other machinery. The handful of EEW systems operating around the world use traditional seismic and geodetic networks that exist only in a few nations. Smartphones are much more prevalent than traditional networks and contain accelerometers that can also be used to detect earthquakes. We report on the development of a new type of seismic system, MyShake, that harnesses personal/private smartphone sensors to collect data and analyze earthquakes. We show that smartphones can record magnitude 5 earthquakes at distances of 10 km or less and develop an on-phone detection capability to separate earthquakes from other everyday shakes. Our proof-of-concept system then collects earthquake data at a central site where a network detection algorithm confirms that an earthquake is under way and estimates the location and magnitude in real time. This information can then be used to issue an alert of forthcoming ground shaking. MyShake could be used to enhance EEW in regions with traditional networks and could provide the only EEW capability in regions without. In addition, the seismic waveforms recorded could be used to deliver rapid microseism maps, study impacts on buildings, and possibly image shallow earth structure and earthquake rupture kinematics. PMID:26933682

  9. The 2007 Boso Slow Slip Event and the associated earthquake swarm

    NASA Astrophysics Data System (ADS)

    Sekine, S.; Hirose, H.; Kimura, H.; Obara, K.

    2007-12-01

    In the Boso Peninsula, located in the southeastern part of the Japanese mainland, slow slip events (SSEs) have been observed every 6-7 years by the GEONET GPS array operated by the Geographical Survey Institute of Japan and by the NIED tiltmeter network (Ozawa et al., 2003; NIED 2003). A unique characteristic of the Boso SSEs is that earthquake swarm activity has also occurred in association with them. The latest SSE and earthquake swarm took place in August 2007. On 13th August, an earthquake swarm began east off the Boso Peninsula and slow tilt deformation also started. The earthquake sources migrated in the NNE direction, the same direction as the relative plate motion of the subducting Philippine Sea Plate with respect to the overriding plate. The largest earthquake in this episode (Mw 5.3) occurred on the 16th and the second largest (Mw 5.2) on the 18th. Most of the larger earthquakes show low-angle thrust-type focal mechanisms that are consistent with the plate motion and the geometry of the subduction plate interface. The tilt changes appear to stop on the 17th and the swarm activity rapidly decreases after the 19th. The maximum tilt change of 0.8 microradian, with northwest-down tilting, was observed at KT2H, the station nearest the source region. Based on the tilt records around the Boso Peninsula, we estimate a fault model for the SSE using a genetic algorithm inversion for the non-linear parameters and the weighted least squares method for the linear parameters. As a result, the estimated moment magnitude and amount of slip are 6.4 and 10 cm, respectively. The size and location of the SSE are similar to those of previous episodes. The estimated fault plane is very consistent with the configuration of the plate interface (Kimura et al., 2006). Most of the earthquakes are located at the deeper edge of the estimated SSE fault area. The coincidence of the swarm and the SSE suggests a causal relation between them and may help us to

  10. A 3-D velocity model for earthquake location from combined geological and geophysical data: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Latorre, Diana; Lupattelli, Andrea; Mirabella, Francesco; Trippetta, Fabio; Valoroso, Luisa; Lomax, Anthony; Di Stefano, Raffaele; Collettini, Cristiano; Chiaraluce, Lauro

    2014-05-01

    Accurate hypocenter location at the crustal scale strongly depends on our knowledge of the 3D velocity structure. The integration of geological and geophysical data, when available, should contribute to a reliable seismic velocity model in order to guarantee high quality earthquake locations as well as their consistency with the geological structure. Here we present a 3D, P- and S-wave velocity model of the Upper Tiber valley region (Northern Apennines) retrieved by combining an extremely robust dataset of surface and sub-surface geological data (seismic reflection profiles and boreholes), in situ and laboratory velocity measurements, and earthquake data. The study area is a portion of the Apennine belt undergoing active extension where a set of high-angle normal faults is detached on the Altotiberina low-angle normal fault (ATF). Since 2010, this area has hosted a scientific infrastructure (the Alto Tiberina Near Fault Observatory, TABOO; http://taboo.rm.ingv.it/), consisting of a dense array of multi-sensor stations, devoted to studying the earthquake preparatory phase and the deformation processes along the ATF fault system. The proposed 3D velocity model is a layered model in which irregularly shaped surfaces mark the boundaries between the main lithological units. The model has been constructed by interpolating depth-converted seismic horizons interpreted along 40 seismic reflection profiles (down to 4 s two-way travel time) that have been calibrated with 6 deep boreholes (down to 5 km depth) and constrained by detailed geological maps and structural survey data. The layers of the model are characterized by similar rock types and seismic velocity properties. The P- and S-wave velocities for each layer have been derived from velocity measurements from both boreholes (sonic logs) and the laboratory, where measurements were performed on analogue natural samples at increasing confining pressure in order to simulate crustal conditions. In order to test the 3D velocity

  11. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

    We present a two-stage nonlinear technique to invert strong motions records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time and risetime. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on the heat-bath simulated annealing generates an ensemble of models that efficiently sample the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present some synthetic tests to show the effectiveness of the method and its robustness to uncertainty of the adopted crustal model. Finally, we apply this inverse technique to the well recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, differently from previous studies, we imaged a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.

  12. Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2016-12-01

    The statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data, and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been recently developed, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes and California's faults with slip rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.
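
    A miniature version of the formulation can be written down explicitly: binary variables assign each synthetic earthquake to a fault, auxiliary continuous variables bound the absolute slip-rate misfit, and a mixed-integer solver minimizes the total deviation. The sketch below uses scipy.optimize.milp (SciPy 1.9 or later) on a hypothetical six-event, two-fault toy problem; it is only an illustration of the L1 matching idea, not the statewide UCERF3 setup.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical toy data: 6 synthetic earthquakes to be placed on 2 faults so
# that long-term slip rates are matched in an L1 sense.
slip_per_event = np.array([0.8, 0.5, 1.2, 0.6, 0.9, 0.4])   # slip per event [m]
T = 1000.0                                                    # catalogue span [yr]
target = np.array([2.5e-3, 1.9e-3])                           # target slip rates [m/yr]
n_eq, n_f = len(slip_per_event), len(target)

n_x = n_eq * n_f            # binary assignment variables x[e, f]
n_var = n_x + n_f           # plus one continuous deviation variable per fault
c = np.concatenate([np.zeros(n_x), np.ones(n_f)])   # minimise total deviation

# Each earthquake is assigned to exactly one fault.
A_assign = np.zeros((n_eq, n_var))
for e in range(n_eq):
    A_assign[e, e * n_f:(e + 1) * n_f] = 1.0
assign = LinearConstraint(A_assign, lb=1.0, ub=1.0)

# |modelled rate - target| <= d_f, written as two one-sided constraints.
A_up = np.zeros((n_f, n_var))
A_dn = np.zeros((n_f, n_var))
for f in range(n_f):
    for e in range(n_eq):
        A_up[f, e * n_f + f] = slip_per_event[e] / T
        A_dn[f, e * n_f + f] = slip_per_event[e] / T
    A_up[f, n_x + f] = -1.0
    A_dn[f, n_x + f] = +1.0
upper = LinearConstraint(A_up, lb=-np.inf, ub=target)
lower = LinearConstraint(A_dn, lb=target, ub=np.inf)

integrality = np.concatenate([np.ones(n_x), np.zeros(n_f)])   # binaries, then reals
bounds = Bounds(np.zeros(n_var),
                np.concatenate([np.ones(n_x), np.full(n_f, np.inf)]))

res = milp(c, constraints=[assign, upper, lower],
           integrality=integrality, bounds=bounds)
x = res.x[:n_x].reshape(n_eq, n_f).round().astype(int)
print("assignment (rows = earthquakes, cols = faults):\n", x)
print("modelled slip rates [m/yr]:", x.T @ slip_per_event / T)
```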

  13. Injection-induced earthquakes

    USGS Publications Warehouse

    Ellsworth, William L.

    2013-01-01

    Earthquakes in unusual locations have become an important topic of discussion in both North America and Europe, owing to the concern that industrial activity could cause damaging earthquakes. It has long been understood that earthquakes can be induced by impoundment of reservoirs, surface and underground mining, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground formations. Injection-induced earthquakes have, in particular, become a focus of discussion as the application of hydraulic fracturing to tight shale formations is enabling the production of oil and gas from previously unproductive formations. Earthquakes can be induced as part of the process to stimulate the production from tight shale formations, or by disposal of wastewater associated with stimulation and production. Here, I review recent seismic activity that may be associated with industrial activity, with a focus on the disposal of wastewater by injection in deep wells; assess the scientific understanding of induced earthquakes; and discuss the key scientific challenges to be met for assessing this hazard.

  14. Feasibility study of earthquake early warning (EEW) in Hawaii

    USGS Publications Warehouse

    Thelen, Weston A.; Hotovec-Ellis, Alicia J.; Bodin, Paul

    2016-09-30

    when using a regional network of seismometers. Given the current network, a single-station approach provides more warning for damaging earthquakes that occur close to the station, but it would have limited benefit compared to a fully implemented ShakeAlert system. For Honolulu, for example, the single-station approach provides an advantage over ShakeAlert only for earthquakes that occur in a narrow zone extending northeast and southwest of O‘ahu. Instrumentation and alarms associated with the single-station approach are typically maintained and assessed within the target facility, and thus no outside connectivity is required. A single-station approach, then, is unlikely to help broader populations beyond the individuals at the target facility, but it has the benefit of being commercially available for relatively little cost. The USGS Hawaiian Volcano Observatory (HVO) is the Advanced National Seismic System (ANSS) regional seismic network responsible for locating and characterizing earthquakes across the State of Hawaii. During 2014 and 2015, HVO tested a network-based EEW algorithm within the current seismic network in order to assess the suitability for building a full EEW system. Using the current seismic instrumentation and processing setup at HVO, it is possible for a network approach to release an alarm a little more than 3 seconds after the earthquake is recorded on the fourth seismometer. Presently, earthquakes having M≥3 detected with the ElarmS algorithm have an average location error of approximately 4.5 km and an average magnitude error of -0.3 compared to the reviewed catalog locations from the HVO. Additional stations and upgrades to existing seismic stations would serve to improve solution precision and warning times, and additional staffing would be required to provide support for a robust, network-based EEW system. For a critical facility on the Island of Hawaiʻi, such as the telescopes atop Mauna Kea, one phased approach to mitigate losses

  15. 3-D P- and S-wave velocity structure and low-frequency earthquake locations in the Parkfield, California region

    USGS Publications Warehouse

    Zeng, Xiangfang; Thurber, Clifford H.; Shelly, David R.; Harrington, Rebecca M.; Cochran, Elizabeth S.; Bennington, Ninfa L.; Peterson, Dana; Guo, Bin; McClement, Kara

    2016-01-01

    To refine the 3-D seismic velocity model in the greater Parkfield, California region, a new data set including regular earthquakes, shots, quarry blasts and low-frequency earthquakes (LFEs) was assembled. Hundreds of traces of each LFE family at two temporary arrays were stacked with a time-frequency domain phase-weighted stacking method to improve the signal-to-noise ratio. We extend our model resolution to lower crustal depths with the LFE data. Our results image not only previously identified features but also low velocity zones (LVZs) in the area around the LFEs and the lower crust beneath the southern Rinconada Fault. The former LVZ is consistent with high fluid pressure that can account for several aspects of LFE behaviour. The latter LVZ is consistent with a high conductivity zone in magnetotelluric studies. A new Vs model was developed with S picks that were obtained with a new autopicker. At shallow depth, the low Vs areas underlie the strongest shaking areas in the 2004 Parkfield earthquake. We relocate LFE families and analyse the location uncertainties with the NonLinLoc and tomoDD codes. The two methods yield similar results.
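
    Phase-weighted stacking damps the linear stack wherever the instantaneous phases of the individual traces are incoherent. The sketch below implements the simpler time-domain variant (Schimmel and Paulssen, 1997) on synthetic LFE-like traces; the study used a time-frequency domain version, and all data values here are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(4)

# Hypothetical LFE-family traces: a weak repeating pulse buried in noise.
n_tr, n_samp = 200, 400
t = np.arange(n_samp)
pulse = np.exp(-0.5 * ((t - 200) / 5.0) ** 2) * np.cos(2 * np.pi * 0.1 * t)
traces = 0.3 * pulse + rng.normal(size=(n_tr, n_samp))

def phase_weighted_stack(data, nu=2.0):
    """Time-domain phase-weighted stack: the linear stack is multiplied by the
    coherence of instantaneous phases across traces, raised to the power nu."""
    analytic = hilbert(data, axis=1)
    phase_coherence = np.abs(np.mean(np.exp(1j * np.angle(analytic)), axis=0))
    return data.mean(axis=0) * phase_coherence ** nu

linear = traces.mean(axis=0)
pws = phase_weighted_stack(traces)
print("SNR linear stack  :", np.abs(linear[200]) / linear[:100].std())
print("SNR phase-weighted:", np.abs(pws[200]) / pws[:100].std())
```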

  16. CISN ShakeAlert: Improving the Virtual Seismologist (VS) earthquake early warning framework to provide faster, more robust warning information

    NASA Astrophysics Data System (ADS)

    Meier, M.; Cua, G. B.; Wiemer, S.; Fischer, M.

    2011-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) that uses observed phase arrivals, ground motion amplitudes and selected prior information to estimate earthquake magnitude, location and origin time, and to predict the distribution of peak ground motion throughout a region using envelope attenuation relationships. Implementation of the VS algorithm in California is an on-going effort of the Swiss Seismological Service (SED) at ETH Zürich. VS is one of three EEW algorithms - the other two being ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) - that form the basis of the California Integrated Seismic Network ShakeAlert system, a prototype end-to-end EEW system that could potentially be implemented in California. The current prototype version of VS in California requires picks at 4 stations to initiate an event declaration. On average, taking into account data latency, variable station distribution, and processing time, this initial estimate is available about 20 seconds after the earthquake origin time, corresponding to a blind zone of about 70 km around the epicenter which would receive no warning but where warning would be most useful. To increase the available warning time, we want to produce EEW estimates faster (with fewer than 4 stations). However, working with fewer than 4 stations in our current approach would increase the number of false alerts, for which there is very little tolerance in a useful EEW system. We explore the use of back-azimuth estimations and the Voronoi-based concept of not-yet-arrived data for reducing false alerts in the earliest VS estimates. The concept of not-yet-arrived data was originally used to provide evolutionary location estimates in EEW (Horiuchi, 2005; Cua and Heaton, 2007; Satriano et al. 2008). However, it can also be applied to discriminating between earthquake and non-earthquake signals. For real earthquakes, the
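
    The "not-yet-arrived data" idea can be sketched as a simple feasibility test on a location grid: a candidate epicenter is kept only if its nearest station is one that has already triggered, since otherwise a closer, still-silent station should have seen the P wave first. The station geometry, grid, and constant-velocity assumption below are hypothetical; this is not the VS/ShakeAlert implementation.

    ```python
    import numpy as np

    # Hypothetical station coordinates (km) and trigger status.
    stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0], [60.0, 50.0]])
    triggered = np.array([True, False, False, False])   # only the first station has a P pick

    # Candidate epicenter grid.
    xs, ys = np.meshgrid(np.linspace(-50, 100, 301), np.linspace(-50, 100, 301))
    grid = np.column_stack([xs.ravel(), ys.ravel()])

    # Distance from every grid node to every station.
    d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)

    # A node stays feasible only if its nearest station is one that has triggered;
    # otherwise the P wave should already have been seen at a closer, silent station.
    nearest = d.argmin(axis=1)
    feasible = triggered[nearest]

    print(f"{feasible.mean():.1%} of the grid remains feasible after one trigger")
    ```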

  17. Earthquakes in South Carolina and Vicinity 1698-2009

    USGS Publications Warehouse

    Dart, Richard L.; Talwani, Pradeep; Stevenson, Donald

    2010-01-01

    This map summarizes more than 300 years of South Carolina earthquake history. It is one in a series of three similar State earthquake history maps. The current map and the previous two for Virginia and Ohio are accessible at http://pubs.usgs.gov/of/2006/1017/ and http://pubs.usgs.gov/of/2008/1221/. All three State earthquake maps were collaborative efforts between the U.S. Geological Survey and respective State agencies. Work on the South Carolina map was done in collaboration with the Department of Geological Sciences, University of South Carolina. As with the two previous maps, the history of South Carolina earthquakes was derived from letters, journals, diaries, newspaper accounts, academic journal articles, and, beginning in the early 20th century, instrumental recordings (seismograms). All historical (preinstrumental) earthquakes that were large enough to be felt have been located based on felt reports. Some of these events caused damage to buildings and their contents. The more recent widespread use of seismographs has allowed many smaller earthquakes, previously undetected, to be recorded and accurately located. The seismicity map shows historically located and instrumentally recorded earthquakes in and near South Carolina

  18. Laser-Interferometric Broadband Seismometer for Epicenter Location Estimation

    PubMed Central

    Lee, Kyunghyun; Kwon, Hyungkwan; You, Kwanho

    2017-01-01

    In this paper, we suggest a seismic signal measurement system that uses a laser interferometer. The heterodyne laser interferometer is used as a seismometer because of its high accuracy and robustness. Seismic data measured by the laser interferometer are used to analyze crucial earthquake characteristics. To measure the P-S time more precisely, short-time Fourier transform and instantaneous frequency estimation methods are applied to the intensity signal (Iy) of the laser interferometer. To estimate the epicenter location, a range-difference-of-arrival algorithm is applied with the P-S time result. A linear matrix equation for epicenter localization can be derived using P-S time data obtained from more than three observatories. We demonstrate the performance of the proposed algorithm through simulation and experimental results. PMID:29065515
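
    A minimal sketch of locating an epicenter from P-S (S-minus-P) times: each observatory's P-S time is converted to an epicentral distance with d = t_PS · VpVs/(Vp - Vs), and the distances are fit to the unknown epicenter. The paper formulates this as a linear matrix equation via range differences of arrival; the sketch below uses an equivalent nonlinear least-squares fit instead, with hypothetical velocities and station geometry.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    VP, VS = 6.0, 3.5                                   # assumed crustal velocities (km/s)

    def ps_time_to_distance(t_ps):
        """Epicentral distance implied by a P-S time: d = t_ps * Vp*Vs / (Vp - Vs)."""
        return np.asarray(t_ps) * VP * VS / (VP - VS)

    def locate(stations, t_ps):
        """Least-squares epicenter (2-D, constant velocity) from P-S times at >= 3 stations."""
        dists = ps_time_to_distance(t_ps)
        def residuals(xy):
            return np.linalg.norm(stations - xy, axis=1) - dists
        return least_squares(residuals, x0=stations.mean(axis=0)).x

    # Toy example with a known source at (12, -7) km.
    stations = np.array([[0.0, 0.0], [30.0, 10.0], [-15.0, 25.0], [20.0, -30.0]])
    true_src = np.array([12.0, -7.0])
    t_ps = np.linalg.norm(stations - true_src, axis=1) * (VP - VS) / (VP * VS)
    print(locate(stations, t_ps))                       # recovers approximately [12, -7]
    ```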

  19. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2006

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Searcy, Cheryl

    2008-01-01

    Between January 1 and December 31, 2006, AVO located 8,666 earthquakes of which 7,783 occurred on or near the 33 volcanoes monitored within Alaska. Monitoring highlights in 2006 include: an eruption of Augustine Volcano, a volcanic-tectonic earthquake swarm at Mount Martin, elevated seismicity and volcanic unrest at Fourpeaked Mountain, and elevated seismicity and low-level tremor at Mount Veniaminof and Korovin Volcano. A new seismic subnetwork was installed on Fourpeaked Mountain. This catalog includes: (1) descriptions and locations of seismic instrumentation deployed in the field during 2006, (2) a description of earthquake detection, recording, analysis, and data archival systems, (3) a description of seismic velocity models used for earthquake locations, (4) a summary of earthquakes located in 2006, and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, phase arrival times, location quality statistics, daily station usage statistics, and all files used to determine the earthquake locations in 2006.

  20. Earthquakes in Virginia and vicinity 1774 - 2004

    USGS Publications Warehouse

    Tarr, Arthur C.; Wheeler, Russell L.

    2006-01-01

    This map summarizes two and a third centuries of earthquake activity. The seismic history consists of letters, journals, diaries, and newspaper and scholarly articles that supplement seismograph recordings (seismograms) dating from the early twentieth century to the present. All of the pre-instrumental (historical) earthquakes were large enough to be felt by people or to cause shaking damage to buildings and their contents. Later, widespread use of seismographs meant that tremors too small or distant to be felt could be detected and accurately located. Earthquakes are a legitimate concern in Virginia and parts of adjacent States. Moderate earthquakes cause slight local damage somewhere in the map area about twice a decade on the average. Additionally, many buildings in the map area were constructed before earthquake protection was added to local building codes. The large map shows all historical and instrumentally located earthquakes from 1774 through 2004.

  1. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of

  2. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring

  3. Scaling of seismic memory with earthquake size

    NASA Astrophysics Data System (ADS)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene

    2012-07-01

    It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
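
    Detrended fluctuation analysis itself is straightforward to sketch: integrate the mean-removed series, remove a local linear trend in windows of size s, and read the scaling exponent α from the slope of log F(s) versus log s. This minimal order-1 DFA on synthetic white noise (expected α ≈ 0.5) illustrates only the method, not the authors' processing of the Japanese waveform data.

    ```python
    import numpy as np

    def dfa(x, scales):
        """Order-1 detrended fluctuation analysis; returns F(s) for each window size s."""
        profile = np.cumsum(x - np.mean(x))             # integrated, mean-removed series
        fluct = []
        for s in scales:
            n_win = len(profile) // s
            segs = profile[:n_win * s].reshape(n_win, s)
            t = np.arange(s)
            rms = []
            for seg in segs:
                coeffs = np.polyfit(t, seg, 1)          # local linear trend in this window
                rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            fluct.append(np.sqrt(np.mean(rms)))
        return np.array(fluct)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(10000)                       # white noise: expect alpha ~ 0.5
    scales = np.unique(np.logspace(1.2, 3, 15).astype(int))
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # scaling exponent from log-log slope
    print(f"alpha = {alpha:.2f}")
    ```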

  4. Earthquakes in Ohio and Vicinity 1776-2007

    USGS Publications Warehouse

    Dart, Richard L.; Hansen, Michael C.

    2008-01-01

    This map summarizes two and a third centuries of earthquake activity. The seismic history consists of letters, journals, diaries, and newspaper and scholarly articles that supplement seismograph recordings (seismograms) dating from the early twentieth century to the present. All of the pre-instrumental (historical) earthquakes were large enough to be felt by people or to cause shaking damage to buildings and their contents. Later, widespread use of seismographs meant that tremors too small or distant to be felt could be detected and accurately located. Earthquakes are a legitimate concern in Ohio and parts of adjacent States. Ohio has experienced more than 160 felt earthquakes since 1776. Most of these events caused no damage or injuries. However, 15 Ohio earthquakes resulted in property damage and some minor injuries. The largest historic earthquake in the state occurred in 1937. This event had an estimated magnitude of 5.4 and caused considerable damage in the town of Anna and in several other western Ohio communities. The large map shows all historical and instrumentally located earthquakes from 1776 through 2007.

  5. CyberTEAM Interactive Epicenter Locator Tool

    NASA Astrophysics Data System (ADS)

    Ouyang, Y.; Hayden, K.; Lehmann, M.; Kilb, D.

    2008-12-01

    News coverage showing collapsed buildings, broken bridges and smashed cars helps middle school students visualize the hazardous nature of earthquakes. However, few students understand how scientists investigate earthquakes through analysis of data collected using technology devices from around the world. The important findings by Muawia Barazangi and James Dorman in 1969 revealed how earthquakes charted between 1961 and 1967 delineated narrow belts of seismicity. This important discovery prompted additional research that eventually led to the theory of plate tectonics. When a large earthquake occurs, people from distances near and far can feel it to varying degrees. But how do scientists examine data to identify the locations of earthquake epicenters? The scientific definition of an earthquake: "a movement within the Earth's crust or mantle, caused by the sudden rupture or repositioning of underground material as they release stress" can be confusing for students first studying Earth science in 6th grade. Students struggle with understanding how scientists can tell when and where a rupture occurs, when the inner crust and mantle are not visible to us. Our CyberTEAM project provides 6th grade teachers with the opportunity to engage adolescents in activities that make textbooks come alive as students manipulate the same data that today's scientists use. We have developed an Earthquake Epicenter Location Tool that includes two Flash-based interactive learning objects that can be used to study basic seismology concepts and lets the user determine earthquake epicenters from current data. Through the Wilber II system maintained at the IRIS (Incorporated Research Institutions for Seismology) Web site, this project retrieves seismic data of recent earthquakes and makes them available to the public. Students choose an earthquake to perform further explorations. For each earthquake, a selection of USArray seismic stations is marked on a Google Map. Picking a station on the

  6. Aftershock locations and rupture characteristics of the 2006 May 27, Yogyakarta-Indonesia earthquake

    NASA Astrophysics Data System (ADS)

    Irwan, M.; Ando, M.; Kimata, F.; Tadokoro, K.; Nakamichi, H.; Muto, D.; Okuda, T.; Hasanuddin, A.; Mipi A., K.; Setyadji, B.; Andreas, H.; Gamal, M.; Arif, R.

    2006-12-01

    A strong earthquake (M6.3) rocked the Bantul district, south of Yogyakarta Special Province (DIY), on the morning of May 27, 2006. We installed a temporary array of 6 seismographs to record aftershocks of the earthquake. The area of aftershocks, which may be interpreted as the mainshock rupture area, has dimensions of about 25 km length and 20 km width, in the N48E direction. At depth, the seismicity is mainly concentrated between 5 and 15 km. The distribution of aftershocks does not appear to come very close to the surface. There is no obvious surface evidence of a causative fault in this area, though we find many cracks and fissures that seem to have been produced by the strong ground motion. We used the orientation and size of the fault determined from our aftershock results to carry out an inversion of teleseismic data for the slip distribution. We used broad-band seismograms of the IRIS network with epicentral distances between 30 and 90 degrees. We assume a single fault plane, with strike 48 degrees and dip 80 degrees, which is inferred from the aftershock distribution. The total seismic moment is 0.369 x 10^19 Nm with a maximum slip of 0.4 meters. The asperity is located about 5 km southwest of the USGS-estimated epicenter. Although the distances from the seismic source to the heavily damaged areas of Bantul and Klaten are 10 to 50 km, soft sedimentary soil is likely to have generated very damaging motions within the area.

  7. Integrated Land- and Underwater-Based Sensors for a Subduction Zone Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Rosenberger, A.; Rogers, G. C.; Henton, J.; Lu, Y.; Moore, T.

    2016-12-01

    Ocean Networks Canada (ONC — oceannetworks.ca) operates cabled ocean observatories off the coast of British Columbia (BC) to support research and operational oceanography. Recently, ONC has been funded by the Province of BC to deliver an earthquake early warning (EEW) system that integrates offshore and land-based sensors to deliver alerts of incoming ground shaking from the Cascadia Subduction Zone. ONC's cabled seismic network has the unique advantage of being located offshore on either side of the surface expression of the subduction zone. The proximity of ONC's sensors to the fault can result in faster, more effective warnings, which translates into more lives saved, injuries avoided and more ability for mitigative actions to take place. ONC delivers near real-time data from various instrument types simultaneously, providing distinct advantages for seismic monitoring and earthquake early warning. The EEW system consists of a network of sensors, located on the ocean floor and on land, that detect and analyze the initial P wave of an earthquake as well as the crustal deformation on land during the earthquake sequence. Once the P wave is detected and characterized, software systems correlate the data streams of the various sensors and deliver alerts to clients through a Common Alerting Protocol-compliant data package. This presentation will focus on the development of the earthquake early warning capacity at ONC. It will describe the seismic sensors and their distribution, the P-wave detection algorithms selected and the overall architecture of the system. It will further overview the plan to achieve operational readiness at project completion.

  8. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
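
    The flavor of a likelihood-based forecast comparison can be sketched by scoring gridded forecasts with a per-cell Poisson log-likelihood against observed event counts. The forecasts and observations below are synthetic placeholders (only the cell count, event count, and number of occupied cells echo the abstract); this is not the official RELM/CSEP testing software.

    ```python
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(2)
    n_cells = 7682                                          # number of 0.1-degree cells in the test region

    # Two hypothetical forecasts: expected numbers of M >= 4.95 events per cell over the test period.
    forecast_a = np.full(n_cells, 31 / n_cells)             # spatially uniform forecast
    forecast_b = forecast_a * rng.lognormal(0.0, 1.0, n_cells)  # spatially variable forecast
    forecast_b *= 31 / forecast_b.sum()                     # normalise to the same total rate

    # Hypothetical observation: 31 events scattered over 22 cells.
    observed = np.zeros(n_cells, dtype=int)
    hit_cells = rng.choice(n_cells, size=22, replace=False)
    observed[hit_cells] = rng.multinomial(31, np.ones(22) / 22)

    def joint_log_likelihood(rates, counts):
        """Sum of independent per-cell Poisson log-likelihoods."""
        return poisson.logpmf(counts, rates).sum()

    for name, f in [("uniform", forecast_a), ("variable", forecast_b)]:
        print(name, joint_log_likelihood(f, observed))      # higher (less negative) is better
    ```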

  9. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355

  10. Magnitudes and locations of the 1811-1812 New Madrid, Missouri, and the 1886 Charleston, South Carolina, earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Hopper, M.G.

    2004-01-01

    We estimate locations and moment magnitudes M and their uncertainties for the three largest events in the 1811-1812 sequence near New Madrid, Missouri, and for the 1 September 1886 event near Charleston, South Carolina. The intensity magnitude MI, our preferred estimate of M, is 7.6 for the 16 December 1811 event that occurred in the New Madrid seismic zone (NMSZ) on the Bootheel lineament or on the Blytheville seismic zone. MI is 7.5 for the 23 January 1812 event for a location on the New Madrid north zone of the NMSZ and 7.8 for the 7 February 1812 event that occurred on the Reelfoot blind thrust of the NMSZ. Our preferred locations for these events lie on those NMSZ segments preferred by Johnston and Schweig (1996). Our estimates of M are 0.1-0.4 M units less than those of Johnston (1996b) and 0.3-0.5 M units greater than those of Hough et al. (2000). MI is 6.9 for the 1 September 1886 event for a location at the Summerville-Middleton Place cluster of recent small earthquakes located about 30 km northwest of Charleston.

  11. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.

    PubMed

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-05-31

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observing errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications, the observation stations remain stationary, which can be used as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. First, data from an IGS tracking station were processed using both the traditional and the new PPP algorithms; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.
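
    A toy version of exploiting the static-station prior: average the PPP position time series over a sliding window to suppress noise, then flag the epoch at which the windowed mean departs from its baseline. The window length, threshold, and synthetic 1 Hz "up" component below are hypothetical; this is not the authors' PPP filter.

    ```python
    import numpy as np

    def sliding_window_mean(series, window):
        """Running mean over `window` consecutive epochs (the station is assumed static)."""
        kernel = np.ones(window) / window
        return np.convolve(series, kernel, mode="valid")

    def detect_offset(series, window=60, threshold=0.05):
        """Index of the first epoch where the windowed mean departs from its baseline."""
        smoothed = sliding_window_mean(series, window)
        baseline = smoothed[:window].mean()               # assumes the record starts quietly
        exceed = np.flatnonzero(np.abs(smoothed - baseline) > threshold)
        return None if exceed.size == 0 else int(exceed[0] + window - 1)

    # Toy 1 Hz "up" component: cm-level noise, then a 10 cm co-seismic offset at epoch 1800.
    rng = np.random.default_rng(3)
    up = 0.02 * rng.standard_normal(3600)
    up[1800:] += 0.10
    print("offset detected near epoch", detect_offset(up))
    ```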

  12. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite-fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extending > 150 km NW from the hypocenter, longer than the slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show that the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  13. CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.

    2017-12-01

    We developed an automatic local earthquake detection and phase picking algorithm based on a fully convolutional neural network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equivalent in size to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm shows significantly improved performance in terms of detection rate and precision in comparison with the STA/LTA and template matching algorithms.
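
    The STA/LTA detector used to seed the training set is a standard ratio of short-term to long-term average energy; a minimal NumPy version is sketched below (centered windows are used for brevity, whereas operational detectors typically use trailing windows). The window lengths, trigger threshold, and toy trace are assumptions, not the authors' settings.

    ```python
    import numpy as np

    def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
        """Classic STA/LTA ratio computed on the squared trace (characteristic function)."""
        cf = trace ** 2
        nsta, nlta = int(sta_win * fs), int(lta_win * fs)
        sta = np.convolve(cf, np.ones(nsta) / nsta, mode="same")
        lta = np.convolve(cf, np.ones(nlta) / nlta, mode="same")
        lta[lta == 0] = np.finfo(float).tiny            # guard against division by zero
        return sta / lta

    # Toy trace: background noise plus a 3 s burst of 5 Hz signal, sampled at 100 Hz.
    rng = np.random.default_rng(4)
    fs = 100.0
    trace = rng.standard_normal(int(60 * fs))
    trace[3000:3300] += 5.0 * np.sin(2 * np.pi * 5.0 * np.arange(300) / fs)

    ratio = sta_lta(trace, fs)
    triggers = np.flatnonzero(ratio > 4.0)               # samples exceeding the trigger threshold
    if triggers.size:
        print(f"first sample above the trigger threshold: t = {triggers[0] / fs:.2f} s")
    ```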

  14. Cluster-search based monitoring of local earthquakes in SeisComP3

    NASA Astrophysics Data System (ADS)

    Roessler, D.; Becker, J.; Ellguth, E.; Herrnkind, S.; Weber, B.; Henneberger, R.; Blanck, H.

    2016-12-01

    We present a new cluster-search based SeisComP3 module for locating local and regional earthquakes in real time. Real-time earthquake monitoring systems such as SeisComP3 provide the backbone for earthquake early warning (EEW), tsunami early warning (TEW) and the rapid assessment of natural and induced seismicity. For any earthquake monitoring system, fast and accurate event locations are fundamental in determining the reliability and the impact of further analysis. The open-source version of SeisComP3 includes a two-stage detector for picking P waves and a phase associator for locating earthquakes based on P-wave detections. scanloc is a more advanced earthquake location program developed by gempa GmbH with seamless integration into SeisComP3. scanloc performs an advanced cluster search to discriminate earthquakes occurring closely in space and time and makes additional use of S-wave detections. It has proven to provide fast and accurate earthquake locations at local and regional distances, where it outperforms the base SeisComP3 tools. We demonstrate the performance of scanloc for monitoring induced seismicity as well as local and regional earthquakes in different tectonic regimes including subduction, spreading and intra-plate regions. In particular, we present examples and catalogs from real-time monitoring of earthquakes in northern Chile in recent years, based on data from the IPOC network of the GFZ German Research Centre for Geosciences. Depending on epicentral distance and data transmission, earthquake locations are available within a few seconds after origin time when using scanloc. The association of automatic S-wave detections provides a better constraint on focal depth.

  15. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The problems of monitoring, prediction and prevention of extraordinary natural and technogenic events are among the priority problems of today. These events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and the numerous quarry blasts in open-pit coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of parameter estimation from the original records under high noise is an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecise timing at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides and for the problem of monitoring the location of a borehole seismic source during drilling.

  16. Mexican Earthquakes and Tsunamis Catalog Reviewed

    NASA Astrophysics Data System (ADS)

    Ramirez-Herrera, M. T.; Castillo-Aja, R.

    2015-12-01

    Today, the availability of information on the internet makes online catalogs easy to access for both scholars and the general public. The catalog in the "Significant Earthquake Database", managed by the National Center for Environmental Information (NCEI, formerly NCDC), NOAA, allows access by deploying tabular and cartographic data related to earthquakes and tsunamis contained in the database. The NCEI catalog is the product of compiling previously existing catalogs, historical sources, newspapers, and scientific articles. Because the NCEI catalog has global coverage, the information is not homogeneous. The existence of historical information depends on the presence of people in the places where a disaster occurred and on the descriptions being preserved in documents and oral tradition. In the case of instrumental data, availability depends on the distribution and quality of seismic stations. Therefore, the availability of information for the first half of the 20th century can be improved by careful analysis of the available information and by searching for and resolving inconsistencies. This study shows the advances we have made in upgrading and refining the earthquake and tsunami catalog of Mexico from 1500 CE to today, presented as a table and a map. Data analysis allowed us to identify the following sources of error in the epicenter locations of existing catalogs: • incorrect coordinate entry • erroneous or mistaken place names • data too general to locate the epicenter, mainly for older earthquakes • inconsistency between earthquake and tsunami occurrence, such as an epicenter located too far inland being reported as tsunamigenic. The process of completing the catalogs directly depends on the availability of information; as new archives are opened for inspection, there are more opportunities to complete the history of large earthquakes and tsunamis in Mexico. Here, we also present new earthquake and

  17. Studies of earthquakes and microearthquakes using near-field seismic and geodetic observations

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas Bartholomew

    The Centroid-Moment Tensor (CMT) method allows an optimal point-source description of an earthquake to be recovered from a set of seismic observations and, for over 30 years, has been routinely applied to determine the location and source mechanism of teleseismically recorded earthquakes. The CMT approach is, however, entirely general: any measurements of seismic displacement fields could, in theory, be used within the CMT inversion formulation, so long as the treatment of the earthquake as a point source is valid for those data. We modify the CMT algorithm to enable a variety of near-field seismic observables to be inverted for the source parameters of an earthquake. The first two data types that we implement are provided by Global Positioning System receivers operating at sampling frequencies of 1 Hz and above. When deployed in the seismic near field, these instruments may be used as long-period strong-motion seismometers, recording displacement time series that include the static offset. We show that both the displacement waveforms, and the static displacements alone, can be used to obtain CMT solutions for moderate-magnitude earthquakes, and that performing analyses using these data may be useful for earthquake early warning. We also investigate using waveform recordings - made by conventional seismometers deployed at the surface, or by geophone arrays placed in boreholes - to determine CMT solutions, and their uncertainties, for microearthquakes induced by hydraulic fracturing. A similar waveform inversion approach could be applied in many other settings where induced seismicity and microseismicity occur.
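
    At the core of a CMT-style inversion, for a fixed candidate location and origin time, the six moment-tensor components enter the data linearly, d = Gm, and can be estimated by least squares; a grid search over location and time then keeps the best-fitting source. The sketch below uses random synthetic "Green's functions" purely to show the algebra; it is not the modified CMT code described in the thesis.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic "Green's functions": each column is the response of all data samples to one
    # of the 6 independent moment-tensor components (a real study would compute these for
    # an assumed source location, depth, and Earth model).
    n_samples, n_mt = 2000, 6
    G = rng.standard_normal((n_samples, n_mt))

    # True moment tensor (Mxx, Myy, Mzz, Mxy, Mxz, Myz) and noisy synthetic data.
    m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.1, -0.2])
    d = G @ m_true + 0.05 * rng.standard_normal(n_samples)

    # Least-squares moment tensor for this candidate source position.
    m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
    print(np.round(m_est, 2))

    # A CMT-style search would repeat this inversion over a grid of candidate locations and
    # origin times, keeping the source that minimises the residual ||d - G m||.
    ```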

  18. Istanbul Earthquake Early Warning and Rapid Response System

    NASA Astrophysics Data System (ADS)

    Erdik, M. O.; Fahjan, Y.; Ozel, O.; Alcik, H.; Aydin, M.; Gul, M.

    2003-12-01

    As part of the preparations for a future earthquake in Istanbul, a Rapid Response and Early Warning system is in operation in the metropolitan area. For the Early Warning system, ten strong-motion stations were installed as close as possible to the fault zone. Continuous on-line data from these stations via digital radio modem provide early warning for potentially disastrous earthquakes. Considering the complexity of fault rupture and the short fault distances involved, a simple and robust Early Warning algorithm, based on the exceedance of specified threshold time-domain amplitude levels, is implemented. The band-pass filtered accelerations and the cumulative absolute velocity (CAV) are compared with specified threshold levels. When any acceleration or CAV (on any channel) at a given station exceeds its specific threshold value, it is considered a vote. Whenever 2 station votes occur within a selectable time interval after the first vote, the first alarm is declared. In order to specify the appropriate threshold levels, a data set of near-field strong ground motion records from Turkey and the world has been analyzed. Correlations among these thresholds in terms of the epicentral distance and the magnitude of the earthquake have been studied. The encrypted early warning signals will be communicated to the respective end users by UHF systems through a "service provider" company. The users of the early warning signal will be power and gas companies, nuclear research facilities, critical chemical factories, the subway system and several high-rise buildings. Depending on the location of the earthquake (initiation of fault rupture) and the recipient facility, the alarm time can be as long as about 8 s. For the rapid response system, one hundred 18-bit-resolution strong-motion accelerometers were placed at quasi-free-field locations (basements of small buildings) in the populated areas of the city, within an area of approximately 50x30 km, to constitute a network that will enable early
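
    The threshold-and-vote logic can be sketched directly: compute the running cumulative absolute velocity CAV(t) = ∫|a| dt for each station, register a vote when either the acceleration or the CAV exceeds its threshold, and declare an alarm when two stations vote within a short interval. The thresholds, sampling, and toy records below are arbitrary placeholders, not the Istanbul system's calibrated values.

    ```python
    import numpy as np

    def cumulative_absolute_velocity(acc, dt):
        """CAV(t) = integral of |a| dt, evaluated as a running sum."""
        return np.cumsum(np.abs(acc)) * dt

    def first_vote_time(acc, dt, acc_thresh=0.2, cav_thresh=0.1):
        """Time at which either |a| or CAV first exceeds its threshold (None if never)."""
        cav = cumulative_absolute_velocity(acc, dt)
        hit = np.flatnonzero((np.abs(acc) > acc_thresh) | (cav > cav_thresh))
        return hit[0] * dt if hit.size else None

    def alarm(vote_times, window=5.0, n_votes=2):
        """Declare an alarm when n_votes stations vote within `window` seconds."""
        times = sorted(t for t in vote_times if t is not None)
        for i in range(len(times) - n_votes + 1):
            if times[i + n_votes - 1] - times[i] <= window:
                return times[i + n_votes - 1]
        return None

    # Toy example: three stations sampled at 100 Hz; two of them record strong shaking.
    rng = np.random.default_rng(6)
    dt = 0.01
    records = [0.001 * rng.standard_normal(3000) for _ in range(3)]
    records[0][500:800] += 0.5    # station 1 shakes at t = 5 s
    records[1][700:1000] += 0.5   # station 2 shakes at t = 7 s
    votes = [first_vote_time(r, dt) for r in records]
    print("alarm declared at", alarm(votes), "s")
    ```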

  19. Earthquake in Hindu Kush Region, Afghanistan

    NASA Image and Video Library

    2015-10-27

    On Oct. 26, 2015, NASA's Terra spacecraft acquired this image of northeastern Afghanistan, where a magnitude 7.5 earthquake struck the Hindu Kush region. The earthquake's hypocenter was at a depth of 130 miles (210 kilometers), on a probable shallowly dipping thrust fault. At this location, the Indian subcontinent moves northward and collides with Eurasia, subducting under the Asian continent and raising the highest mountains in the world. This type of earthquake is common in the area: a similar earthquake occurred 13 years ago about 12 miles (20 kilometers) away. This perspective image from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA's Terra spacecraft, looking southwest, shows the hypocenter with a star. The image was acquired July 8, 2015, and is located near 36.4 degrees north, 70.7 degrees east. http://photojournal.jpl.nasa.gov/catalog/PIA20035

  20. Seismotectonic framework of the 2010 February 27 Mw 8.8 Maule, Chile earthquake sequence

    USGS Publications Warehouse

    Hayes, Gavin P.; Bergman, Eric; Johnson, Kendra J.; Benz, Harley M.; Brown, Lucy; Meltzer, Anne S.

    2013-01-01

    After the 2010 Mw 8.8 Maule earthquake, an international collaboration involving teams and instruments from Chile, the US, the UK, France and Germany established the International Maule Aftershock Deployment temporary network over the source region of the event to facilitate detailed, open-access studies of the aftershock sequence. Using data from the first 9-months of this deployment, we have analyzed the detailed spatial distribution of over 2500 well-recorded aftershocks. All earthquakes have been relocated using a hypocentral decomposition algorithm to study the details of and uncertainties in both their relative and absolute locations. We have computed regional moment tensor solutions for the largest of these events to produce a catalogue of 465 mechanisms, and have used all of these data to study the spatial distribution of the aftershock sequence with respect to the Chilean megathrust. We refine models of co-seismic slip distribution of the Maule earthquake, and show how small changes in fault geometries assumed in teleseismic finite fault modelling significantly improve fits to regional GPS data, implying that the accuracy of rapid teleseismic fault models can be substantially improved by consideration of existing fault geometry model databases. We interpret all of these data in an integrated seismotectonic framework for the Maule earthquake rupture and its aftershock sequence, and discuss the relationships between co-seismic rupture and aftershock distributions. While the majority of aftershocks are interplate thrust events located away from regions of maximum co-seismic slip, interesting clusters of aftershocks are identified in the lower plate at both ends of the main shock rupture, implying internal deformation of the slab in response to large slip on the plate boundary interface. We also perform Coulomb stress transfer calculations to compare aftershock locations and mechanisms to static stress changes following the Maule rupture. Without the

  1. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    NASA Astrophysics Data System (ADS)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

    The location of a CDC within a supply chain network has become a matter of high concern in recent years. Existing approaches to CDC location have relied mainly on manual spreadsheet calculations aimed at minimizing logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of the present methods, and on examining the suitability of this algorithm through a case study. The suggested algorithm is based on optimization over a directed graph of the SCM model and makes use of classical techniques such as minimum spanning trees (MST) and shortest-path methods. The results of this study help assess the suitability of the existing SCM network and can serve as a criterion in the decision-making process for building an optimal SCM network for projected future demand.
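
    One of the graph primitives the algorithm builds on, shortest paths over a directed, weighted SCM graph, can be sketched compactly with Dijkstra's algorithm; the node names and unit costs below are hypothetical, and this is not the authors' full CDC-location procedure.

    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest-path costs from `source` in a directed graph {node: {neighbour: cost}}."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                                  # stale heap entry
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Hypothetical SCM network: factory -> candidate CDCs -> retail demand points (unit costs).
    network = {
        "factory": {"cdc_A": 4.0, "cdc_B": 6.0},
        "cdc_A":   {"retail_1": 3.0, "retail_2": 7.0},
        "cdc_B":   {"retail_1": 5.0, "retail_2": 2.0},
    }
    print(dijkstra(network, "factory"))   # cheapest delivered cost to every node
    ```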

  2. Earthquake locations determined by the Southern Alaska seismograph network for October 1971 through May 1989

    USGS Publications Warehouse

    Fogleman, Kent A.; Lahr, John C.; Stephens, Christopher D.; Page, Robert A.

    1993-01-01

    This report describes the instrumentation and evolution of the U.S. Geological Survey’s regional seismograph network in southern Alaska, provides phase and hypocenter data for seismic events from October 1971 through May 1989, reviews the location methods used, and discusses the completeness of the catalog and the accuracy of the computed hypocenters. Included are arrival time data for explosions detonated under the Trans-Alaska Crustal Transect (TACT) in 1984 and 1985.The U.S. Geological Survey (USGS) operated a regional network of seismographs in southern Alaska from 1971 to the mid 1990s. The principal purpose of this network was to record seismic data to be used to precisely locate earthquakes in the seismic zones of southern Alaska, delineate seismically active faults, assess seismic risks, document potential premonitory earthquake phenomena, investigate current tectonic deformation, and study the structure and physical properties of the crust and upper mantle. A task fundamental to all of these goals was the routine cataloging of parameters for earthquakes located within and adjacent to the seismograph network.The initial network of 10 stations, 7 around Cook Inlet and 3 near Valdez, was installed in 1971. In subsequent summers additions or modifications to the network were made. By the fall of 1973, 26 stations extended from western Cook Inlet to eastern Prince William Sound, and 4 stations were located to the east between Cordova and Yakutat. A year later 20 additional stations were installed. Thirteen of these were placed along the eastern Gulf of Alaska with support from the National Oceanic and Atmospheric Administration (NOAA) under the Outer Continental Shelf Environmental Assessment Program to investigate the seismicity of the outer continental shelf, a region of interest for oil exploration. Since then the region covered by the network remained relatively fixed while efforts were made to make the stations more reliable through improved electronic

  3. Accounts of damage from historical earthquakes in the northeastern Caribbean to aid in the determination of their location and intensity magnitudes

    USGS Publications Warehouse

    Flores, Claudia H.; ten Brink, Uri S.; Bakun, William H.

    2012-01-01

    Documentation of an event in the past depended on the population and political trends of the island, and the availability of historical documents is limited by the physical resource digitization schedule and by the copyright laws of each archive. Examples of documents accessed are governors' letters, newspapers, and other circulars published within the Caribbean, North America, and Western Europe. Key words were used to search for publications that contain eyewitness accounts of various large earthquakes. Finally, this catalog provides descriptions of damage to buildings used in previous studies for the estimation of moment intensity (MI) and location of significantly damaging or felt earthquakes in Hispaniola and in the northeastern Caribbean, all of which have been described in other studies.

  4. Global Instrumental Seismic Catalog: earthquake relocations for 1900-present

    NASA Astrophysics Data System (ADS)

    Villasenor, A.; Engdahl, E.; Storchak, D. A.; Bondar, I.

    2010-12-01

    We present the current status of our efforts to produce a set of homogeneous earthquake locations and improved focal depths towards the compilation of a Global Catalog of instrumentally recorded earthquakes that will be complete down to the lowest magnitude threshold possible on a global scale and for the time period considered. This project is currently being carried out under the auspices of GEM (Global Earthquake Model). The resulting earthquake catalog will be a fundamental dataset not only for earthquake risk modeling and assessment on a global scale, but also for a large number of studies such as global and regional seismotectonics; the rupture zones and return times of large, damaging earthquakes; the spatial-temporal pattern of moment release along seismic zones and faults; etc. Our current goal is to relocate all earthquakes with available station arrival data using the following magnitude thresholds: M5.5 for 1964-present, M6.25 for 1918-1963, and M7.5 (complemented with significant events in continental regions) for 1900-1917. Phase arrival time data for earthquakes after 1963 are available in digital form from the International Seismological Centre (ISC). For earthquakes in the time period 1918-1963, phase data are obtained by scanning the printed International Seismological Summary (ISS) bulletins and applying optical character recognition routines. For earlier earthquakes we will collect phase data from individual station bulletins. We will illustrate some of the most significant results of this relocation effort, including aftershock distributions for large earthquakes, systematic differences in epicenter and depth with respect to previous locations, examples of grossly mislocated events, etc.

  5. Comparison of four moderate-size earthquakes in southern California using seismology and InSAR

    USGS Publications Warehouse

    Mellors, R.J.; Magistrale, H.; Earle, P.; Cogbill, A.H.

    2004-01-01

    Source parameters determined from interferometric synthetic aperture radar (InSAR) measurements and from seismic data are compared from four moderate-size (less than M 6) earthquakes in southern California. The goal is to verify approximate detection capabilities of InSAR, assess differences in the results, and test how the two results can be reconciled. First, we calculated the expected surface deformation from all earthquakes greater than magnitude 4 in areas with available InSAR data (347 events). A search for deformation from the events in the interferograms yielded four possible events with magnitudes less than 6. The search for deformation was based on a visual inspection as well as cross-correlation in two dimensions between the measured signal and the expected signal. A grid-search algorithm was then used to estimate focal mechanism and depth from the InSAR data. The results were compared with locations and focal mechanisms from published catalogs. An independent relocation using seismic data was also performed. The seismic locations fell within the area of the expected rupture zone for the three events that show clear surface deformation. Therefore, the technique shows the capability to resolve locations with high accuracy and is applicable worldwide. The depths determined by InSAR agree with well-constrained seismic locations determined in a 3D velocity model. Depth control for well-imaged shallow events using InSAR data is good, and better than the seismic constraints in some cases. A major difficulty for InSAR analysis is the poor temporal coverage of InSAR data, which may make it impossible to distinguish deformation due to different earthquakes at the same location.

  6. Multiscale heterogeneity of the 2011 Tohoku-oki earthquake by inversion

    NASA Astrophysics Data System (ADS)

    Aochi, H.; Ulrich, T.; Cornier, G.

    2012-12-01

    Earthquake fault heterogeneity is often studied on a set of subfaults in kinematic inversion, while it is sometimes described with spatially localized geometry. Aochi and Ide (EPS, 2011) and Ide and Aochi (submitted to Pageoph and AGU, 2012) apply a concept of multi-scale heterogeneity to simulate the dynamic rupture process of the 2011 Tohoku-oki earthquake, introducing circular patches of different dimensions in the fault fracture energy distribution. Previously, the patches were given by past moderate earthquakes in this region, and this seems to be consistent with the evolution process of this mega earthquake, although a few patches, in particular the largest patch, had not been known previously. In this study, we try to identify the patches by inversion. As demonstrated for several earthquakes, including the 2010 Maule (M8.8) earthquake, it is possible to identify two elliptical asperities kinematically or dynamically (e.g. Ruiz and Madariaga, 2011, and so on). In those successful examples, the different asperities are clearly visible and separated in space. However, the Tohoku-oki earthquake has a hierarchical structure of heterogeneity. We apply a Genetic Algorithm to invert for the model parameters from the ground motions (K-NET and KiK-net from NIED) and the high-sampling GPS (GSI). Starting from low frequency ranges (> 50 seconds), we obtain an ellipse corresponding to an M9 event located around the hypocenter, consistent with the previous result by Madariaga et al. (pers. comm.). However, it is difficult to identify the second, smaller patch with so few constraints. This is mainly because the largest patch covers the entire rupture area and any smaller patch improves the fitting only for the closer stations. Again, this calls for introducing the multi-scale concept into the inversion procedure. Instead of finding the largest patch first, we have to start by extracting smaller, moderate patches from the beginning of the record, following the rupture process.
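
    A real-coded genetic algorithm of the kind referred to here can be sketched on a toy problem: fit the parameters of a single Gaussian "patch" to noisy synthetic data by tournament selection, blend crossover, and Gaussian mutation. The forward model, bounds, and GA settings below are all hypothetical; the actual study inverts strong-motion and GPS records for elliptical patches.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy forward model: a Gaussian "slip patch" observed along a 1-D profile.
    x_obs = np.linspace(0.0, 100.0, 200)
    def forward(params):
        centre, width, amp = params
        return amp * np.exp(-((x_obs - centre) ** 2) / (2 * width ** 2))

    true_params = np.array([42.0, 8.0, 1.5])
    data = forward(true_params) + 0.05 * rng.standard_normal(x_obs.size)

    bounds = np.array([[0.0, 100.0], [1.0, 30.0], [0.0, 5.0]])   # search range per parameter

    def misfit(params):
        return np.sum((forward(params) - data) ** 2)

    # Real-coded GA: elitism, tournament selection, blend crossover, Gaussian mutation.
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(100, 3))
    for _ in range(200):
        fitness = np.array([misfit(p) for p in pop])
        new_pop = [pop[fitness.argmin()].copy()]                 # keep the best individual
        while len(new_pop) < len(pop):
            parents = []
            for _ in range(2):                                   # tournament selection
                i, j = rng.integers(len(pop), size=2)
                parents.append(pop[i] if fitness[i] < fitness[j] else pop[j])
            w = rng.uniform(size=3)                              # blend crossover
            child = w * parents[0] + (1 - w) * parents[1]
            child += rng.normal(0.0, 0.02 * (bounds[:, 1] - bounds[:, 0]))  # mutation
            new_pop.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.array(new_pop)

    best = pop[np.array([misfit(p) for p in pop]).argmin()]
    print(np.round(best, 2))   # should approach [42, 8, 1.5]
    ```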

  7. 2016 update on induced earthquakes in the United States

    USGS Publications Warehouse

    Petersen, Mark D.

    2016-01-01

    During the past decade people living in numerous locations across the central U.S. experienced many more small to moderate sized earthquakes than ever before. This earthquake activity began increasing about 2009 and peaked during 2015 and into early 2016. For example, prior to 2009 Oklahoma typically experienced 1 or 2 small earthquakes per year with magnitude greater than 3.0 but by 2015 this number rose to over 900 earthquakes per year of that size and over 30 earthquakes greater than 4.0. These earthquakes can cause damage. In 2011 a magnitude 5.6 earthquake struck near the town of Prague, Oklahoma on a preexisting fault and caused severe damage to several houses and school buildings. During the past 6 years more than 1500 reports of damaging shaking levels were reported in areas of induced seismicity. This rapid increase and the potential for damaging ground shaking from induced earthquakes caused alarm to about 8 million people living nearby and officials responsible for public safety. They wanted to understand why earthquakes were increasing and the potential threats to society and buildings located nearby.

  8. A consistent and uniform research earthquake catalog for the AlpArray region: preliminary results.

    NASA Astrophysics Data System (ADS)

    Molinari, I.; Bagagli, M.; Kissling, E. H.; Diehl, T.; Clinton, J. F.; Giardini, D.; Wiemer, S.

    2017-12-01

    The AlpArray initiative (www.alparray.ethz.ch) is a large-scale European collaboration (about 50 institutes involved) to study the entire Alpine orogen at high resolution with a variety of geoscientific methods. AlpArray provides unprecedentedly uniform station coverage for the region with more than 650 broadband seismic stations, 300 of which are temporary. The AlpArray Seismic Network (AASN) is a joint effort of 25 institutes from 10 nations; it has operated since January 2016 and is expected to continue until the end of 2018. In this study, we establish a uniform earthquake catalogue for the Greater Alpine region during the operation period of the AASN, with an aimed completeness of M2.5. The catalog has two main goals: 1) calculation of consistent and precise hypocenter locations and 2) provision of preliminary but uniform magnitude estimates across the region. The procedure is based on automatic high-quality P- and S-wave pickers, providing consistent phase arrival times in combination with a picking quality assessment. First, we detect all events in the region in 2016/2017 using an STA/LTA based detector. Among the detected events, we select about 50 geographically homogeneously distributed events with magnitudes ≥2.5, representative of the entire catalog. We manually pick the selected events to establish a consistent P- and S-phase reference data set, including arrival-time uncertainties. The reference data are used to adjust the automatic pickers and to assess their performance. In a first iteration, a simple P-picker algorithm is applied to the entire dataset, providing initial picks for the advanced MannekenPix (MPX) algorithm. In a second iteration, the MPX picker provides consistent and reliable automatic first-arrival P picks together with a pick-quality estimate. The derived automatic P picks are then used as initial values for a multi-component S-phase picking algorithm. Subsequently, automatic picks of all well-locatable earthquakes will be considered to calculate

  9. Seismicity and seismic hazard in Sabah, East Malaysia from earthquake and geodetic data

    NASA Astrophysics Data System (ADS)

    Gilligan, A.; Rawlinson, N.; Tongkul, F.; Stephenson, R.

    2017-12-01

    While the levels of seismicity are low in most of Malaysia, the state of Sabah in northern Borneo has moderate levels of seismicity. Notable earthquakes in the region include the 1976 M6.2 Lahad Datu earthquake and the 2015 M6 Ranau earthquake. The recent Ranau earthquake resulted in the deaths of 18 people on Mt Kinabalu, an estimated 100 million RM (about US$23 million) in damage to buildings, roads, and infrastructure from shaking, and flooding, reduced water quality, and damage to farms from landslides. Over the last 40 years the population of Sabah has increased to over four times what it was in 1976, yet seismic hazard in Sabah remains poorly understood. Using seismic and geodetic data we hope to better quantify the hazards posed by earthquakes in Sabah, and thus help to minimize risk. In order to do this we need to know about the locations of earthquakes, the types of earthquakes that occur, and the faults that are generating them. We use data from 15 MetMalaysia seismic stations currently operating in Sabah to develop a region-specific velocity model from receiver functions and a pre-existing surface wave model. We use this new velocity model to (re)locate earthquakes that occurred in Sabah from 2005-2016, including a large number of aftershocks from the 2015 Ranau earthquake. We use a probabilistic nonlinear earthquake location program to locate the earthquakes and then refine their relative locations using a double difference method. The recorded waveforms are further used to obtain moment tensor solutions for these earthquakes. Earthquake locations and moment tensor solutions are then compared with the locations of faults throughout Sabah. Faults are identified from high-resolution IFSAR images and subsequent fieldwork, with a particular focus on the Lahad Datu and Ranau areas. Used together, these seismic and geodetic data can help us to develop a new seismic hazard model for Sabah, as well as aiding in the delivery of outreach activities regarding seismic hazard

  10. Surface-Wave Relocation of Remote Continental Earthquakes

    NASA Astrophysics Data System (ADS)

    Kintner, J. A.; Ammon, C. J.; Cleveland, M.

    2017-12-01

    Accurate hypocenter locations are essential for seismic event analysis. Single-event location estimation methods provide relatively imprecise results in remote regions with few nearby seismic stations. Previous work has demonstrated that improved relative epicentroid precision in oceanic environments is obtainable using surface-wave cross correlation measurements. We use intermediate-period regional and teleseismic Rayleigh and Love waves to estimate relative epicentroid locations of moderately-sized seismic events in regions around Iran. Variations in faulting geometry, depth, and intermediate-period dispersion make surface-wave based event relocation challenging across this broad continental region. We compare and integrate surface-wave based relative locations with InSAR centroid location estimates. However, mapping an earthquake sequence mainshock to an InSAR fault deformation model centroid is not always a simple process, since the InSAR observations are sensitive to post-seismic deformation. We explore these ideas using earthquake sequences in western Iran. We also apply surface-wave relocation to smaller magnitude earthquakes (3.5 < M < 5.0). Inclusion of smaller-magnitude seismic events in a relocation effort requires a shift in bandwidth to shorter periods, which increases the sensitivity of relocations to surface-wave dispersion. Frequency-domain inter-event phase observations are used to understand the time-domain cross-correlation information, and to choose the appropriate band for applications using shorter periods. Over short inter-event distances, the changing group velocity does not strongly degrade the relative locations. For small-magnitude seismic events in continental regions, surface-wave relocation does not appear simple enough to allow broad routine application, but using this method to analyze individual earthquake sequences can provide valuable insight into earthquake and faulting processes.
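
    The core measurement behind this kind of relocation, the relative time shift between two events' surface waves recorded at a common station, can be sketched as below. The synthetic wavelet, sampling rate and normalization are assumptions for illustration; the actual study works with band-passed Rayleigh and Love waves and inverts many such inter-event measurements for relative epicentroids.

```python
import numpy as np

def relative_time_shift(wave_a, wave_b, fs):
    """Return the delay (s) of wave_b relative to wave_a at the peak of their
    normalized cross-correlation, together with the peak correlation value."""
    a = wave_a - wave_a.mean()
    a /= np.linalg.norm(a)
    b = wave_b - wave_b.mean()
    b /= np.linalg.norm(b)
    cc = np.correlate(b, a, mode="full")
    lag = np.argmax(cc) - (len(a) - 1)
    return lag / fs, cc.max()

# Toy check: a 20 s period Rayleigh-like wavelet shifted by 3.4 s between "events".
fs = 10.0
t = np.arange(0, 400.0, 1.0 / fs)

def wavelet(t0):
    return np.exp(-((t - t0) / 40.0) ** 2) * np.sin(2 * np.pi * (t - t0) / 20.0)

shift, peak_cc = relative_time_shift(wavelet(200.0), wavelet(203.4), fs)
print("delay = %+.1f s, peak CC = %.2f" % (shift, peak_cc))   # ~ +3.4 s here
```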

  11. Earthquakes in the Central United States, 1699-2010

    USGS Publications Warehouse

    Dart, Richard L.; Volpi, Christina M.

    2010-01-01

    This publication is an update of an earlier report, U.S. Geological Survey (USGS) Geologic Investigation I-2812 by Wheeler and others (2003), titled "Earthquakes in the Central United States, 1699-2002." Like the original poster, the center of the updated poster is a map showing the pattern of earthquake locations in the most seismically active part of the central United States. Arrayed around the map are short explanatory texts and graphics, which describe the distribution of historical earthquakes and the effects of the most notable of them. The updated poster contains additional, post-2002 earthquake data. These are 38 earthquakes covering the time interval from January 2003 to June 2010, including the Mount Carmel, Illinois, earthquake of 2008. The USGS Preliminary Determination of Epicenters (PDE) was the source of these additional data. Like the I-2812 poster, this poster was prepared for a nontechnical audience and designed to inform the general public as to the widespread occurrence of felt and damaging earthquakes in the Central United States. Accordingly, the poster should not be used to assess earthquake hazard in small areas or at individual locations.

  12. Applying time-reverse-imaging techniques to locate individual low-frequency earthquakes on the San Andreas fault near Cholame, California

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E.; Shelly, D. R.

    2013-12-01

    Observations of non-volcanic tremor have become ubiquitous in recent years. In spite of the abundance of observations, locating tremor remains a difficult task because of the lack of distinctive phase arrivals. Here we use time-reverse-imaging techniques that do not require identifying phase arrivals to locate individual low-frequency earthquakes (LFEs) within tremor episodes on the San Andreas fault near Cholame, California. Time windows of 1.5-second duration containing LFEs are selected from continuously recorded waveforms of the local seismic network filtered between 1 and 5 Hz. We propagate the time-reversed seismic signal back through the subsurface using a staggered-grid finite-difference code. Assuming all rebroadcasted waveforms result from similar wave fields at the source origin, we search for wave field coherence in time and space to obtain the source location and origin time where the constructive interference is a maximum. We use an interpolated velocity model with a grid spacing of 100 m and a 5 ms time step to calculate the relative curl field energy amplitudes for each rebroadcasted seismogram every 50 ms for each grid point in the model. Finally, we perform a grid search for coherency in the curl field using a sliding time window, taking the absolute value of the correlation coefficient to account for differences in radiation pattern. The highest median cross-correlation coefficient value at a given grid point indicates the source location for the rebroadcasted event. Horizontal location errors based on the spatial extent of the highest 10% of cross-correlation coefficients are on the order of 4 km, and vertical errors on the order of 3 km. Furthermore, a test of the method using earthquake data shows that the method produces an identical hypocentral location (within errors) to that obtained by standard ray-tracing methods. We also compare the event locations to an LFE catalog that locates the LFEs from stacked waveforms of repeated LFEs

  13. Earthquake triggering at Alaskan volcanoes following the 3 November 2002 Denali fault earthquake

    USGS Publications Warehouse

    Moran, S.C.; Power, J.A.; Stihler, S.D.; Sanchez, J.J.; Caplan-Auerbach, J.

    2004-01-01

    The 3 November 2002 Mw 7.9 Denali fault earthquake provided an excellent opportunity to investigate triggered earthquakes at Alaskan volcanoes. The Alaska Volcano Observatory operates short-period seismic networks on 24 historically active volcanoes in Alaska, 247-2159 km distant from the mainshock epicenter. We searched for evidence of triggered seismicity by examining the unfiltered waveforms for all stations in each volcano network for ~1 hr after the Mw 7.9 arrival time at each network and for significant increases in located earthquakes in the hours after the mainshock. We found compelling evidence for triggering only at the Katmai volcanic cluster (KVC, 720-755 km southwest of the epicenter), where small earthquakes with distinct P and S arrivals appeared within the mainshock coda at one station and a small increase in located earthquakes occurred for several hours after the mainshock. Peak dynamic stresses of ~0.1 MPa at Augustine Volcano (560 km southwest of the epicenter) are significantly lower than those recorded in Yellowstone and Utah (>3000 km southeast of the epicenter), suggesting that strong directivity effects were at least partly responsible for the lack of triggering at Alaskan volcanoes. We describe other incidents of earthquake-induced triggering in the KVC, and outline a qualitative magnitude/distance-dependent triggering threshold. We argue that triggering results from the perturbation of magmatic-hydrothermal systems in the KVC and suggest that the comparative lack of triggering at other Alaskan volcanoes could be a result of differences in the nature of magmatic-hydrothermal systems.

  14. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    simulations of a Hayward Fault earthquake, (5) a new USGS Fact Sheet about the earthquake and the Hayward Fault, (6) a virtual tour of the 1868 earthquake, and (7) a new online field trip guide to the Hayward Fault using locations accessible by car and public transit. Finally, the California Geological Survey and many other Alliance members sponsored the Third Conference on Earthquake Hazards in the East Bay at CSU East Bay in Hayward for the three days following the 140th anniversary. The 1868 Alliance hopes to commemorate the anniversary of the 1868 Hayward Earthquake every year to maintain and increase public awareness of this fault, the hazards it and other East Bay Faults pose, and the ongoing need for earthquake preparedness and mitigation.

  15. Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference

    USGS Publications Warehouse

    Wesson, R.L.; Bakun, W.H.; Perkins, D.M.

    2003-01-01

    Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
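
    A minimal sketch of the Bayesian association idea follows: a probability density for the earthquake location is evaluated for each candidate fault and combined with prior probabilities through Bayes' theorem. The point-like fault geometries, the Gaussian location PDF, the background likelihood and all numbers are invented for illustration; the study integrates full location PDFs over fault geometries and seismic intensity data.

```python
import numpy as np

# Hypothetical candidate faults, each reduced to a single representative point
# (a real application integrates the location PDF along the fault trace), plus
# prior probabilities that include a "background" (no known fault) class.
faults = {"Fault A": (0.0, 0.0), "Fault B": (12.0, 5.0)}
priors = {"Fault A": 0.4, "Fault B": 0.4, "background": 0.2}

def location_likelihood(point, epicenter, sigma_km):
    """Gaussian location PDF for the earthquake, evaluated at a fault point."""
    d2 = (point[0] - epicenter[0]) ** 2 + (point[1] - epicenter[1]) ** 2
    return np.exp(-0.5 * d2 / sigma_km ** 2) / (2.0 * np.pi * sigma_km ** 2)

def associate(epicenter, sigma_km, background_like=1e-4):
    """Posterior probability (Bayes' theorem) that the event belongs to each class."""
    like = {name: location_likelihood(pt, epicenter, sigma_km) for name, pt in faults.items()}
    like["background"] = background_like
    unnorm = {name: like[name] * priors[name] for name in priors}
    total = sum(unnorm.values())
    return {name: value / total for name, value in unnorm.items()}

# Example: a poorly located historical event a few km from Fault A.
print(associate(epicenter=(2.0, 1.0), sigma_km=5.0))
```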

  16. Risk and return: evaluating Reverse Tracing of Precursors earthquake predictions

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2010-09-01

    In 2003, the Reverse Tracing of Precursors (RTP) algorithm attracted the attention of seismologists and international news agencies when researchers claimed two successful predictions of large earthquakes. These researchers had begun applying RTP to seismicity in Japan, California, the eastern Mediterranean and Italy; they have since applied it to seismicity in the northern Pacific, Oregon and Nevada. RTP is a pattern recognition algorithm that uses earthquake catalogue data to declare alarms, and these alarms indicate that RTP expects a moderate to large earthquake in the following months. The spatial extent of alarms is highly variable and each alarm typically lasts 9 months, although the algorithm may extend alarms in time and space. We examined the record of alarms and outcomes since the prospective application of RTP began, and in this paper we report on the performance of RTP to date. To analyse these predictions, we used a recently developed approach based on a gambling score, and we used a simple reference model to estimate the prior probability of target earthquakes for each alarm. Formally, we believe that RTP investigators did not rigorously specify the first two `successful' predictions in advance of the relevant earthquakes; because this issue is contentious, we consider analyses with and without these alarms. When we included contentious alarms, RTP predictions demonstrate statistically significant skill. Under a stricter interpretation, the predictions are marginally unsuccessful.

  17. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  18. Historical and recent large megathrust earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Madariaga, R.

    2018-05-01

    Recent earthquakes in Chile (the 2014 Mw 8.2 Iquique, 2015 Mw 8.3 Illapel and 2016 Mw 7.6 Chiloé events) have highlighted problems with the straightforward application of ideas about seismic gaps, earthquake periodicity and the general forecast of large megathrust earthquakes. In northern Chile, before the 2014 Iquique earthquake, 4 large earthquakes were reported in written chronicles (1877, 1786, 1615 and 1543); in North-Central Chile, before the 2015 Illapel event, 3 large earthquakes (1943, 1880, 1730) were reported; and the 2016 Chiloé earthquake occurred in the southern zone of the 1960 Valdivia megathrust rupture, where other large earthquakes occurred in 1575, 1737 and 1837. The periodicity of these events has been proposed as a basis for long-term forecasting. However, the seismological aspects of historical Chilean earthquakes were inferred mainly from old chronicles written before subduction in Chile was discovered. Here we use the original descriptions of earthquakes to re-analyze the historical archives. Our interpretation shows that a priori ideas, like seismic gaps and characteristic earthquakes, influenced the estimation of magnitude, location and rupture area of the older Chilean events. On the other hand, advances in the characterization of the rheological aspects that control the contact between the Nazca and South American plates and the study of tsunami effects provide better estimates of the location of historical earthquakes along the seismogenic plate interface. Our re-interpretation of historical earthquakes shows a large diversity of earthquake types; there is a major difference between giant earthquakes that break the entire plate interface and those of Mw 8.0 that break only a portion of it.

  19. The January 2014 Northern Cuba Earthquake Sequence - Unusual Location and Unexpected Source Mechanism Variability

    NASA Astrophysics Data System (ADS)

    Braunmiller, J.; Thompson, G.; McNutt, S. R.

    2017-12-01

    On 9 January 2014, a magnitude Mw=5.1 earthquake occurred along the Bahamas-Cuba suture at the northern coast of Cuba, revealing a surprising seismic hazard source for both Cuba and southern Florida, where it was widely felt. Due to its location, the event and its aftershocks (M>3.5) were recorded only at far distances (300+ km), resulting in high detection thresholds, low location accuracy, and limited source parameter resolution. We use three-component regional seismic data to study the sequence. High-pass filtered seismograms at the closest site in southern Florida are similar in character, suggesting a relatively tight event cluster and revealing additional, smaller aftershocks not included in the ANSS or ISC catalogs. Aligning on the P arrival and low-pass filtering (T>10 s) uncovers a surprising polarity flip of the large-amplitude surface waves on vertical seismograms for some aftershocks relative to the main shock. We performed regional moment tensor inversions of the main shock and its largest aftershocks using complete three-component seismograms from stations distributed throughout the region to confirm the mechanism changes. Consistent with the GCMT solution, we find an E-W trending normal faulting mechanism for the main event and for one immediate aftershock. Two aftershocks indicate E-W trending reverse faulting with essentially flipped P- and T-axes relative to the normal faulting events (and the same B-axes). Within uncertainties, depths of the two event families are indistinguishable and indicate shallow faulting (<10 km). One intriguing possible interpretation is that both families ruptured the same fault, with reverse mechanisms compensating for overshooting. However, the activity could also be spatially separated, either vertically (with reverse mechanisms possibly below extension) or laterally. The shallow source depth and the 200-km long uplifted chain of islands indicate that larger, shallow and thus potentially tsunamigenic earthquakes could occur just

  20. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcoming of existing DT algorithms that create a node for each value of a given attribute, and achieves higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome these shortcomings, this paper proposes an isometric-mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping (Isomap) and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks achieves high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared with Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
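
    The general idea of embedding non-linearly distributed features with Isomap before growing a decision tree can be sketched with scikit-learn on synthetic data. This is not the Cl-DT algorithm or the carbonate-rock data set of the paper; the swiss-roll data, labels and hyperparameters are assumptions chosen only to show the workflow.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic non-linearly distributed data; label points by position along the roll.
X, t = make_swiss_roll(n_samples=2000, noise=0.3, random_state=0)
y = (t > np.median(t)).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Plain decision tree on the raw (Euclidean) features.
raw_tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

# Isomap embedding (geodesic distances) followed by a decision tree.
iso = Isomap(n_neighbors=12, n_components=2).fit(X_tr)
iso_tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(iso.transform(X_tr), y_tr)

print("raw-feature tree accuracy:     %.3f" % raw_tree.score(X_te, y_te))
print("Isomap-embedded tree accuracy: %.3f" % iso_tree.score(iso.transform(X_te), y_te))
print("node counts: raw=%d, isomap=%d" % (raw_tree.tree_.node_count, iso_tree.tree_.node_count))
```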

  1. The Pocatello Valley, Idaho, earthquake

    USGS Publications Warehouse

    Rogers, A. M.; Langer, C.J.; Bucknam, R.C.

    1975-01-01

    A Richter magnitude 6.3 earthquake occurred at 8:31 p.m. mountain daylight time on March 27, 1975, near the Utah-Idaho border in Pocatello Valley. The epicenter of the main shock was located at 42.094° N, 112.478° W, and had a focal depth of 5.5 km. This earthquake was the largest in the continental United States since the destructive San Fernando earthquake of February 1971. The main shock was preceded by a magnitude 4.5 foreshock on March 26.

  2. High-resolution earthquake relocation in the Fort Worth and Permian Basins using regional seismic stations

    NASA Astrophysics Data System (ADS)

    Ogwari, P.; DeShon, H. R.; Hornbach, M.

    2017-12-01

    Post-2008 earthquake rate increases in the Central United States have been associated with large-scale subsurface disposal of waste fluids from oil and gas operations. Various earthquake sequences in the Fort Worth and Permian basins began in the absence of seismic stations at local distances to record and accurately locate hypocenters. Most typically, the initial earthquakes have been located using regional seismic network stations (>100 km epicentral distance) and global 1D velocity models, which usually results in large location uncertainty, especially in depth, does not resolve magnitude <2.5 events, and does not constrain the geometry of the activated fault(s). Here, we present a method to better resolve earthquake occurrence and location using matched filters and regional relative location when local data become available. We use the local-distance data for high-resolution earthquake location, identifying earthquake templates and accurate source-station raypath velocities for the Pg and Lg phases at regional stations. A matched-filter analysis is then applied to seismograms recorded at US network stations and at adopted TA stations that recorded the earthquakes before and during the local network deployment period. Positive detections are declared based on manual review of the associated P and S arrivals on local stations. We apply hierarchical clustering to distinguish groups of earthquakes that are spatially clustered from those that are spatially separated. Finally, we conduct relative earthquake and earthquake-cluster location using regional-station differential times. Initial analysis applied to the 2008-2009 DFW airport sequence in north Texas results in time-continuous imaging of epicenters extending into 2014. Seventeen earthquakes in the USGS earthquake catalog scattered across a 10 km2 area near DFW airport are relocated onto a single fault using these approaches. These techniques will also be applied toward imaging recent earthquakes in the
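
    The matched-filter step can be illustrated with a minimal normalized cross-correlation sketch. The template, noise level and detection threshold are assumptions; a production implementation would work in the frequency domain across many stations and channels and stack the correlation traces.

```python
import numpy as np

def matched_filter(continuous, template, threshold=0.7):
    """Slide a waveform template along a continuous record and return the sample
    indices where the normalized cross-correlation exceeds a threshold."""
    n = len(template)
    tpl = template - template.mean()
    tpl /= np.linalg.norm(tpl)
    cc = np.empty(len(continuous) - n + 1)
    for i in range(cc.size):          # simple loop; FFT-based versions are much faster
        win = continuous[i:i + n] - continuous[i:i + n].mean()
        norm = np.linalg.norm(win)
        cc[i] = np.dot(win, tpl) / norm if norm > 0 else 0.0
    return np.where(cc >= threshold)[0], cc

# Toy example: a noisy record containing two copies of the template.
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * np.arange(200) / 20.0) * np.hanning(200)
record = rng.normal(scale=0.3, size=5000)
record[1000:1200] += template
record[3500:3700] += 0.6 * template

detections, cc = matched_filter(record, template)
print("detections near samples:", detections[:5], "...")
```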

  3. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
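
    A minimal sketch of the spatial component, a weighted sum of two kernel densities (smoothed past seismicity and a fault-based density) multiplied by a truncated Gutenberg-Richter rate, is given below. All coordinates, bandwidths, weights and rate values are invented; the actual model optimizes bandwidths and weights with retrospective forecast experiments.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical inputs: past epicenters and points sampled along mapped faults,
# both as 2 x N arrays of (lon, lat); coordinates are purely illustrative.
rng = np.random.default_rng(1)
epicenters = rng.normal(loc=[[10.0], [45.0]], scale=[[1.0], [0.5]], size=(2, 300))
fault_points = np.vstack([np.linspace(9.0, 12.0, 100),
                          45.3 + rng.normal(scale=0.05, size=100)])  # jitter avoids a singular KDE

kde_eq = gaussian_kde(epicenters, bw_method=0.2)      # bandwidths are assumptions
kde_fault = gaussian_kde(fault_points, bw_method=0.2)

def spatial_density(lon, lat, w_fault=0.4):
    """Weighted sum of the smoothed-seismicity and fault-based spatial densities."""
    pts = np.vstack([np.atleast_1d(lon), np.atleast_1d(lat)])
    return (1.0 - w_fault) * kde_eq(pts) + w_fault * kde_fault(pts)

def annual_rate_above(lon, lat, m, rate_mmin=0.05, b=1.0, m_min=4.5, m_max=8.5):
    """Annual rate of events with magnitude >= m: spatial density times a
    truncated Gutenberg-Richter survival function (all values illustrative)."""
    tail = 10.0 ** (-b * (m_max - m_min))
    surv = (10.0 ** (-b * (m - m_min)) - tail) / (1.0 - tail)
    return spatial_density(lon, lat) * rate_mmin * np.clip(surv, 0.0, 1.0)

print(annual_rate_above(10.2, 45.1, m=5.0))
```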

  4. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2011

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Searcy, Cheryl K.

    2012-01-01

    Between January 1 and December 31, 2011, the Alaska Volcano Observatory (AVO) located 4,364 earthquakes, of which 3,651 occurred within 20 kilometers of the 33 volcanoes with seismograph subnetworks. There was no significant seismic activity above background levels in 2011 at these instrumented volcanic centers. This catalog includes locations, magnitudes, and statistics of the earthquakes located in 2011 with the station parameters, velocity models, and other files used to locate these earthquakes.

  5. Plenty of Deep Long-Period Earthquakes Beneath Cascade Volcanoes

    NASA Astrophysics Data System (ADS)

    Nichols, M. L.; Malone, S. D.; Moran, S. C.; Thelen, W. A.; Vidale, J. E.

    2009-12-01

    The Pacific Northwest Seismic Network (PNSN) records and locates earthquakes within Washington and Oregon, including those occurring at 10 Cascade volcanic centers. In an earlier study (Malone and Moran, EOS 1997), a total of 11 deep long-period (DLP) earthquakes were reported beneath 3 Washington volcanoes. They are characterized by emergent P- and S- arrivals, long and ringing codas, and contain most of their energy below 5 Hz. DLP earthquakes are significant because they have been observed to occur prior to or in association with eruptions at several volcanoes, and as a result are inferred to represent movement of deep-seated magma and associated fluids in the mid-to-lower crust. To more thoroughly characterize DLP occurrence in Washington and Oregon, we employed a two-step algorithm to systematically search the PNSN’s earthquake catalogue for DLP events occurring between 1980 and 2008. In the first step we applied a spectral ratio test to the demeaned and tapered triggered event waveforms to distinguish long-period events from the more common higher frequency volcano-tectonic and regional tectonic earthquakes. In the second step we visually analyzed waveforms of the flagged long-period events to distinguish DLP earthquakes from long-period rockfalls, explosions, shallow low-frequency events, and glacier quakes. We identified 56 DLP earthquakes beneath 7 Cascade volcanic centers. Of these, 31 occurred at Mount Baker, where the background flux of magmatic gases is greater than at the other volcanoes in our study. The other 6 volcanoes with DLPs (counts in parentheses) are Glacier Peak (5), Mount Rainier (9), Mount St. Helens (9), Mount Hood (1), Three Sisters (1), and Crater Lake (1). No DLP events were identified beneath Mount Adams, Mount Jefferson, or Newberry Volcano. The events are 10-40 km deep and have an average magnitude of around 1.5 (Mc), with both the largest and deepest DLPs occurring beneath Mount Baker. Cascade DLP earthquakes occur mostly as
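
    The first-step spectral ratio test can be sketched as below: compare spectral energy below roughly 5 Hz with the energy above it and flag windows dominated by low frequencies for visual review. The frequency bands, threshold and synthetic signals are assumptions, not the PNSN catalogue processing.

```python
import numpy as np

def low_high_spectral_ratio(trace, fs, split_hz=5.0, fmax_hz=15.0):
    """Ratio of spectral energy below split_hz to the energy between split_hz
    and fmax_hz, computed on a demeaned, tapered window (bands are assumptions)."""
    x = trace - trace.mean()
    x *= np.hanning(x.size)                     # taper before the FFT
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    low = spec[(freqs > 0.5) & (freqs <= split_hz)].sum()
    high = spec[(freqs > split_hz) & (freqs <= fmax_hz)].sum()
    return low / high if high > 0 else np.inf

# Flag candidate long-period events for visual review in a second step.
fs = 100.0
t = np.arange(0, 20.0, 1.0 / fs)
lp_event = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t / 5.0)    # energy mostly below 5 Hz
vt_event = np.sin(2 * np.pi * 10.0 * t) * np.exp(-t / 5.0)   # energy mostly above 5 Hz
for name, tr in [("long-period", lp_event), ("volcano-tectonic", vt_event)]:
    ratio = low_high_spectral_ratio(tr, fs)
    print(name, "ratio = %.1f ->" % ratio, "flag" if ratio > 2.0 else "skip")
```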

  6. Coupling of Sentinel-1, Sentinel-2 and ALOS-2 to assess coseismic deformation and earthquake-induced landslides following 26 June, 2016 earthquake in Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Vajedian, Sanaz; Motagh, Mahdi; Wetzel, Hans-Ulrich; Teshebaeva, Kanayim

    2017-04-01

    The active deformation in Kyrgyzstan results from the collision between the Indian and Asian tectonic plates at a rate of 29 ± 1 mm/yr. This collision is accommodated by deformation on prominent faults, which can rupture coseismically and trigger other hazards such as landslides. Many earthquakes and earthquake-induced landslides in Kyrgyzstan occur in mountainous areas, where limited accessibility makes ground-based measurements for the assessment of their impact a challenging task. In this context, remote sensing measurements are extraordinarily useful as they improve our knowledge about the coseismic rupture process and provide information on other types of hazards that are triggered during and/or after the earthquakes. This investigation aims to use L-band ALOS/PALSAR, C-band Sentinel-1, and Sentinel-2 data to evaluate the fault slip model and coseismically induced landslides related to the 26 June 2016 Sary-Tash earthquake, southwest Kyrgyzstan. First, we implement three methods to measure coseismic surface motion from the radar data, Interferometric SAR (InSAR) analysis, a SAR tracking technique, and multiple aperture InSAR (MAI), and then use a genetic algorithm (GA) to invert the final displacement field for the combination of orientation, location and slip on a rectangular uniform-slip fault plane. The slip distribution is analyzed by solving the constrained least-squares problem with Tikhonov regularization and a Laplacian smoothing approach. The estimated coseismic slip model suggests a nearly W-E thrusting fault ruptured during the earthquake, with the main rupture occurring at a depth between 11 and 14 km. Second, the local phase shifts related to landslides are inferred by detailed analysis of pre-seismic, coseismic and post-seismic C-band and L-band interferograms, and the results are compared with the interpretations derived from Sentinel-2 data acquired before and after the earthquake.
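
    The regularized slip inversion step can be sketched as a damped least-squares problem, min ||Gm - d||^2 + lambda^2 ||Lm||^2, where L is a Laplacian smoothing operator over fault patches. The Green's function matrix, data, one-dimensional patch geometry and smoothing weight below are synthetic placeholders, not the Sary-Tash model.

```python
import numpy as np

# Hypothetical setup: d holds observed LOS displacements, G maps slip on
# n_patch fault patches to displacement at n_obs points (both synthetic here).
rng = np.random.default_rng(0)
n_obs, n_patch = 200, 20
G = rng.normal(size=(n_obs, n_patch))
true_slip = np.exp(-0.5 * ((np.arange(n_patch) - 10.0) / 3.0) ** 2)  # smooth "true" model
d = G @ true_slip + rng.normal(scale=0.05, size=n_obs)

# 1-D Laplacian operator enforcing smoothness between neighbouring patches.
L = -2.0 * np.eye(n_patch) + np.eye(n_patch, k=1) + np.eye(n_patch, k=-1)

def smoothed_least_squares(G, d, L, lam):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2 via the augmented system."""
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

slip = smoothed_least_squares(G, d, L, lam=1.0)   # lam chosen by eye here
print("data misfit: %.3f" % np.linalg.norm(G @ slip - d))
```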

  7. Real-time earthquake monitoring using a search engine method.

    PubMed

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
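
    The search-engine idea can be illustrated by matching an input record against a database of pre-computed waveforms and returning the source parameters of the best-fitting entries. The brute-force dot-product search below is only for clarity; the point of the paper is a fast approximate search that is several thousand times faster than an exact search. Database contents, sizes and metadata are placeholders.

```python
import numpy as np

# Hypothetical database: each row is a normalized long-period waveform computed
# for known source parameters (location, magnitude, focal mechanism).
rng = np.random.default_rng(0)
n_entries, n_samples = 10000, 512
database = rng.normal(size=(n_entries, n_samples))
database -= database.mean(axis=1, keepdims=True)
database /= np.linalg.norm(database, axis=1, keepdims=True)
source_params = [{"id": i} for i in range(n_entries)]   # stand-in for real metadata

def search(record, top_k=3):
    """Return the database entries whose waveforms best fit the input record
    (correlation coefficient used as the fit measure)."""
    q = record - record.mean()
    q /= np.linalg.norm(q)
    scores = database @ q            # brute force; the real engine is approximate
    best = np.argsort(scores)[::-1][:top_k]
    return [(source_params[i], float(scores[i])) for i in best]

print(search(database[1234] + 0.1 * rng.normal(size=n_samples)))
```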

  8. Exploring Earthquakes in Real-Time

    NASA Astrophysics Data System (ADS)

    Bravo, T. K.; Kafka, A. L.; Coleman, B.; Taber, J. J.

    2013-12-01

    Earthquakes capture the attention of students and inspire them to explore the Earth. Adding the ability to view and explore recordings of significant and newsworthy earthquakes in real-time makes the subject even more compelling. To address this opportunity, the Incorporated Research Institutions for Seismology (IRIS), in collaboration with Moravian College, developed 'jAmaSeis', a cross-platform application that enables students to access real-time earthquake waveform data. Students can watch as the seismic waves are recorded on their computer, and can be among the first to analyze the data from an earthquake. jAmaSeis facilitates student-centered investigations of seismological concepts using either a low-cost educational seismograph or streamed data from other educational seismographs or from any seismic station that sends data to the IRIS Data Management System. After an earthquake, students can analyze the seismograms to determine characteristics of earthquakes such as time of occurrence, distance from the epicenter to the station, magnitude, and location. The software has been designed to provide graphical clues to guide students in the analysis and assist in their interpretations. Since jAmaSeis can simultaneously record up to three stations from anywhere on the planet, there are numerous opportunities for student-driven investigations. For example, students can explore differences in the seismograms from different distances from an earthquake and compare waveforms from different azimuthal directions. Students can simultaneously monitor seismicity at a tectonic plate boundary and in the middle of the plate regardless of their school location. This can help students discover for themselves the ideas underlying seismic wave propagation, regional earthquake hazards, magnitude-frequency relationships, and the details of plate tectonics. The real-time nature of the data keeps the investigations dynamic, and offers students countless opportunities to explore.

  9. Rupture geometry and slip distribution of the 2016 January 21st Ms6.4 Menyuan, China earthquake

    NASA Astrophysics Data System (ADS)

    Zhou, Y.

    2017-12-01

    On 21 January 2016, an Ms 6.4 earthquake struck Menyuan County, Qinghai Province, China. The epicenter of the main shock and the locations of its aftershocks indicate that the Menyuan earthquake occurred near the left-lateral Lenglongling fault. However, the focal mechanism suggests that the earthquake took place on a thrust fault. In addition, field investigation indicates that the earthquake did not rupture the ground surface. Therefore, the rupture geometry is unclear, as is the coseismic slip distribution. We processed two pairs of InSAR images acquired by the ESA Sentinel-1A satellite with the ISCE software, including both ascending and descending orbits. After subsampling the coseismic InSAR images to about 800 pixels, the coseismic displacement data along the LOS direction are inverted for earthquake source parameters. We employ an improved mixed linear-nonlinear Bayesian inversion method to infer fault geometric parameters, slip distribution, and the Laplacian smoothing factor simultaneously. This method incorporates a hybrid differential evolution algorithm, which is an efficient global optimization algorithm. The inversion results show that the Menyuan earthquake ruptured a blind thrust fault with a strike of 124° and a dip angle of 41°. This blind fault had not been investigated before and intersects the left-lateral Lenglongling fault, although their strikes are nearly parallel. The slip sense is almost pure thrusting, and there is no significant slip within 4 km depth. The maximum slip is up to 0.3 m, and the estimated moment magnitude is Mw 5.93, in agreement with the seismic inversion result. The standard error of residuals between the InSAR data and the model prediction is as small as 0.5 cm, verifying the correctness of the inversion results.

  10. Earthquake Fingerprints: Representing Earthquake Waveforms for Similarity-Based Detection

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2016-12-01

    New earthquake detection methods, such as Fingerprint and Similarity Thresholding (FAST), use fast approximate similarity search to identify similar waveforms in long-duration data without templates (Yoon et al. 2015). These methods have two key components: fingerprint extraction and an efficient search algorithm. Fingerprint extraction converts waveforms into fingerprints, compact signatures that represent short-duration waveforms for identification and search. Earthquakes are detected using an efficient indexing and search scheme, such as locality-sensitive hashing, that identifies similar waveforms in a fingerprint database. The quality of the search results, and thus the earthquake detection results, is strongly dependent on the fingerprinting scheme. Fingerprint extraction should map similar earthquake waveforms to similar waveform fingerprints to ensure a high detection rate, even under additive noise and small distortions. Additionally, fingerprints corresponding to noise intervals should have mutually dissimilar fingerprints to minimize false detections. In this work, we compare the performance of multiple fingerprint extraction approaches for the earthquake waveform similarity search problem. We apply existing audio fingerprinting (used in content-based audio identification systems) and time series indexing techniques and present modified versions that are specifically adapted for seismic data. We also explore data-driven fingerprinting approaches that can take advantage of labeled or unlabeled waveform data. For each fingerprinting approach we measure its ability to identify similar waveforms in a low signal-to-noise setting, and quantify the trade-off between true and false detection rates in the presence of persistent noise sources. We compare the performance using known event waveforms from eight independent stations in the Northern California Seismic Network.
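
    A toy illustration of the two components named above, fingerprint extraction and an efficient similarity search, is sketched below: a binary fingerprint built from the sign pattern of band-energy differences, and a simple locality-sensitive hashing scheme that buckets similar fingerprints. This is not the FAST implementation; the feature choice, code length and hashing parameters are assumptions.

```python
import numpy as np

def fingerprint(window, n_bands=32):
    """Toy fingerprint: split the amplitude spectrum into bands and keep the
    sign of band-to-band energy differences, giving a compact binary code."""
    spec = np.abs(np.fft.rfft(window * np.hanning(window.size)))
    bands = np.array_split(spec, n_bands + 1)
    energy = np.array([b.sum() for b in bands])
    return (np.diff(energy) > 0).astype(np.uint8)           # length n_bands

def lsh_buckets(fingerprints, n_tables=4, bits_per_table=8, seed=0):
    """Group fingerprints that share randomly selected bit subsets (a simple
    locality-sensitive hashing scheme over binary codes)."""
    rng = np.random.default_rng(seed)
    n_bits = fingerprints.shape[1]
    tables = []
    for _ in range(n_tables):
        idx = rng.choice(n_bits, size=bits_per_table, replace=False)
        table = {}
        for i, fp in enumerate(fingerprints):
            table.setdefault(tuple(fp[idx]), []).append(i)
        tables.append(table)
    return tables

# Windows 0 and 1 are near-duplicate waveforms; window 2 is noise.
rng = np.random.default_rng(1)
base = np.sin(2 * np.pi * 5.0 * np.arange(512) / 100.0)
windows = np.vstack([base + 0.05 * rng.normal(size=512),
                     base + 0.05 * rng.normal(size=512),
                     rng.normal(size=512)])
fps = np.vstack([fingerprint(w) for w in windows])
print(lsh_buckets(fps)[0])   # similar windows tend to share a bucket key
```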

  11. Determination of the fault plane and rupture size of the 2013 Santa Cruz earthquake, Bolivia, 5.2 Mw, by relative location of the aftershocks

    NASA Astrophysics Data System (ADS)

    Rivadeneyra-Vera, C.; Assumpção, M.; Minaya, E.; Aliaga, P.; Avila, G.

    2016-11-01

    The Central Andes of southern Bolivia is a highly seismic region with many active faults that could generate earthquakes up to Mw 8.9. In 2013, an earthquake of Mw 5.2 occurred in Santa Cruz de la Sierra, in the sub-Andean belt, close to the Mandeyapecua fault, one of the most important reverse faults in Bolivia. Five larger aftershocks were reported by the International Seismological Centre (ISC) and 33 smaller aftershocks were recorded by the San Calixto Observatory (OSC) in the two months after the mainshock. Distances between the epicenters of the events were up to 36 km, which is larger than expected for an earthquake of this magnitude. Using data from South American regional stations and the relative location technique with Rayleigh waves, the epicenters of the five larger aftershocks of the Santa Cruz series were determined relative to the mainshock. This method yielded epicentral locations with uncertainties smaller than 1 km. Additionally, using data from three Bolivian stations (MOC, SIV and LPAZ), eight smaller aftershocks recorded by the OSC were relocated through correlation of P and S waves. The results show a NNW-SSE trend of epicenters and suggest an E-dipping plane. The maximum distance between the aftershocks is 14 km, which is not consistent with the subsurface rupture length expected for the magnitude of the mainshock. The events are located away from the Mandeyapecua fault and show an opposite dip, demonstrating that these events were generated by another fault in the area, one that has not yet been well studied.

  12. Security Implications of Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Jha, B.; Rao, A.

    2016-12-01

    The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing and magnitude of an earthquake by activating a fault via injection and production of fluids into the subsurface. Here, we investigate the relationship between the magnitude and trigger time of an induced earthquake and the well-to-fault distance. The relationship between magnitude and distance is important for determining the farthest striking distance from which one could intentionally activate a fault to cause a certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix the injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and the evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. Evolution of the maximum Coulomb stress on the fault provides insights into the effect of the distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to assess the feasibility of weaponizing earthquakes and to develop effective defense mechanisms against such attacks.

  13. Glacier quakes mimicking volcanic earthquakes: The challenge of monitoring ice-clad volcanoes and some solutions

    NASA Astrophysics Data System (ADS)

    Allstadt, K.; Carmichael, J. D.; Malone, S. D.; Bodin, P.; Vidale, J. E.; Moran, S. C.

    2012-12-01

    Swarms of repeating earthquakes at volcanoes are often a sign of volcanic unrest. However, glaciers also can generate repeating seismic signals, so detecting unrest at glacier-covered volcanoes can be a challenge. We have found that multi-day swarms of shallow, low-frequency, repeating earthquakes occur regularly at Mount Rainier, a heavily glaciated stratovolcano in Washington, but that most swarms had escaped recognition until recently. Typically such earthquakes were too small to be routinely detected by the seismic network and were often buried in the noise on visual records, making the few swarms that had been detected seem more unusual and significant at the time they were identified. Our comprehensive search for repeating earthquakes through the past 10 years of continuous seismic data uncovered more than 30 distinct swarms of low-frequency earthquakes at Rainier, each consisting of hundreds to thousands of events. We found that these swarms locate high on the glacier-covered edifice, occur almost exclusively between late fall and early spring, and that their onset coincides with heavy snowfalls. We interpret the correlation with snowfall to indicate a seismically observable glacial response to snow loading. Efforts are underway to confirm this by monitoring glacier motion before and after a major snowfall event using ground based radar interferometry. Clearly, if the earthquakes in these swarms reflect a glacial source, then they are not directly related to volcanic activity. However, from an operational perspective they make volcano monitoring difficult because they closely resemble earthquakes that often precede and accompany volcanic eruptions. Because we now have a better sense of the background level of such swarms and know that their occurrence is seasonal and correlated with snowfall, it will now be easier to recognize if future swarms at Rainier are unusual and possibly related to volcanic activity. To methodically monitor for such unusual activity

  14. Earthquake Analysis (EA) Software for The Earthquake Observatories

    NASA Astrophysics Data System (ADS)

    Yanik, K.; Tezel, T.

    2009-04-01

    There are many software packages that can be used to observe seismic signals and locate earthquakes, but some of them are commercial and require technical support. For this reason, many seismological observatories have developed and use their own seismological software packages suited to their seismological networks. In this study, we introduce our software, which can read seismic signals, process them, and locate earthquakes. This software is used by the General Directorate of Disaster Affairs Earthquake Research Department Seismology Division (hereafter ERD) and will be improved according to new requirements. The ERD network consists of 87 seismic stations: 63 of them are equipped with 24-bit digital Guralp CMG-3T sensors, 16 with analogue short-period Geometrics S-13 sensors, and 8 with 24-bit digital short-period Geometrics S-13j/DR-24 seismometers. Data are transmitted via satellite from the broadband stations, whereas leased lines are used for the short-period stations. The daily data archive capacity is 4 GB. In large networks, it is very important to observe the seismic signals and locate the earthquakes as soon as possible. This is possible if the software is developed with the properties of the network in mind. When we started to develop software for a large network such as ours, we recognized several requirements: all known seismic data formats should be readable without any conversion process; only selected stations should be displayed, directly on the map; seismic files should be added with an import command; relations should be established between P- and S-phase readings and location solutions; data should be stored in a database; and the program should be entered with a user name and password. In this way, we can prevent data disorder and repeated phase readings. There are many advantages when data are stored in a database. These advantages include easy access to data from anywhere over ethernet, publication of the bulletin and catalogues on a website, easy sending of short messages (SMS) and e

  15. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL; Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We relocate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
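
    One simple way to realize the weighting described above is sketched below: each arrival's weight combines the measurement error with the standard deviation of its station correction, so stations with inconsistent corrections contribute less to the location. The numbers and the quadrature combination are illustrative assumptions, not the modified PMEL weighting scheme.

```python
import numpy as np

# Hypothetical per-arrival uncertainties (seconds): picking/measurement error
# and the standard deviation of the station correction obtained from a
# multiple-event inversion. Combining them in quadrature is one simple choice.
stations = ["STA1", "STA2", "STA3", "STA4"]
sigma_meas = np.array([0.05, 0.05, 0.10, 0.05])
sigma_corr = np.array([0.02, 0.15, 0.05, 0.30])   # inconsistent correction at STA4

weights = 1.0 / (sigma_meas ** 2 + sigma_corr ** 2)
weights /= weights.sum()

for sta, w in zip(stations, weights):
    print(f"{sta}: relative weight {w:.2f}")
# Arrivals from stations with stable corrections dominate a weighted
# least-squares location; STA4's uncertain correction is strongly down-weighted.
```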

  16. Identification tibia and fibula bone fracture location using scanline algorithm

    NASA Astrophysics Data System (ADS)

    Muchtar, M. A.; Simanjuntak, S. E.; Rahmat, R. F.; Mawengkang, H.; Zarlis, M.; Sitompul, O. S.; Winanto, I. D.; Andayani, U.; Syahputra, M. F.; Siregar, I.; Nasution, T. H.

    2018-03-01

    A fracture is a condition in which there is a break in the continuity of the bone, usually caused by stress, trauma or weak bones. The tibia and fibula are two separate long bones in the lower leg, closely linked at the knee and ankle. Tibia/fibula fractures often happen when more force is applied to the bone than it can withstand. One way to identify the location of a tibia/fibula fracture is to read the X-ray image manually. Visual examination requires more time and allows for errors in identification due to noise in the image. In addition, reading an X-ray requires enhancement of the background to make the objects in the X-ray image appear more clearly. Therefore, a method is required to help radiologists identify the location of tibia/fibula fractures. We propose several image-processing techniques for processing cruris images and a scanline algorithm for identifying the fracture location. The result shows that our proposed method is able to identify the fracture location with up to 87.5% accuracy.

  17. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and

  18. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    PubMed

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
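
    A toy, linear Kalman-filter version of the marching idea is sketched below: a utility track state is predicted from one scan cross-section to the next and then updated with the hypothesized detection associated with it. The two-component state, noise levels and detections are simplifications of the paper's extended Kalman filter, which tracks utilities in 3D with nonlinear measurement models.

```python
import numpy as np

# Toy "marching" filter: state x = [lateral offset (m), offset change per
# cross-section]; each new scan cross-section (scs) provides one detection
# of the offset. A real MCS implementation uses an extended Kalman filter.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # constant-gradient motion between sections
H = np.array([[1.0, 0.0]])        # only the offset is observed
Q = np.diag([0.01, 0.001])        # process noise (assumed)
R = np.array([[0.05]])            # detection noise (assumed)

x = np.array([0.0, 0.1])          # initial track from the first cross-section
P = np.eye(2)

detections = [0.12, 0.19, 0.33, 0.41, 0.52]   # hypothesized detections per scs
for z in detections:
    # predict the track forward to the next cross-section
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the associated detection on that cross-section
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    print(f"offset estimate {x[0]:.3f} m, gradient {x[1]:.3f} m/section")
```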

  19. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Cockett, R.

    2012-12-01

    InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median

  20. Remote Imaging of Earthquake Characteristics Along Oceanic Transforms

    NASA Astrophysics Data System (ADS)

    Cleveland, M.; Ammon, C. J.

    2014-12-01

    Compared with subduction and continental transform systems, many characteristics of oceanic transform faults (OTF) are better defined (first-order structure and composition, thermal properties, etc.). Still, many aspects of earthquake behavior along OTFs remain poorly understood as a result of their relative remoteness. But the substantial aseismic deformation (averaging roughly 85%) that occurs along OTFs, and the implied interaction of aseismic with seismic deformation, present an opportunity to explore fundamental earthquake nucleation and rupture processes. However, the study of OTF earthquake properties is not easy because these faults are often located in remote regions, lacking nearby seismic networks. Thus, many standard network-based seismic approaches are infeasible, but some can be adapted to the effort. For example, double-difference methods applied to cross-correlation measured Rayleigh wave time shifts are an effective tool to provide greatly improved relative epicentroid locations, origin-time shifts, and relative event magnitudes for earthquakes in remote regions. The same comparative waveform measurements can provide insight into rupture directivity of the larger OTF events. In this study, we calculate improved relative earthquake locations and magnitudes of earthquakes along the Blanco Fracture Zone in the northeast Pacific Ocean and compare and contrast that work with a study of the more remote Menard Transform Fault (MTF), located in the southeast Pacific Ocean. For the Blanco, we work exclusively with Rayleigh (R1) observations exploiting the dense networks in the northern hemisphere. For the MTF, we combine R1 with Love (G1) observations to map and to analyze the distribution of strong asperities along this remote, 200-km-long fault. Specifically, we attempt to better define the relationship between observed near-transform normal and vertical strike-slip earthquakes in the vicinity of the MTF. We test our ability to use distant observations (the

  1. Combining Real-Time Seismic and GPS Data for Earthquake Early Warning (Invited)

    NASA Astrophysics Data System (ADS)

    Boese, M.; Heaton, T. H.; Hudnut, K. W.

    2013-12-01

    Scientists at Caltech, UC Berkeley, the Univ. of SoCal, the Univ. of Washington, the US Geological Survey, and ETH Zurich have developed an earthquake early warning (EEW) demonstration system for California and the Pacific Northwest. To quickly determine the earthquake magnitude and location, 'ShakeAlert' currently processes and interprets real-time data-streams from ~400 seismic broadband and strong-motion stations within the California Integrated Seismic Network (CISN). Based on these parameters, the 'UserDisplay' software predicts and displays the arrival and intensity of shaking at a given user site. Real-time ShakeAlert feeds are currently shared with around 160 individuals, companies, and emergency response organizations to educate potential users about EEW and to identify needs and applications of EEW in a future operational warning system. Recently, scientists at the contributing institutions have started to develop algorithms for ShakeAlert that make use of high-rate real-time GPS data to improve the magnitude estimates for large earthquakes (M>6.5) and to determine slip distributions. Knowing the fault slip in (near) real-time is crucial for users relying on or operating distributed systems, such as for power, water or transportation, especially if these networks run close to or across large faults. As shown in an earlier study, slip information is also useful to predict (in a probabilistic sense) how far a fault rupture will propagate, thus enabling more robust probabilistic ground-motion predictions at distant locations. Finally, fault slip information is needed for tsunami warning, such as in the Cascadia subduction-zone. To handle extended fault-ruptures of large earthquakes in real-time, Caltech and USGS Pasadena are currently developing and testing a two-step procedure that combines seismic and geodetic data; in the first step, high-frequency strong-motion amplitudes are used to rapidly classify near-and far-source stations. Then, the location and

  2. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas in seismology. One of the areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small quantity of large SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for a majority of events. For such hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of location of event and its origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated to time and space to determine hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to small aperture, a minimum aperture threshold is employed. The algorithm refines location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic
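    A coalescence/source-scanning location of this kind can be illustrated with a simple migration stack: for every candidate grid node, a per-receiver "arrival likelihood" trace (for example an SNR function) is shifted by the predicted travel time and summed, and the node and origin time with the largest stack are taken as the event. The sketch below assumes a homogeneous velocity and omits the DAS/geophone-specific weighting, polarization refinement, and dynamic aperture selection described above; all names and parameters are illustrative.

    ```python
    import numpy as np

    def scan_sources(onset_likelihood, rec_xyz, grid_xyz, dt, v=3000.0):
        """Coalescence-style scan: stack per-receiver onset-likelihood traces at
        predicted travel times for every candidate grid node (homogeneous velocity,
        coordinates in metres). Returns (best node index, origin-time sample, value)."""
        n_rec, n_samp = onset_likelihood.shape
        best = (-1, -1, -np.inf)
        for i, src in enumerate(grid_xyz):
            tt = np.linalg.norm(rec_xyz - src, axis=1) / v   # travel times (s)
            shifts = np.round(tt / dt).astype(int)
            span = n_samp - shifts.max()
            if span <= 0:
                continue
            stack = np.zeros(span)
            for r in range(n_rec):
                stack += onset_likelihood[r, shifts[r]:shifts[r] + span]
            t0 = int(np.argmax(stack))
            if stack[t0] > best[2]:
                best = (i, t0, stack[t0])
        return best
    ```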

  3. Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment

    NASA Astrophysics Data System (ADS)

    Melgar Moctezuma, Diego

    This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. This algorithm will be demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah earthquake and the 2011 Mw 9.0 Tohoku-oki events. This dissertations will also show that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental to quickly determine basic parameters of the earthquake source. We will show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and most importantly, kinematic slip inversions. Throughout the dissertation special emphasis will be placed on how to compute these source models with minimal interaction from a network operator. Finally we will show that the incorporation of off-shore data such as ocean-bottom pressure and RTK-GPS buoys can better-constrain the shallow slip of large subduction events. We will demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors is detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.
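    One common way to combine collocated accelerometer and high-rate GPS records into broadband displacement is a simple Kalman filter in which acceleration drives the state prediction and GPS displacement supplies the measurement update. The sketch below is only a schematic of that general approach, not the dissertation's algorithm; the noise variances and the assumption that both data streams share one sample rate are for illustration.

    ```python
    import numpy as np

    def seismogeodetic_kf(acc, gps_disp, dt, q=1e-2, r=1e-4):
        """Toy displacement/velocity Kalman filter: accelerometer samples drive the
        prediction, GPS displacements (same rate assumed) are the update.
        q, r are process / measurement noise variances chosen for illustration."""
        F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
        B = np.array([0.5 * dt**2, dt])                # acceleration input
        H = np.array([[1.0, 0.0]])                     # GPS observes displacement
        Q, R = q * np.eye(2), np.array([[r]])
        x, P = np.zeros(2), np.eye(2)
        out = np.zeros(len(acc))
        for k in range(len(acc)):
            x = F @ x + B * acc[k]                     # predict with acceleration
            P = F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + (K @ (np.array([gps_disp[k]]) - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            out[k] = x[0]                              # broadband displacement estimate
        return out
    ```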

  4. A new statistical PCA-ICA algorithm for location of R-peaks in ECG.

    PubMed

    Chawla, M P S; Verma, H K; Kumar, Vinod

    2008-09-16

    The success of ICA in separating the independent components from the mixture depends on the properties of the electrocardiogram (ECG) recordings. This paper discusses some of the conditions of independent component analysis (ICA) that could affect the reliability of the separation, and evaluates issues related to the properties of the signals and the number of sources. Principal component analysis (PCA) scatter plots are plotted to indicate the diagnostic features in the presence and absence of baseline wander when interpreting the ECG signals. In this analysis, the authors propose a newly developed statistical algorithm based on combined PCA-ICA applied to two correlated channels of 12-channel ECG data. The ICA technique has been successfully implemented for identifying and removing noise and artifacts from ECG signals. Cleaned ECG signals are obtained using statistical measures such as kurtosis and variance of variance after ICA processing. This paper also deals with the detection of QRS complexes in electrocardiograms using the combined PCA-ICA algorithm. The efficacy of the combined PCA-ICA algorithm lies in the fact that the location of the R-peaks is bounded from above and below by the location of the cross-over points; hence none of the peaks are ignored or missed.
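    As a rough illustration of the PCA-ICA idea (not the authors' algorithm), the sketch below separates a two-channel ECG segment with FastICA (which includes a PCA whitening step), keeps only components whose kurtosis is high enough to look QRS-like, and reconstructs the cleaned channels; the kurtosis threshold is arbitrary.

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def ica_denoise_ecg(two_channel_ecg, kurt_threshold=5.0):
        """Illustrative PCA+ICA cleanup of a 2-channel ECG segment (n_samples, 2):
        components with high kurtosis are treated as QRS-bearing signal; the rest
        (noise / baseline wander) are zeroed before reconstruction."""
        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(two_channel_ecg)     # shape (n_samples, 2)
        keep = kurtosis(sources, axis=0) > kurt_threshold
        sources[:, ~keep] = 0.0                          # drop noise-like components
        return ica.inverse_transform(sources)            # cleaned channels
    ```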

  5. Detecting and Locating Seismic Events Without Phase Picks or Velocity Models

    NASA Astrophysics Data System (ADS)

    Arrowsmith, S.; Young, C. J.; Ballard, S.; Slinkard, M.

    2015-12-01

    The standard paradigm for seismic event monitoring is to scan waveforms from a network of stations and identify the arrival time of various seismic phases. A signal association algorithm then groups the picks to form events, which are subsequently located by minimizing residuals between measured travel times and travel times predicted by an Earth model. Many of these steps are prone to significant errors which can lead to erroneous arrival associations and event locations. Here, we revisit a concept for event detection that does not require phase picks or travel time curves and fuses detection, association and location into a single algorithm. Our pickless event detector exploits existing catalog and waveform data to build an empirical stack of the full regional seismic wavefield, which is subsequently used to detect and locate events at a network level using correlation techniques. Because the technique uses more of the information content of the original waveforms, the concept is particularly powerful for detecting weak events that would be missed by conventional methods. We apply our detector to seismic data from the University of Utah Seismograph Stations network and compare our results with the earthquake catalog published by the University of Utah. We demonstrate that the pickless detector can detect and locate significant numbers of events previously missed by standard data processing techniques.
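    The network-level correlation idea can be sketched simply: envelopes of continuous data at each station are correlated against pre-computed empirical envelope stacks for candidate source cells, and a detection (with an implied location) is declared where the network-averaged correlation peaks above a threshold. Everything below (the data structures, the envelope choice, the threshold) is an assumption for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def pickless_detect(env, templates, threshold=0.7):
        """env: (n_sta, n_samp) envelopes of continuous data.
        templates: dict {cell_id: (n_sta, n_temp) empirical envelope stacks}.
        Returns (cell_id, sample, score) of the best network correlation above
        threshold, or None. Brute-force loop, purely illustrative."""
        best = None
        for cell, tmpl in templates.items():
            n_sta, n_temp = tmpl.shape
            n_lag = env.shape[1] - n_temp + 1
            if n_lag <= 0:
                continue
            score = np.zeros(n_lag)
            for s in range(n_sta):
                t = (tmpl[s] - tmpl[s].mean()) / (np.std(tmpl[s]) + 1e-12)
                for k in range(n_lag):
                    w = env[s, k:k + n_temp]
                    w = (w - w.mean()) / (np.std(w) + 1e-12)
                    score[k] += np.dot(t, w) / n_temp
            score /= n_sta
            k = int(np.argmax(score))
            if score[k] >= threshold and (best is None or score[k] > best[2]):
                best = (cell, k, float(score[k]))
        return best
    ```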

  6. Earthquakes in and near the northeastern United States, 1638-1998

    USGS Publications Warehouse

    Wheeler, R.L.; Trevor, N.K.; Tarr, A.C.; Crone, A.J.

    2000-01-01

    The data are those used to make a large-format, colored map of earthquakes in the northeastern United States and adjacent parts of Canada and the Atlantic Ocean (Wheeler, 2000; Wheeler and others, 2001; references in Data_Quality_Information, Lineage). The map shows the locations of 1,069 known earthquakes of magnitude 3.0 or larger, and is designed for a non-technical audience. Colored circles represent earthquake locations, colored and sized by magnitude. Short descriptions, colonial-era woodcuts, newspaper headlines, and photographs summarize the dates, times of day, damage, and other effects of notable earthquakes. The base map shows color-coded elevation, shaded to emphasize relief.

  7. Evidence for Ancient Mesoamerican Earthquakes

    NASA Astrophysics Data System (ADS)

    Kovach, R. L.; Garcia, B.

    2001-12-01

    Evidence for past earthquake damage at Mesoamerican ruins is often overlooked because of the invasive effects of tropical vegetation and is usually not considered as a causal factor when restoration and reconstruction of many archaeological sites are undertaken. Yet the proximity of many ruins to zones of seismic activity would argue otherwise. Clues as to the types of damage which should be sought were offered in September 1999 when the M = 7.5 Oaxaca earthquake struck the ruins of Monte Alban, Mexico, where archaeological renovations were underway. More than 20 structures were damaged, 5 of them seriously. Damage features noted were walls out of plumb, fractures in walls, floors, basal platforms and tableros, toppling of columns, and deformation, settling and tumbling of walls. A Modified Mercalli Intensity of VII (ground accelerations 18-34% g) occurred at the site. Within the diffuse landward extension of the Caribbean plate boundary zone M = 7+ earthquakes occur with repeat times of hundreds of years, arguing that many Maya sites were subjected to earthquakes. Damage to re-erected and reinforced stelae, walls, and buildings was witnessed at Quirigua, Guatemala, during an expedition underway when the 1976 M = 7.5 Guatemala earthquake on the Motagua fault struck. Excavations also revealed evidence (domestic pottery vessels and the skeleton of a child crushed under fallen walls) of an ancient earthquake occurring about the time of the demise and abandonment of Quirigua in the late 9th century. Striking evidence for sudden earthquake building collapse at the end of the Mayan Classic Period ~A.D. 889 was found at Benque Viejo (Xunantunich), Belize, located 210 km north of Quirigua. It is argued that a M = 7.5 to 7.9 earthquake at the end of the Maya Classic period centered in the vicinity of the Chixoy-Polochic and Motagua fault zones could have produced the contemporaneous earthquake damage to the above sites. As a consequence, this earthquake may have accelerated the

  8. Spatial distribution of earthquake hypocenters in the Crimea—Black Sea region

    NASA Astrophysics Data System (ADS)

    Burmin, V. Yu; Shumlianska, L. O.

    2018-03-01

    Some aspects of the seismicity of the Crimea—Black Sea region are considered on the basis of the catalogued data on earthquakes that occurred between 1970 and 2012. The complete list of the Crimean earthquakes for this period contains about 2140 events with magnitude ranging from -1.5 to 5.5. Bulletins contain information about compressional and shear wave arrival times for nearly 2000 earthquakes. A new approach to the definition of the coordinates of all of the events was applied to re-establish the hypocenters of the catalogued earthquakes. The obtained results indicate that the bulk of the earthquakes' foci in the region are located in the crust. However, some 2.5% of the foci are located at depths ranging from 50 to 250 km. The new distribution of earthquake foci shows a concentration of foci in the form of two inclined branches, the center of which is located under the Yalta-Alushta seismic focal zone. The whole distribution of foci in depth corresponds to the relief of the lithosphere.

  9. The threat of silent earthquakes

    USGS Publications Warehouse

    Cervelli, Peter

    2004-01-01

    Not all earthquakes shake the ground. The so-called silent types are forcing scientists to rethink their understanding of the way quake-prone faults behave. In rare instances, silent earthquakes that occur along the flanks of seaside volcanoes may cascade into monstrous landslides that crash into the sea and trigger towering tsunamis. Silent earthquakes that take place within fault zones created by one tectonic plate diving under another may increase the chance of ground-shaking shocks. In other locations, however, silent slip may decrease the likelihood of destructive quakes, because it releases stress along faults that might otherwise seem ready to snap.

  10. A flatfile of ground motion intensity measurements from induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Rennolet, Steven B.; Moschetti, Morgan P.; Thompson, Eric M.; Yeck, William

    2018-01-01

    We have produced a uniformly processed database of orientation-independent (RotD50, RotD100) ground motion intensity measurements containing peak horizontal ground motions (accelerations and velocities) and 5-percent-damped pseudospectral accelerations (0.1–10 s) from more than 3,800 M ≥ 3 earthquakes in Oklahoma and Kansas that occurred between January 2009 and December 2016. Ground motion time series were collected from regional, national, and temporary seismic arrays out to 500 km. We relocated the majority of the earthquake hypocenters using a multiple-event relocation algorithm to produce a set of near-uniformly processed hypocentral locations. Ground motion processing followed standard methods, with the primary objective of reducing the effects of noise on the measurements. Regional wave-propagation features and the high seismicity rate required careful selection of signal windows to ensure that we captured the entire ground motion record and that contaminating signals from extraneous earthquakes did not contribute to the database. Processing was carried out with an automated scheme and resulted in a database comprising more than 174,000 records (https://dx.doi.org/10.5066/F73B5X8N). We anticipate that these results will be useful for improved understanding of earthquake ground motions and for seismic hazard applications.
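    RotD50 and RotD100 are, respectively, the median and maximum over rotation angles of an intensity measure computed from the two horizontal components. A minimal sketch for peak ground acceleration is shown below; for pseudospectral accelerations the same rotation loop would wrap an oscillator-response calculation, which is omitted here.

    ```python
    import numpy as np

    def rotd(acc_ns, acc_ew, percentiles=(50, 100)):
        """Orientation-independent peak ground acceleration: rotate the two horizontal
        components through 0-179 degrees, take the peak of each rotated trace, then
        the requested percentiles over angles (50 -> RotD50, 100 -> RotD100)."""
        angles = np.radians(np.arange(180))
        peaks = [np.max(np.abs(acc_ns * np.cos(a) + acc_ew * np.sin(a))) for a in angles]
        return {p: float(np.percentile(peaks, p)) for p in percentiles}
    ```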

  11. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

    ,

    2008-01-01

    California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).

  12. The May 20 (MW 6.1) and 29 (MW 6.0), 2012, Emilia (Po Plain, northern Italy) earthquakes: New seismotectonic implications from subsurface geology and high-quality hypocenter location

    NASA Astrophysics Data System (ADS)

    Carannante, Simona; Argnani, Andrea; Massa, Marco; D'Alema, Ezio; Lovati, Sara; Moretti, Milena; Cattaneo, Marco; Augliera, Paolo

    2015-08-01

    This study presents new geological and seismological data that are used to assess the seismic hazard of a sector of the Po Plain (northern Italy), a large alluvial basin hit by two strong earthquakes on May 20 (MW 6.1) and May 29 (MW 6.0), 2012. The proposed interpretation is based on high-quality relocation of 5369 earthquakes ('Emilia sequence') and a dense grid of seismic profiles and exploration wells. The analyzed seismicity was recorded by 44 seismic stations, and initially used to calibrate new one-dimensional and three-dimensional local Vp and Vs velocity models for the area. Considering these new models, the initial sparse hypocenters were then relocated in absolute mode and adjusted using the double-difference relative location algorithm. These data define a seismicity that is elongated in the W-NW to E-SE directions. The aftershocks of the May 20 mainshock appear to be distributed on a rupture surface that dips ~ 45° SSW, and the surface projection indicates an area ~ 10 km wide and 23 km long. The aftershocks of the May 29 mainshock followed a steep rupture surface that is well constrained within the investigated volume, whereby the surface projection of the blind source indicates an area ~ 6 km wide and 33 km long. Multichannel seismic profiles highlight the presence of relevant lateral variations in the structural style of the Ferrara folds that developed during the Pliocene and Pleistocene. There is also evidence of a Mesozoic extensional fault system in the Ferrara arc, with faults that in places have been seismically reactivated. These geological and seismological observations suggest that the 2012 Emilia earthquakes were related to ruptures along blind fault surfaces that are not part of the Pliocene-Pleistocene structural system, but are instead related to a deeper system that is itself closely related to re-activation of a Mesozoic extensional fault system.

  13. User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes

    USGS Publications Warehouse

    Klein, Fred W.

    2002-01-01

    Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version and it supersedes the earlier documents. It serves as a detailed user's guide to the current version running on unix and VAX-alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of

  14. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  15. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
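    The quantity such threshold-search algorithms maximize is Otsu's between-class variance of the image histogram. A minimal sketch of that objective is given below; the metaheuristic (flower pollination) search itself is not reproduced, and this function would simply be evaluated on each candidate threshold set proposed by the optimizer.

    ```python
    import numpy as np

    def otsu_between_class_variance(hist, thresholds):
        """Otsu's objective for multilevel thresholding: weighted between-class
        variance of the gray-level histogram `hist` (length 256) split at the
        given threshold levels. Larger is better."""
        p = hist / hist.sum()
        levels = np.arange(len(hist))
        edges = [0] + sorted(thresholds) + [len(hist)]
        mu_total = (p * levels).sum()
        var_b = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()                       # class probability
            if w > 0:
                mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                var_b += w * (mu - mu_total) ** 2
        return var_b
    ```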

  16. An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu

    In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general cases, we propose a dynamic programming based algorithm and prove it is optimal in special cases where the correlation only exists between p immediate adjacent observations.

  17. Purposes and methods of scoring earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2010-12-01

    Studies of earthquake prediction or forecasting serve two kinds of purposes: one is to give a systematic estimate of earthquake risk in a particular region and period in order to advise governments and enterprises on disaster reduction; the other is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first purpose a complete score is necessary, while for the second a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known reference model, is sufficient. This study reviews different scoring methods for evaluating the performance of earthquake predictions and forecasts. In particular, the recently developed gambling scoring method shows its capacity for finding merits of an earthquake prediction algorithm or model that a reference model lacks, even if its overall performance is no better than that of the reference model.

  18. Dynamic 3D simulations of earthquakes on en echelon faults

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    1999-01-01

    One of the mysteries of earthquake mechanics is why earthquakes stop. This process determines the difference between small and devastating ruptures. One possibility is that fault geometry controls earthquake size. We test this hypothesis using a numerical algorithm that simulates spontaneous rupture propagation in a three-dimensional medium and apply our knowledge to two California fault zones. We find that the size difference between the 1934 and 1966 Parkfield, California, earthquakes may be the product of a stepover at the southern end of the 1934 earthquake and show how the 1992 Landers, California, earthquake followed physically reasonable expectations when it jumped across en echelon faults to become a large event. If there are no linking structures, such as transfer faults, then strike-slip earthquakes are unlikely to propagate through stepovers >5 km wide. Copyright 1999 by the American Geophysical Union.

  19. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for postevent imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.

  20. Variable anelastic attenuation and site effect in estimating source parameters of various major earthquakes including Mw 7.8 Nepal and Mw 7.5 Hindu Kush earthquake by using far-field strong-motion data

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit

    2017-10-01

    Strong-motion records of the recent Gorkha, Nepal, earthquake (Mw 7.8), its strong aftershocks, and seismic events of the Hindu Kush region have been analysed for estimation of source parameters. The Mw 7.8 Gorkha earthquake of 25 April 2015 and its six aftershocks in the magnitude range 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the Gorkha mainshock. Acceleration data from eight earthquakes that occurred in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both the source and the recording site as well as for site amplification. The strong-motion data of six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Q_β) at the recording site. The frequency-dependent Q_β(f) = 124 f^0.98 is computed at the Ghuttu station using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid-search algorithm. Computed seismic moments, stress drops, and source radii of the earthquakes used in this work range over 8.20 × 10^16 to 5.72 × 10^20 Nm, 7.1-50.6 bars, and 3.55-36.70 km, respectively. The results match the available values obtained by other agencies.
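    A minimal sketch of the spectral-fitting step is shown below: the corrected S-wave displacement spectrum is compared against a Brune omega-square model, Ω(f) = Ω0 / (1 + (f/fc)^2), over a grid of (Ω0, fc); seismic moment, source radius, and stress drop then follow from standard relations. The grid ranges and constants here are assumptions for illustration, not the values used in the paper.

    ```python
    import numpy as np

    def brune_fit(freq, disp_spec):
        """Grid-search fit of a Brune omega-square model to a corrected S-wave
        displacement spectrum. Returns (omega0, fc) minimizing log-spectral misfit."""
        best = (None, None, np.inf)
        for omega0 in np.logspace(np.log10(disp_spec.max()) - 2,
                                  np.log10(disp_spec.max()) + 1, 60):
            for fc in np.logspace(-1, 1.5, 60):        # 0.1-30 Hz corner frequencies
                model = omega0 / (1.0 + (freq / fc) ** 2)
                misfit = np.mean((np.log10(model) - np.log10(disp_spec)) ** 2)
                if misfit < best[2]:
                    best = (omega0, fc, misfit)
        return best[0], best[1]

    # Source parameters then follow from standard relations, e.g.
    # M0 = 4*pi*rho*beta**3 * R * omega0 / (radiation pattern * free-surface factor),
    # r = k*beta/fc, and stress drop = 7*M0 / (16*r**3)  (constants assumed).
    ```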

  1. Brady's Geothermal Field DAS Earthquake Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Feigl

    The submitted data correspond to the vibration caused by a 3.4 M earthquake and captured by the DAS horizontal and vertical arrays during the PoroTomo Experiment. Earthquake information : M 4.3 - 23km ESE of Hawthorne, Nevada Time: 2016-03-21 07:37:10 (UTC) Location: 38.479 N 118.366 W Depth: 9.9 km

  2. Design of isolated buildings with S-FBI system subjected to near-fault earthquakes using NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ozbulut, O. E.; Silwal, B.

    2014-04-01

    This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and a performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring-force capability to the isolation system together with additional damping characteristics. A three-story building is modeled with the S-FBI isolation system. Multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) in order to optimize the S-FBI system. Nonlinear time-history analyses of the building with the S-FBI system are performed. A set of 20 near-field ground motion records is used in the numerical simulations. Results show that the S-FBI system successfully controls the response of the building against near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.

  3. Sun, Moon and Earthquakes

    NASA Astrophysics Data System (ADS)

    Kolvankar, V. G.

    2013-12-01

    During a study conducted to find the effect of Earth tides on the occurrence of earthquakes, for small areas [typically 1000 km × 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences for magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]. The left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat.+3.30, Long+95.980, Mb 9.0, EQ count 376). The right-hand figure provides an earthquake plot for (EMD+SEM) vs GMT timings for the same data. All the 376 events including the main event faithfully follow the straight-line curve.

  4. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  5. Unexpected earthquake of June 25th, 2015 in Madiun, East Java

    NASA Astrophysics Data System (ADS)

    Nugraha, Andri Dian; Supendi, Pepen; Shiddiqi, Hasbi Ash; Widiyantoro, Sri

    2016-05-01

    An earthquake with magnitude 4.2 struck Madiun and its vicinity on June 25, 2015. According to the Indonesian Meteorology, Climatology, and Geophysics Agency (BMKG), the earthquake occurred at 10:35:29 GMT+7 and was located at 7.73° S, 111.69° E, with a depth of 10 km. At least 57 houses suffered light to medium damage. We reprocessed earthquake waveform data to obtain an accurate hypocenter location. We manually picked P- and S-wave arrival times from 12 seismic stations in the eastern part of Java. The earthquake location was determined using the Hypoellipse code, which employs a single-event determination method. Our inversion is able to resolve the focal depth and shows that the earthquake occurred at 10:35:27.6 GMT+7 and was located at 7.6305° S, 111.7529° E, with a focal depth of 14.81 km. Our location gives a smaller travel-time residual than the BMKG result. The focal mechanism of the earthquake was determined using the HASH code. We used first-arrival polarities of 9 seismic records with an azimuthal gap of less than 90°, and estimated take-off angles under the assumption of a homogeneous medium. Our focal mechanism solution shows a strike-slip mechanism with a strike direction of 163°, which may be related to a strike-slip fault in Klangon, an area to the east of Madiun.

  6. Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis

    2015-01-01

    The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, for the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the last 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and stop, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.

  7. Significant earthquakes on the Enriquillo fault system, Hispaniola, 1500-2010: Implications for seismic hazard

    USGS Publications Warehouse

    Bakun, William H.; Flores, Claudia H.; ten Brink, Uri S.

    2012-01-01

    Historical records indicate frequent seismic activity along the north-east Caribbean plate boundary over the past 500 years, particularly on the island of Hispaniola. We use accounts of historical earthquakes to assign intensities and the intensity assignments for the 2010 Haiti earthquakes to derive an intensity attenuation relation for Hispaniola. The intensity assignments and the attenuation relation are used in a grid search to find source locations and magnitudes that best fit the intensity assignments. Here we describe a sequence of devastating earthquakes on the Enriquillo fault system in the eighteenth century. An intensity magnitude MI 6.6 earthquake in 1701 occurred near the location of the 2010 Haiti earthquake, and the accounts of the shaking in the 1701 earthquake are similar to those of the 2010 earthquake. A series of large earthquakes migrating from east to west started with the 18 October 1751 MI 7.4–7.5 earthquake, probably located near the eastern end of the fault in the Dominican Republic, followed by the 21 November 1751 MI 6.6 earthquake near Port-au-Prince, Haiti, and the 3 June 1770 MI 7.5 earthquake west of the 2010 earthquake rupture. The 2010 Haiti earthquake may mark the beginning of a new cycle of large earthquakes on the Enriquillo fault system after 240 years of seismic quiescence. The entire Enriquillo fault system appears to be seismically active; Haiti and the Dominican Republic should prepare for future devastating earthquakes.

  8. Earthquakes: Recurrence and Interoccurrence Times

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.

    2008-04-01

    The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
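    For reference, the Weibull recurrence-time model and its hazard rate can be written as follows (notation assumed, not taken from the paper); the pure power-law form of the hazard is the scale-invariance property referred to above.

    ```latex
    % Weibull recurrence-time model: pdf, survival function, and hazard rate for t > 0
    f(t) = \frac{\beta}{\tau}\left(\frac{t}{\tau}\right)^{\beta-1}
           \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right], \qquad
    S(t) = \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right], \qquad
    h(t) = \frac{f(t)}{S(t)} = \frac{\beta}{\tau}\left(\frac{t}{\tau}\right)^{\beta-1}.
    % The hazard is a power law in t, so rescaling t -> ct only rescales h(t)
    % by the constant factor c^{\beta-1}.
    ```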

  9. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To talk about this phenomenon, seismologists coined three terms foreshock , mainshock , and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks while those that occur after the mainshock are called aftershocks. A mainshock will be redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be

  10. Catalog of significant historical earthquakes in the Central United States

    USGS Publications Warehouse

    Bakun, W.H.; Hopper, M.G.

    2004-01-01

    We use Modified Mercalli intensity assignments to estimate source locations and moment magnitude M for eighteen 19th-century and twenty early-20th-century earthquakes in the central United States (CUS) for which estimates of M are otherwise not available. We use these estimates, and locations and M estimated elsewhere, to compile a catalog of significant historical earthquakes in the CUS. The 1811-1812 New Madrid earthquakes apparently dominated CUS seismicity in the first two decades of the 19th century. M5-6 earthquakes occurred in the New Madrid Seismic Zone in 1843 and 1878, but none have occurred since 1878. There has been persistent seismic activity in the Illinois Basin in southern Illinois and Indiana, with M > 5.0 earthquakes in 1895, 1909, 1917, 1968, and 1987. Four other M > 5.0 CUS historical earthquakes have occurred: in Kansas in 1867, in Nebraska in 1877, in Oklahoma in 1882, and in Kentucky in 1980.

  11. Induced Earthquakes Are Not All Alike: Examples from Texas Since 2008 (Invited)

    NASA Astrophysics Data System (ADS)

    Frohlich, C.

    2013-12-01

    The EarthScope Transportable Array passed through Texas between 2008 and 2011, providing an opportunity to identify and accurately locate earthquakes near and/or within oil/gas fields and injection waste disposal operations. In five widely separated geographical locations, the results suggest seismic activity may be induced/triggered. However, the different regions exhibit different relationships between injection/production operations and seismic activity: In the Barnett Shale of northeast Texas, small earthquakes occurred only near higher-volume (volume rate > 150,000 BWPM) injection disposal wells. These included widely reported earthquakes occurring near Dallas-Fort Worth and Cleburne in 2008 and 2009. Near Alice in south Texas, M3.9 earthquakes occurred in 1997 and 2010 on the boundary of the Stratton Field, which had been highly productive for both oil and gas since the 1950's. Both earthquakes occurred during an era of net declining production, but their focal depths and location at the field boundary suggest an association with production activity. In the Eagle Ford of south central Texas, earthquakes occurred near wells following significant increases in extraction (water+produced oil) volumes as well as injection. The largest earthquake, the M4.8 Fashing earthquake of 20 October 2011, occurred after significant increases in extraction. In the Cogdell Field near Snyder (west Texas), a sequence of earthquakes beginning in 2006 followed significant increases in the injection of CO2 at nearby wells. The largest with M4.4 occurred on 11 September 2011. This is the largest known earthquake possibly attributable to CO2 injection. Near Timpson in east Texas a sequence of earthquakes beginning in 2008, including an M4.8 earthquake on 17 May 2012, occurred within three km of two high-volume injection disposal wells that had begun operation in 2007. These were the first known earthquakes at this location. In summary, the observations find possible induced

  12. Fault Branching and Long-Term Earthquake Rupture Scenario for Strike-Slip Earthquake

    NASA Astrophysics Data System (ADS)

    Klinger, Y.; CHOI, J. H.; Vallage, A.

    2017-12-01

    Careful examination of surface rupture for large continental strike-slip earthquakes reveals that for the majority of earthquakes, at least one major branch is involved in the rupture pattern. Often, branching might be either related to the location of the epicenter or located toward the end of the rupture, and possibly related to the stopping of the rupture. In this work, we examine large continental earthquakes that show significant branches at different scales and for which ground surface rupture has been mapped in great details. In each case, rupture conditions are described, including dynamic parameters, past earthquakes history, and regional stress orientation, to see if the dynamic stress field would a priori favor branching. In one case we show that rupture propagation and branching are directly impacted by preexisting geological structures. These structures serve as pathways for the rupture attempting to propagate out of its shear plane. At larger scale, we show that in some cases, rupturing a branch might be systematic, hampering possibilities for the development of a larger seismic rupture. Long-term geomorphology hints at the existence of a strong asperity in the zone where the rupture branched off the main fault. There, no evidence of throughgoing rupture could be seen along the main fault, while the branch is well connected to the main fault. This set of observations suggests that for specific configurations, some rupture scenarios involving systematic branching are more likely than others.

  13. The Dallas-Fort Worth Airport Earthquake Sequence: Seismicity Beyond Injection Period

    NASA Astrophysics Data System (ADS)

    Ogwari, Paul O.; DeShon, Heather R.; Hornbach, Matthew J.

    2018-01-01

    The 2008 Dallas-Fort Worth Airport earthquakes mark the beginning of seismicity rate changes linked to oil and gas operations in the central United States. We assess the spatial and temporal evolution of the sequence through December 2015 using template-based waveform correlation and relative location methods. We locate 400 earthquakes spanning 2008-2015 along a basement fault mapped as the Airport fault. The sequence exhibits temporally variable b values, and small-magnitude (m < 3.4) earthquakes spread northeast along strike over time. Pore pressure diffusion models indicate that the high-volume brine injection well located within 1 km of the 2008 earthquakes, although only operating from September 2008 to August 2009, contributes most significantly to long-term pressure perturbations, and hence stress changes, along the fault; a second long-operating, low-volume injector located 10 km north causes insufficient pressure changes. High-volume injection for a short time period near a critically stressed fault can induce long-lasting seismicity.
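    The long-term pressure argument can be illustrated with the classical continuous point-source solution of the pore-pressure diffusion equation, in which the perturbation falls off as erfc(r / (2*sqrt(D*t))) / r with distance r and hydraulic diffusivity D. The sketch below leaves the absolute scale (which depends on injection rate and hydraulic properties) as a free prefactor and is not the pore-pressure model used in the study.

    ```python
    import numpy as np
    from scipy.special import erfc

    def pressure_perturbation(r, t, rate=1.0, diffusivity=0.1):
        """Relative pore-pressure perturbation at distance r (m) and time t (s)
        from a continuous point injector, using the point-source diffusion form
        p ~ (rate / r) * erfc(r / (2*sqrt(D*t))). Units and prefactor are left
        arbitrary; only the spatial/temporal decay is meaningful here."""
        return rate / r * erfc(r / (2.0 * np.sqrt(diffusivity * t)))
    ```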

  14. Earthquakes of Loihi submarine volcano and the Hawaiian hot spot.

    USGS Publications Warehouse

    Klein, F.W.

    1982-01-01

    Loihi is an active submarine volcano located 35km S of the island of Hawaii and may eventually grow to be the next and S most island in the Hawaiian chain. The Hawaiian Volcano Observatory recorded two major earthquake swarms located there in 1971-1972 and 1975 which were probably associated with submarine eruptions or intrusions. The swarms were located very close to Loihi's bathymetric summit, except for earthquakes during the second stage of the 1971-1972 swarm, which occurred well onto Loihi's SW flank. The flank earthquakes appear to have been triggered by the preceding activity and possible rifting along Loihi's long axis, similar to the rift-flank relationship at Kilauea volcano. Other changes accompanied the shift in locations from Loihi's summit to its flank, including a shift from burst to continuous seismicity, a rise in maximum magnitude, a change from small earthquake clusters to a larger elongated zone, a drop in b value, and a presumed shift from concentrated volcanic stresses to a more diffuse tectonic stress on Loihi's flank. - Author

  15. Safety and survival in an earthquake

    USGS Publications Warehouse

    ,

    1969-01-01

    Many earth scientists in this country and abroad are focusing their studies on the search for means of predicting impending earthquakes, but, as yet, an accurate prediction of the time and place of such an event cannot be made. From past experience, however, one can assume that earthquakes will continue to harass mankind and that they will occur most frequently in the areas where they have been relatively common in the past. In the United States, earthquakes can be expected to occur most frequently in the western states, particularly in Alaska, California, Washington, Oregon, Nevada, Utah, and Montana. The danger, however, is not confined to any one part of the country; major earthquakes have occurred at widely scattered locations.

  16. Revised Earthquake Catalog and Relocated Hypocenters Near Fluid Injection Wells and the Waste Isolation Pilot Plant (WIPP) in Southeastern New Mexico

    NASA Astrophysics Data System (ADS)

    Edel, S.; Bilek, S. L.; Garcia, K.

    2014-12-01

    Induced seismicity is a class of crustal earthquakes resulting from human activities such as surface and underground mining, impoundment of reservoirs, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground cavities. Within the Permian basin in southeastern New Mexico lies an active area of oil and gas production, as well as the Waste Isolation Pilot Plant (WIPP), a geologic nuclear waste repository located just east of Carlsbad, NM. Small magnitude earthquakes have been recognized in the area for many years, recorded by a network of short period vertical component seismometers operated by New Mexico Tech. However, for robust comparisons between the seismicity patterns and the injection well locations and rates, improved locations and a more complete catalog over time are necessary. We present results of earthquake relocations for this area by using data from the 3-component broadband EarthScope Flexible Array SIEDCAR experiment that operated in the area between 2008-2011. Relocated event locations tighten into a small cluster of ~38 km2, approximately 10 km from the nearest injection wells. The majority of events occurred at 10-12 km depth, given depth residuals of 1.7-3.6 km. We also present a newly developed more complete catalog of events from this area by using a waveform cross-correlation algorithm and the relocated events as templates. This allows us to detect smaller magnitude events that were previously undetected with the short period network data. The updated earthquake catalog is compared with geologic maps and cross sections to identify possible fault locations. The catalog is also compared with available well data on fluid injection and production. Our preliminary results suggest no obvious connection between seismic moment release, fluid injection, or production given the available monthly industry data. We do see evidence in the geologic and well data of previously unidentified faults in the area.

  17. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2002

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Moran, Seth C.; Sánchez, John; Estes, Steve; McNutt, Stephen R.; Paskievitch, John

    2003-01-01

    an EARTHWORM detection system. AVO located 7430 earthquakes during 2002 in the vicinity of the monitored volcanoes. This catalog includes: (1) a description of instruments deployed in the field and their locations; (2) a description of earthquake detection, recording, analysis, and data archival systems; (3) a description of velocity models used for earthquake locations; (4) a summary of earthquakes located in 2002; and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, and location quality statistics; daily station usage statistics; and all HYPOELLIPSE files used to determine the earthquake locations in 2002. The AVO seismic network was used to monitor twenty-four volcanoes in real time in 2002. These include Mount Wrangell, Mount Spurr, Redoubt Volcano, Iliamna Volcano, Augustine Volcano, Katmai Volcanic Group (Snowy Mountain, Mount Griggs, Mount Katmai, Novarupta, Trident Volcano, Mount Mageik, Mount Martin), Aniakchak Crater, Mount Veniaminof, Pavlof Volcano, Mount Dutton, Isanotski Peaks, Shishaldin Volcano, Fisher Caldera, Westdahl Peak, Akutan Peak, Makushin Volcano, Great Sitkin Volcano, and Kanaga Volcano (Figure 1). Monitoring highlights in 2002 include an earthquake swarm at Great Sitkin Volcano in May-June; an earthquake swarm near Snowy Mountain in July-September; low frequency (1-3 Hz) tremor and long-period events at Mount Veniaminof in September-October and in December; and continuing volcanogenic seismic swarms at Shishaldin Volcano throughout the year. Instrumentation and data acquisition highlights in 2002 were the installation of a subnetwork on Okmok Volcano, the establishment of telemetry for the Mount Veniaminof subnetwork, and the change in the data acquisition system to an EARTHWORM detection system.

  18. Location-Aware Mobile Learning of Spatial Algorithms

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2013-01-01

    Learning an algorithm--a systematic sequence of operations for solving a problem with given input--is often difficult for students due to the abstract nature of the algorithms and the data they process. To help students understand the behavior of algorithms, a subfield in computing education research has focused on algorithm…

  19. Real Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public, by locating an earthquake and estimating its magnitude as quickly as possible. Some years later, a system for prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that the concerned governmental organizations could decide whether they needed to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs: 1) a prompt report of the occurrence of a large earthquake and the major seismic intensities caused by the earthquake, in about two minutes after the earthquake occurrence; 2) a tsunami forecast in around three minutes; 3) information on expected arrival times and maximum heights of tsunami waves in around five minutes; 4) information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: an advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; data telemetry networks via landlines and partly via a satellite communication link; real-time data processing techniques, for example the automatic calculation of earthquake location and magnitude and the database-driven method for quantitative tsunami estimation; and dissemination networks via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  20. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

    This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides basic a priori information for 3-D studies. KEY WORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
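
    A minimal sketch of a genetic-algorithm hypocentre search in the spirit of the method above is given below. For brevity it assumes a homogeneous half-space with straight rays instead of the layered model and two-point ray tracing used by the authors; the station geometry, velocity, and GA settings are illustrative only.

```python
# Toy GA hypocentre search: minimize RMS P travel-time residuals (origin time
# removed) over x, y, depth in a constant-velocity half-space.
import numpy as np

rng = np.random.default_rng(1)
VP = 6.0                                    # km/s, assumed constant P velocity
stations = np.array([[0, 0, 0], [30, 5, 0], [10, 40, 0], [-20, 25, 0.5]], float)
true_hypo = np.array([8.0, 12.0, 10.0])     # x, y, depth in km
t_obs = np.linalg.norm(stations - true_hypo, axis=1) / VP

def misfit(hypo):
    """RMS travel-time residual for a trial hypocentre, origin time removed."""
    t_calc = np.linalg.norm(stations - hypo, axis=1) / VP
    resid = (t_obs - t_calc) - np.mean(t_obs - t_calc)
    return np.sqrt(np.mean(resid ** 2))

# GA: keep the best half, recombine by arithmetic crossover, Gaussian mutation.
bounds = np.array([[-50, 50], [-50, 50], [0, 30]], float)
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(100, 3))
for generation in range(200):
    fitness = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fitness)[:50]]
    idx = rng.integers(0, 50, size=(100, 2))
    w = rng.random((100, 1))
    children = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]
    children += rng.normal(scale=0.5, size=children.shape)   # mutation, km
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmin([misfit(p) for p in pop])]
print("recovered hypocentre (km):", np.round(best, 2))   # should be close to (8, 12, 10)
```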

  1. Ground Motion Response to a ML 4.3 Earthquake Using Co-Located Distributed Acoustic Sensing and Seismometer Arrays

    DOE PAGES

    Wang, Herbert F.; Zeng, Xiangfang; Miller, Douglas E.; ...

    2018-03-17

    The PoroTomo research team deployed two arrays of seismic sensors in a natural laboratory at Brady Hot Springs, Nevada in March 2016. The 1500 m (length) by 500 m (width) by 400 m (depth) volume of the laboratory overlies a geothermal reservoir. The surface Distributed Acoustic Sensing (DAS) array consisted of 8700 m of fiber-optic cable in a shallow trench, including 340 m in a well. The conventional seismometer array consisted of 238 three-component geophones. The DAS cable was laid out in three parallel zig-zag lines with line segments approximately 100 meters in length and geophones were spaced at approximately 60-meter intervals. Both DAS and conventional geophones recorded continuously over 15 days during which a moderate-sized earthquake with a local magnitude of 4.3 was recorded on March 21, 2016. Its epicenter was approximately 150 km south-southeast of the laboratory. Several DAS line segments with co-located geophone stations were used to compare signal-to-noise (SNR) ratios in both time and frequency domains and to test relationships between DAS and geophone data. The ratios were typically within a factor of five of each other with DAS SNR often greater for P-wave but smaller for S-wave relative to geophone SNR. The SNRs measured for an earthquake can be better than for active sources, because the earthquake signal contains more low frequency energy and the noise level is also lower at those lower frequencies. Amplitudes of the sum of several DAS strain-rate waveforms matched the finite difference of two geophone waveforms reasonably well, as did the amplitudes of DAS strain waveforms with particle-velocity waveforms recorded by geophones. Similar agreement was found between DAS and geophone observations and synthetic strain seismograms. In conclusion, the combination of good SNR in the seismic frequency band, high-spatial density, large N, and highly accurate time control among individual sensors suggests that DAS arrays have potential to

  2. Ground Motion Response to a ML 4.3 Earthquake Using Co-Located Distributed Acoustic Sensing and Seismometer Arrays

    NASA Astrophysics Data System (ADS)

    Wang, Herbert F.; Zeng, Xiangfang; Miller, Douglas E.; Fratta, Dante; Feigl, Kurt L.; Thurber, Clifford H.; Mellors, Robert J.

    2018-03-01

    The PoroTomo research team deployed two arrays of seismic sensors in a natural laboratory at Brady Hot Springs, Nevada in March 2016. The 1500 m (length) by 500 m (width) by 400 m (depth) volume of the laboratory overlies a geothermal reservoir. The surface Distributed Acoustic Sensing (DAS) array consisted of 8700 m of fiber-optic cable in a shallow trench, including 340 m in a well. The conventional seismometer array consisted of 238 three-component geophones. The DAS cable was laid out in three parallel zig-zag lines with line segments approximately 100 meters in length and geophones were spaced at approximately 60-m intervals. Both DAS and conventional geophones recorded continuously over 15 days during which a moderate-sized earthquake with a local magnitude of 4.3 was recorded on March 21, 2016. Its epicenter was approximately 150-km south-southeast of the laboratory. Several DAS line segments with co-located geophone stations were used to compare signal-to-noise (SNR) ratios in both time and frequency domains and to test relationships between DAS and geophone data. The ratios were typically within a factor of five of each other with DAS SNR often greater for P-wave but smaller for S-wave relative to geophone SNR. The SNRs measured for an earthquake can be better than for active sources, because the earthquake signal contains more low frequency energy and the noise level is also lower at those lower frequencies. Amplitudes of the sum of several DAS strain-rate waveforms matched the finite difference of two geophone waveforms reasonably well, as did the amplitudes of DAS strain waveforms with particle-velocity waveforms recorded by geophones. Similar agreement was found between DAS and geophone observations and synthetic strain seismograms. The combination of good SNR in the seismic frequency band, high-spatial density, large N, and highly accurate time control among individual sensors suggests that DAS arrays have potential to assume a role in earthquake

  3. Ground Motion Response to a ML 4.3 Earthquake Using Co-Located Distributed Acoustic Sensing and Seismometer Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Herbert F.; Zeng, Xiangfang; Miller, Douglas E.

    The PoroTomo research team deployed two arrays of seismic sensors in a natural laboratory at Brady Hot Springs, Nevada in March 2016. The 1500 m (length) by 500 m (width) by 400 m (depth) volume of the laboratory overlies a geothermal reservoir. The surface Distributed Acoustic Sensing (DAS) array consisted of 8700 m of fiber-optic cable in a shallow trench, including 340 m in a well. The conventional seismometer array consisted of 238 three-component geophones. The DAS cable was laid out in three parallel zig-zag lines with line segments approximately 100 meters in length and geophones were spaced at approximately 60-meter intervals. Both DAS and conventional geophones recorded continuously over 15 days during which a moderate-sized earthquake with a local magnitude of 4.3 was recorded on March 21, 2016. Its epicenter was approximately 150 km south-southeast of the laboratory. Several DAS line segments with co-located geophone stations were used to compare signal-to-noise (SNR) ratios in both time and frequency domains and to test relationships between DAS and geophone data. The ratios were typically within a factor of five of each other with DAS SNR often greater for P-wave but smaller for S-wave relative to geophone SNR. The SNRs measured for an earthquake can be better than for active sources, because the earthquake signal contains more low frequency energy and the noise level is also lower at those lower frequencies. Amplitudes of the sum of several DAS strain-rate waveforms matched the finite difference of two geophone waveforms reasonably well, as did the amplitudes of DAS strain waveforms with particle-velocity waveforms recorded by geophones. Similar agreement was found between DAS and geophone observations and synthetic strain seismograms. In conclusion, the combination of good SNR in the seismic frequency band, high-spatial density, large N, and highly accurate time control among individual sensors suggests that DAS arrays have potential to

  4. Ground motion response to an ML 4.3 earthquake using co-located distributed acoustic sensing and seismometer arrays

    NASA Astrophysics Data System (ADS)

    Wang, Herbert F.; Zeng, Xiangfang; Miller, Douglas E.; Fratta, Dante; Feigl, Kurt L.; Thurber, Clifford H.; Mellors, Robert J.

    2018-06-01

    The PoroTomo research team deployed two arrays of seismic sensors in a natural laboratory at Brady Hot Springs, Nevada in March 2016. The 1500 m (length) × 500 m (width) × 400 m (depth) volume of the laboratory overlies a geothermal reservoir. The distributed acoustic sensing (DAS) array consisted of about 8400 m of fiber-optic cable in a shallow trench and 360 m in a well. The conventional seismometer array consisted of 238 shallowly buried three-component geophones. The DAS cable was laid out in three parallel zig-zag lines with line segments approximately 100 m in length and geophones were spaced at approximately 60 m intervals. Both DAS and conventional geophones recorded continuously over 15 d during which a moderate-sized earthquake with a local magnitude of 4.3 was recorded on 2016 March 21. Its epicentre was approximately 150 km south-southeast of the laboratory. Several DAS line segments with co-located geophone stations were used to compare signal-to-noise ratios (SNRs) in both time and frequency domains and to test relationships between DAS and geophone data. The ratios were typically within a factor of five of each other with DAS SNR often greater for P-wave but smaller for S-wave relative to geophone SNR. The SNRs measured for an earthquake can be better than for active sources because the earthquake signal contains more low-frequency energy and the noise level is also lower at those lower frequencies. Amplitudes of the sum of several DAS strain-rate waveforms matched the finite difference of two geophone waveforms reasonably well, as did the amplitudes of DAS strain waveforms with particle-velocity waveforms recorded by geophones. Similar agreement was found between DAS and geophone observations and synthetic strain seismograms. The combination of good SNR in the seismic frequency band, high-spatial density, large N and highly accurate time control among individual sensors suggests that DAS arrays have potential to assume a role in earthquake
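
    The amplitude comparison described above can be illustrated with a toy plane wave: the strain rate that a DAS channel would record is approximated by the finite difference of the particle-velocity records at two bounding geophones divided by their separation. The waveform, apparent slowness, and 60 m spacing below are assumptions for illustration only.

```python
# Plane-wave sketch of the DAS vs. geophone relation: strain rate ~ dv/dx,
# estimated both by a two-geophone finite difference and by -slowness * dv/dt.
import numpy as np

dt = 0.01                      # sample interval, s
t = np.arange(0, 10, dt)
separation = 60.0              # assumed geophone spacing along the cable, m
slowness = 1 / 2000.0          # assumed apparent slowness along the cable, s/m

def particle_velocity(time, offset):
    """Toy plane-wave particle velocity arriving later at larger offsets."""
    delay = offset * slowness
    return np.exp(-((time - 3.0 - delay) / 0.5) ** 2) * np.sin(2 * np.pi * 2 * (time - delay))

v_near = particle_velocity(t, 0.0)
v_far = particle_velocity(t, separation)

# Geophone-derived strain rate: finite difference of the two velocity records.
strain_rate_fd = (v_far - v_near) / separation

# For a plane wave, strain rate also equals -slowness * dv/dt, mimicking what a
# DAS channel between the two geophones would record.
strain_rate_das = -slowness * np.gradient(v_near, dt)

ratio = np.max(np.abs(strain_rate_fd)) / np.max(np.abs(strain_rate_das))
print("amplitude ratio (finite difference / plane-wave estimate):", round(ratio, 2))
# expected to be close to 1 for wavelengths much longer than the 60 m spacing
```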

  5. Real-time earthquake monitoring using a search engine method

    PubMed Central

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-01-01

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake’s parameters in <1 s after receiving the long-period surface wave data. PMID:25472861
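
    A sketch of the retrieval step behind the search-engine idea follows: pre-compute a database of synthetic waveforms keyed by source parameters and return the entry most similar to an observed record. The fast indexing that gives the reported thousand-fold speedup is not reproduced; this brute-force cosine-similarity version, with entirely synthetic waveforms and parameters, only illustrates the matching.

```python
# Brute-force "earthquake search engine" sketch: nearest waveform in a synthetic
# database, keyed by (depth, strike), retrieved by cosine similarity.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 500)

def synthetic_waveform(depth_km, strike_deg):
    """Toy long-period waveform whose shape varies with the source parameters."""
    return np.sin(2 * np.pi * t / (20 + depth_km) + np.radians(strike_deg)) \
         + 0.3 * np.sin(2 * np.pi * t / 7.0)

# Database: one normalized entry per (depth, strike) pair on a coarse grid.
catalog = [(d, s) for d in range(5, 45, 5) for s in range(0, 180, 15)]
database = np.array([synthetic_waveform(d, s) for d, s in catalog])
database /= np.linalg.norm(database, axis=1, keepdims=True)

# "Observed" record: one database source plus noise.
obs = synthetic_waveform(25, 60) + rng.normal(scale=0.1, size=t.size)
obs /= np.linalg.norm(obs)

best = int(np.argmax(database @ obs))        # cosine-similarity search
print("best-fitting source parameters:", catalog[best])   # should recover (25, 60)
```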

  6. Earthquakes triggered by silent slip events on Kīlauea volcano, Hawaii

    USGS Publications Warehouse

    Segall, Paul; Desmarais, Emily K.; Shelly, David; Miklius, Asta; Cervelli, Peter F.

    2006-01-01

    Slow-slip events, or ‘silent earthquakes’, have recently been discovered in a number of subduction zones including the Nankai trough [1-3] in Japan, Cascadia [4, 5], and Guerrero [6] in Mexico, but the depths of these events have been difficult to determine from surface deformation measurements. Although it is assumed that these silent earthquakes are located along the plate megathrust, this has not been proved. Slow slip in some subduction zones is associated with non-volcanic tremor [7, 8], but tremor is difficult to locate and may be distributed over a broad depth range [9]. Except for some events on the San Andreas fault [10], slow-slip events have not yet been associated with high-frequency earthquakes, which are easily located. Here we report on swarms of high-frequency earthquakes that accompany otherwise silent slips on Kīlauea volcano, Hawaii. For the most energetic event, in January 2005, the slow slip began before the increase in seismicity. The temporal evolution of earthquakes is well explained by increased stressing caused by slow slip, implying that the earthquakes are triggered. The earthquakes, located at depths of 7–8 km, constrain the slow slip to be at comparable depths, because they must fall in zones of positive Coulomb stress change. Triggered earthquakes accompanying slow-slip events elsewhere might go undetected if background seismicity rates are low. Detection of such events would help constrain the depth of slow slip, and could lead to a method for quantifying the increased hazard during slow-slip events, because triggered events have the potential to grow into destructive earthquakes.

  7. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  8. Pre-earthquake magnetic pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J.; Freund, F.

    2015-08-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.

  9. Earthquake Early Warning: User Education and Designing Effective Messages

    NASA Astrophysics Data System (ADS)

    Burkett, E. R.; Sellnow, D. D.; Jones, L.; Sellnow, T. L.

    2014-12-01

    The U.S. Geological Survey (USGS) and partners are transitioning from test-user trials of a demonstration earthquake early warning system (ShakeAlert) to deciding and preparing how to implement the release of earthquake early warning information, alert messages, and products to the public and other stakeholders. An earthquake early warning system uses seismic station networks to rapidly gather information about an occurring earthquake and send notifications to user devices ahead of the arrival of potentially damaging ground shaking at their locations. Earthquake early warning alerts can thereby allow time for actions to protect lives and property before arrival of damaging shaking, if users are properly educated on how to use and react to such notifications. A collaboration team of risk communications researchers and earth scientists is researching the effectiveness of a chosen subset of potential earthquake early warning interface designs and messages, which could be displayed on a device such as a smartphone. Preliminary results indicate, for instance, that users prefer alerts that include 1) a map to relate their location to the earthquake and 2) instructions for what to do in response to the expected level of shaking. A number of important factors must be considered to design a message that will promote appropriate self-protective behavior. While users prefer to see a map, how much information can be processed in limited time? Are graphical representations of wavefronts helpful or confusing? The most important factor to promote a helpful response is the predicted earthquake intensity, or how strong the expected shaking will be at the user's location. Unlike Japanese users of early warning, few Californians are familiar with the earthquake intensity scale, so we are exploring how differentiating instructions between intensity levels (e.g., "Be aware" for lower shaking levels and "Drop, cover, hold on" at high levels) can be paired with self-directed supplemental

  10. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project, the government-sponsored Turkish Catastrophe Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (mostly small, 226 earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in southeastern Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about US$90 for a 100 square meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake-engineering shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation, and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which the expected earthquake performance of a unit (and consequently its insurance premium) can be assessed on the basis of its location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP could in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.

  11. Linking giant earthquakes with the subduction of oceanic fracture zones

    NASA Astrophysics Data System (ADS)

    Landgrebe, T. C.; Müller, R. D.; EarthByte Group

    2011-12-01

    Giant subduction earthquakes are known to occur in areas not previously identified as prone to high seismic risk. This highlights the need to better identify subduction zone segments potentially dominated by relatively long (up to 1000 years and more) recurrence times of giant earthquakes. Global digital data sets represent a promising source of information for a multi-dimensional earthquake hazard analysis. We combine the NGDC global Significant Earthquakes database with a global strain rate map, gridded ages of the ocean floor, and a recently produced digital data set for oceanic fracture zones, major aseismic ridges and volcanic chains to investigate the association of earthquakes as a function of magnitude with age of the downgoing slab and convergence rates. We use a so-called Top-N recommendation method, a technology originally developed to search, sort, classify, and filter very large and often statistically skewed data sets on the internet, to analyse the association of subduction earthquakes sorted by magnitude with key parameters. The Top-N analysis is used to progressively assess how strongly particular "tectonic niche" locations (e.g. locations along subduction zones intersected with aseismic ridges or volcanic chains) are associated with sets of earthquakes in sorted order in a given magnitude range. As the total number N of sorted earthquakes is increased, by progressively including smaller-magnitude events, the so-called recall is computed, defined as the number of Top-N earthquakes associated with particular target areas divided by N. The resultant statistical measure represents an intuitive description of the effectiveness of a given set of parameters to account for the location of significant earthquakes on record. We use this method to show that the occurrence of great (magnitude ≥ 8) earthquakes on overriding plate segments is strongly biased towards intersections of oceanic fracture zones with subduction zones. These intersection regions are
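
    The recall measure defined above is straightforward to compute: sort the catalogue by magnitude, take the largest N events, and divide the number falling in the target regions by N. The catalogue and region test in the sketch below are synthetic stand-ins.

```python
# Top-N recall sketch: fraction of the N largest events lying in target regions.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic catalogue: in this toy example the largest events are preferentially
# placed inside the "tectonic niche" regions.
magnitudes = rng.uniform(7.0, 9.5, size=200)
in_niche = rng.random(200) < np.clip((magnitudes - 7.0) / 2.5, 0, 1)

order = np.argsort(magnitudes)[::-1]          # largest magnitude first

def recall_at(n):
    """Fraction of the Top-N largest events that lie in the target regions."""
    return in_niche[order[:n]].mean()

for n in (10, 50, 200):
    print(f"recall at Top-{n}: {recall_at(n):.2f}")
# Recall typically decreases as smaller-magnitude events are progressively included.
```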

  12. Limiting the effects of earthquakes on gravitational-wave interferometers

    USGS Publications Warehouse

    Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew

    2017-01-01

    Ground-based gravitational wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The down time can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early warning system for modern gravitational-wave observatories. The system relies on near real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low latency hypocenter and magnitude information is generally available within 5 to 20 min of a significant earthquake depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. By using a machine learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that by using detector control configuration changes, we could prevent interruption of operation from 40 to 100 earthquake events in a 6-month period.
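
    A schematic of the alert-handling logic described above: take a preliminary hypocentre and magnitude, estimate the arrival time and peak ground velocity at a detector, and decide whether to change the control configuration. The attenuation relation and threshold in the sketch are invented placeholders, not the fitted model or machine-learning classifier used by the authors.

```python
# Toy alert handler: predict surface-wave arrival time and ground velocity at a
# detector from preliminary hypocentre/magnitude, then set an alarm flag.
import math

def predict_at_detector(distance_km, depth_km, magnitude,
                        surface_wave_speed_kms=3.5, threshold_m_s=1e-6):
    """Return (arrival time in s, predicted peak ground velocity in m/s, alarm flag)."""
    travel_time = math.hypot(distance_km, depth_km) / surface_wave_speed_kms
    # Placeholder attenuation: velocity grows with magnitude, decays with distance.
    pgv = 10 ** (magnitude - 5.0) / (distance_km ** 1.5) * 1e-3
    return travel_time, pgv, pgv > threshold_m_s

# Example: preliminary alert for an M7.5 event 4000 km from the detector.
t_arr, pgv, alarm = predict_at_detector(4000, 30, 7.5)
print(f"arrival in {t_arr / 60:.1f} min, predicted PGV {pgv:.2e} m/s, alarm={alarm}")
```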

  13. Limiting the effects of earthquakes on gravitational-wave interferometers

    NASA Astrophysics Data System (ADS)

    Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew

    2017-02-01

    Ground-based gravitational wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The down time can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early warning system for modern gravitational-wave observatories. The system relies on near real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low latency hypocenter and magnitude information is generally available within 5 to 20 min of a significant earthquake depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. By using a machine learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that by using detector control configuration changes, we could prevent interruption of operation from 40 to 100 earthquake events in a 6-month period.

  14. Geometry and earthquake potential of the shoreline fault, central California

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2013-01-01

    The Shoreline fault is a vertical strike‐slip fault running along the coastline near San Luis Obispo, California. Much is unknown about the Shoreline fault, including its slip rate and the details of its geometry. Here, I study the geometry of the Shoreline fault at seismogenic depth, as well as the adjacent section of the offshore Hosgri fault, using seismicity relocations and earthquake focal mechanisms. The Optimal Anisotropic Dynamic Clustering (OADC) algorithm (Ouillon et al., 2008) is used to objectively identify the simplest planar fault geometry that fits all of the earthquakes to within their location uncertainty. The OADC results show that the Shoreline fault is a single continuous structure that connects to the Hosgri fault. Discontinuities smaller than about 1 km may be undetected, but would be too small to be barriers to earthquake rupture. The Hosgri fault dips steeply to the east, while the Shoreline fault is essentially vertical, so the Hosgri fault dips towards and under the Shoreline fault as the two faults approach their intersection. The focal mechanisms generally agree with pure right‐lateral strike‐slip on the OADC planes, but suggest a non‐planar Hosgri fault or another structure underlying the northern Shoreline fault. The Shoreline fault most likely transfers strike‐slip motion between the Hosgri fault and other faults of the Pacific–North America plate boundary system to the east. A hypothetical earthquake rupturing the entire known length of the Shoreline fault would have a moment magnitude of 6.4–6.8. A hypothetical earthquake rupturing the Shoreline fault and the section of the Hosgri fault north of the Hosgri–Shoreline junction would have a moment magnitude of 7.2–7.5.
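
    A minimal sketch of the planar-fitting idea at the core of OADC, assuming a single fault, is given below: the plane normal is taken as the eigenvector of the hypocentre covariance matrix with the smallest eigenvalue. The iterative clustering over multiple planes and the location-uncertainty test of OADC are not reproduced, and the hypocentres are synthetic.

```python
# Fit a single plane to relocated hypocentres via the smallest-eigenvalue
# direction of their covariance; report the in-plane strike angle and dip.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hypocentres scattered about a vertical fault striking 40 deg from x.
strike_true = np.array([np.cos(np.radians(40)), np.sin(np.radians(40)), 0.0])
dip_dir = np.array([0.0, 0.0, 1.0])
along = rng.uniform(-5, 5, 300)
down = rng.uniform(2, 10, 300)
hypos = np.outer(along, strike_true) + np.outer(down, dip_dir)
hypos += rng.normal(scale=0.3, size=hypos.shape)          # location error, km

centroid = hypos.mean(axis=0)
cov = np.cov((hypos - centroid).T)
eigvals, eigvecs = np.linalg.eigh(cov)
normal = eigvecs[:, 0]                                    # smallest eigenvalue

strike_vec = np.cross([0.0, 0.0, 1.0], normal)
strike_deg = np.degrees(np.arctan2(strike_vec[1], strike_vec[0])) % 180
dip_deg = np.degrees(np.arccos(abs(normal[2])))
print(f"recovered in-plane strike angle ~{strike_deg:.0f} deg, dip ~{dip_deg:.0f} deg")
# expected near 40 deg and 90 deg for this synthetic vertical fault
```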

  15. Aftershocks and triggered events of the Great 1906 California earthquake

    USGS Publications Warehouse

    Meltzner, A.J.; Wald, D.J.

    2003-01-01

    The San Andreas fault is the longest fault in California and one of the longest strike-slip faults in the world, yet little is known about the aftershocks following the most recent great event on the San Andreas, the Mw 7.8 San Francisco earthquake on 18 April 1906. We conducted a study to locate and to estimate magnitudes for the largest aftershocks and triggered events of this earthquake. We examined existing catalogs and historical documents for the period April 1906 to December 1907, compiling data on the first 20 months of the aftershock sequence. We grouped felt reports temporally and assigned modified Mercalli intensities for the larger events based on the descriptions judged to be the most reliable. For onshore and near-shore events, a grid-search algorithm (derived from empirical analysis of modern earthquakes) was used to find the epicentral location and magnitude most consistent with the assigned intensities. For one event identified as far offshore, the event's intensity distribution was compared with those of modern events, in order to constrain the event's location and magnitude. The largest aftershock within the study period, an M ~6.7 event, occurred ~100 km west of Eureka on 23 April 1906. Although not within our study period, another M ~6.7 aftershock occurred near Cape Mendocino on 28 October 1909. Other significant aftershocks included an M ~5.6 event near San Juan Bautista on 17 May 1906 and an M ~6.3 event near Shelter Cove on 11 August 1907. An M ~4.9 aftershock occurred on the creeping segment of the San Andreas fault (southeast of the mainshock rupture) on 6 July 1906. The 1906 San Francisco earthquake also triggered events in southern California (including separate events in or near the Imperial Valley, the Pomona Valley, and Santa Monica Bay), in western Nevada, in southern central Oregon, and in western Arizona, all within 2 days of the mainshock. Of these triggered events, the largest were an M ~6.1 earthquake near Brawley
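
    A sketch of an intensity-based grid search in the spirit of the method above: for each trial epicentre and magnitude, predict intensities at the felt-report sites from an attenuation relation and keep the combination with the smallest misfit. The attenuation coefficients, site layout, and grid in the sketch are illustrative assumptions rather than the empirically calibrated relation used in the study.

```python
# Grid search over epicentre and magnitude, minimizing the RMS misfit between
# observed (rounded) and predicted modified Mercalli intensities.
import numpy as np

def predicted_intensity(mag, epi, sites):
    """Toy intensity attenuation: I = 1.5*M - 3*log10(R) - 1, with R in km."""
    r = np.hypot(sites[:, 0] - epi[0], sites[:, 1] - epi[1]) + 5.0  # avoid log(0)
    return 1.5 * mag - 3.0 * np.log10(r) - 1.0

# Synthetic felt reports generated from a "true" event, rounded to integer MMI.
sites = np.array([[0, 0], [50, 10], [120, -30], [30, 80], [-60, 40]], float)
true_epi, true_mag = np.array([40.0, 20.0]), 6.5
observed = np.round(predicted_intensity(true_mag, true_epi, sites))

# Grid search over epicentre (10 km spacing) and magnitude (0.1 steps).
best = None
for x in np.arange(-100, 201, 10):
    for y in np.arange(-100, 201, 10):
        for mag in np.arange(5.0, 8.0, 0.1):
            rms = np.sqrt(np.mean((observed - predicted_intensity(mag, (x, y), sites)) ** 2))
            if best is None or rms < best[0]:
                best = (rms, x, y, mag)

print("best epicentre and magnitude:", best[1:])
# should approximately recover (40, 20) and M 6.5 for this synthetic example
```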

  16. Space technologies for short-term earthquake warning

    NASA Astrophysics Data System (ADS)

    Pulinets, S.

    Recent theoretical and experimental studies have explicitly demonstrated the ability of space technologies to identify and monitor the specific variations in near-Earth space plasma, the atmosphere and the ground surface associated with approaching severe earthquakes (known as earthquake precursors), which appear several days (from 1 to 5) before the seismic shock over seismically active areas. Several countries and private companies are preparing (or have already launched) dedicated spacecraft for monitoring earthquake precursors from space and for short-term earthquake prediction. The present paper outlines the optimal algorithm for creation of a space-borne system for earthquake precursor monitoring and short-term earthquake prediction. It takes into account the following considerations: selection of the precursors in terms of priority, taking into account their statistical and physical parameters; configuration of the spacecraft payload; configuration of the satellite constellation (orbit selection, satellite distribution, operation schedule); and proposal of different options (a cheap microsatellite or a comprehensive multisatellite constellation). Taking into account that the most promising precursors are the ionospheric ones, special attention will be devoted to the radiophysical techniques of ionosphere monitoring. The advantages and disadvantages of technologies such as vertical sounding, in-situ probes, ionosphere tomography, GPS TEC and GPS MET will be considered.

  17. Space technologies for short-term earthquake warning

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.

    Recent theoretical and experimental studies have explicitly demonstrated the ability of space technologies to identify and monitor the specific variations in near-Earth space plasma, the atmosphere and the ground surface associated with approaching severe earthquakes (known as earthquake precursors), which appear several days (from 1 to 5) before the seismic shock over seismically active areas. Several countries and private companies are preparing (or have already launched) dedicated spacecraft for monitoring earthquake precursors from space and for short-term earthquake prediction. The present paper outlines the optimal algorithm for creation of a space-borne system for earthquake precursor monitoring and short-term earthquake prediction. It takes into account the following: selection of the precursors in terms of priority, considering their statistical and physical parameters; configuration of the spacecraft payload; configuration of the satellite constellation (orbit selection, satellite distribution, operation schedule); and different options for the satellite system (a cheap microsatellite or a comprehensive multisatellite constellation). Taking into account that the most promising precursors are the ionospheric ones, special attention is devoted to the radiophysical techniques of ionosphere monitoring. The advantages and disadvantages of technologies such as vertical sounding, in-situ probes, ionosphere tomography, GPS TEC and GPS MET are considered.

  18. Pre-Earthquake Unipolar Electromagnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Freund, F.

    2013-12-01

    Transient ultralow frequency (ULF) electromagnetic (EM) emissions have been reported to occur before earthquakes [1,2]. They suggest powerful transient electric currents flowing deep in the crust [3,4]. Prior to the M=5.4 Alum Rock earthquake of Oct. 21, 2007 in California a QuakeFinder triaxial search-coil magnetometer located about 2 km from the epicenter recorded unusual unipolar pulses with the approximate shape of a half-cycle of a sine wave, reaching amplitudes up to 30 nT. The number of these unipolar pulses increased as the day of the earthquake approached. These pulses clearly originated around the hypocenter. The same pulses have since been recorded prior to several medium to moderate earthquakes in Peru, where they have been used to triangulate the location of the impending earthquakes [5]. To understand the mechanism of the unipolar pulses, we first have to address the question how single current pulses can be generated deep in the Earth's crust. Key to this question appears to be the break-up of peroxy defects in the rocks in the hypocenter as a result of the increase in tectonic stresses prior to an earthquake. We investigate the mechanism of the unipolar pulses by coupling the drift-diffusion model of semiconductor theory to Maxwell's equations, thereby producing a model describing the rock volume that generates the pulses in terms of electromagnetism and semiconductor physics. The system of equations is then solved numerically to explore the electromagnetic radiation associated with drift-diffusion currents of electron-hole pairs. [1] Sharma, A. K., P. A. V., and R. N. Haridas (2011), Investigation of ULF magnetic anomaly before moderate earthquakes, Exploration Geophysics 43, 36-46. [2] Hayakawa, M., Y. Hobara, K. Ohta, and K. Hattori (2011), The ultra-low-frequency magnetic disturbances associated with earthquakes, Earthquake Science, 24, 523-534. [3] Bortnik, J., T. E. Bleier, C. Dunson, and F. Freund (2010), Estimating the seismotelluric current

  19. The persistence of directivity in small earthquakes

    USGS Publications Warehouse

    Boatwright, J.

    2007-01-01

    We derive a simple inversion of peak ground acceleration (PGA) or peak ground velocity (PGV) for rupture direction and rupture velocity and then test this inversion on the peak motions obtained from seven 3.5 ≤ M ≤ 4.1 earthquakes that occurred in two clusters in November 2002 and February 2003 near San Ramon, California. These clusters were located on two orthogonal strike-slip faults so that the events share the same approximate focal mechanism but not the same fault plane. Three earthquakes exhibit strong directivity, but the other four earthquakes exhibit relatively weak directivity. We use the residual PGAs and PGVs from the other six events to determine station corrections for each earthquake. The inferred rupture directions unambiguously identify the fault plane for the three earthquakes with strong directivity and for three of the four earthquakes with weak directivity. The events with strong directivity have fast rupture velocities (0.63β ≤ v ≤ 0.87β); the events with weak directivity either rupture more slowly (0.17β ≤ v ≤ 0.35β) or bilaterally. The simple unilateral inversion cannot distinguish between slow and bilateral ruptures: adding a bilateral rupture component degrades the fit of the rupture directions to the fault planes. By comparing PGAs from the events with strong and weak directivity, we show how an up-dip rupture in small events can distort the attenuation of peak ground motion with distance. When we compare the rupture directions of the earthquakes to the location of aftershocks in the two clusters, we find that almost all the aftershocks of the three earthquakes with strong directivity occur within 70° of the direction of rupture.
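
    The inversion described above can be caricatured with a linear fit: for a unilateral rupture the logarithm of peak motion varies roughly as a constant plus a term proportional to cos(station azimuth minus rupture azimuth), so regressing ln(PGA) on cos(azimuth) and sin(azimuth) returns the rupture direction and a directivity strength. The sketch below uses synthetic data and is only a schematic stand-in for the authors' formulation.

```python
# Least-squares directivity fit: ln(PGA) = c0 + a*cos(az) + b*sin(az), with the
# rupture azimuth given by atan2(b, a) and the strength by sqrt(a^2 + b^2).
import numpy as np

rng = np.random.default_rng(5)
azimuths = np.radians(rng.uniform(0, 360, 20))          # station azimuths
true_rupture_az = np.radians(120.0)
true_strength = 0.8                                     # assumed directivity coefficient

ln_pga = 1.0 + true_strength * np.cos(azimuths - true_rupture_az) \
         + rng.normal(scale=0.1, size=azimuths.size)    # station scatter

G = np.column_stack([np.ones_like(azimuths), np.cos(azimuths), np.sin(azimuths)])
c0, a, b = np.linalg.lstsq(G, ln_pga, rcond=None)[0]

rupture_az = np.degrees(np.arctan2(b, a)) % 360
strength = np.hypot(a, b)
print(f"rupture azimuth ~{rupture_az:.0f} deg, directivity strength ~{strength:.2f}")
# expected to recover roughly 120 deg and 0.8 for this synthetic example
```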

  20. Application of τc*Pd for identifying damaging earthquakes for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Huang, P. L.; Lin, T. L.; Wu, Y. M.

    2014-12-01

    An Earthquake Early Warning System (EEWS) is an effective approach to mitigating earthquake damage. In this study, we used seismic records from the Kiban Kyoshin network (KiK-net), because it has dense station coverage and co-located borehole strong-motion seismometers along with the free-surface strong-motion seismometers. We used inland earthquakes with moment magnitude (Mw) from 5.0 to 7.3 between 1998 and 2012. We chose 135 events and 10950 strong ground accelerograms recorded by 696 strong ground accelerographs. Both the free-surface and the borehole data are used to calculate τc and Pd. The results show that τc*Pd has a good correlation with PGV and is a robust parameter for assessing the damage potential of an earthquake. We propose that the value of τc*Pd determined from the first few seconds after the arrival of the P wave could serve as a threshold for the on-site type of EEW.
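
    For reference, τc and Pd follow from the standard definitions τc = 2π·sqrt(∫u²dt / ∫v²dt) and Pd = max|u| over the initial P-wave window, where u is displacement and v is velocity. The sketch below applies these definitions to a synthetic velocity trace; the window length, detrending, and any damage threshold on τc*Pd are illustrative.

```python
# Compute tau_c and Pd from the first seconds of a (synthetic) P-wave velocity
# record: integrate to displacement, detrend, then apply the definitions.
import numpy as np

dt = 0.01                                    # 100 samples per second
t = np.arange(0, 3.0, dt)                    # assumed 3 s window after the P arrival

# Synthetic P-wave velocity record (m/s): low-frequency pulse plus a higher tone.
velocity = 0.02 * np.sin(2 * np.pi * 0.8 * t) * np.exp(-t) \
         + 0.005 * np.sin(2 * np.pi * 6.0 * t)

# Integrate to displacement and remove a linear trend introduced by offsets.
displacement = np.cumsum(velocity) * dt
displacement -= np.polyval(np.polyfit(t, displacement, 1), t)

# tau_c = 2*pi*sqrt(int u^2 dt / int v^2 dt); the dt factors cancel in the ratio.
tau_c = 2 * np.pi * np.sqrt(np.sum(displacement ** 2) / np.sum(velocity ** 2))
pd = np.max(np.abs(displacement))

print(f"tau_c = {tau_c:.2f} s, Pd = {pd * 100:.2f} cm, tau_c*Pd = {tau_c * pd * 100:.3f} s*cm")
# A threshold on tau_c*Pd (tuned against observed PGV) would flag a damaging event.
```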

  1. The pathway to earthquake early warning in the US

    NASA Astrophysics Data System (ADS)

    Allen, R. M.; Given, D. D.; Heaton, T. H.; Vidale, J. E.; West Coast Earthquake Early Warning Development Team

    2013-05-01

    The development of earthquake early warning capabilities in the United States is now accelerating and expanding as the technical capability to provide warning is demonstrated and additional funding resources make it possible to expand the current testing region to the entire west coast (California, Oregon and Washington). Over the course of the next two years we plan to build a prototype system that will provide a blueprint for a full public system in the US. California currently has a demonstration warning system, ShakeAlert, that provides alerts to a group of test users from the public and private sector. These include biotech companies, technology companies, the entertainment industry, the transportation sector, and the emergency planning and response community. Most groups are currently in an evaluation mode, receiving the alerts and developing protocols for future response. The Bay Area Rapid Transit (BART) system is the one group that has so far implemented an automated response to the warning system: BART now stops trains when an earthquake of sufficient size is detected. Research and development also continues on improved early warning algorithms to better predict the distribution of shaking in large earthquakes when the finiteness of the source becomes important. The algorithms under development include the use of both seismic and GPS instrumentation and integration with existing point source algorithms. At the same time, initial testing and development of algorithms in and for the Pacific Northwest is underway. In this presentation we will review the current status of the systems, highlight the new research developments, and lay out a pathway to a full public system for the US west coast. The research and development described is ongoing at Caltech, UC Berkeley, University of Washington, ETH Zurich, Southern California Earthquake Center, and the US Geological Survey, and is funded by the Gordon and Betty Moore Foundation and the US Geological

  2. A mobile agent-based moving objects indexing algorithm in location based service

    NASA Astrophysics Data System (ADS)

    Fang, Zhixiang; Li, Qingquan; Xu, Hong

    2006-10-01

    This paper builds on the advantages of location based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving object indexing algorithm is proposed to process indexing requests efficiently and to adapt to the limitations of the location based service environment. The prominent feature of this structure is that a moving object's behavior is viewed as the mobile agent's span; a unique mapping between the geographical position of a moving object and the span point of its mobile agent is built to maintain the close relationship between them, and this mapping is the key clue used by the mobile agent-based index to track moving objects.

  3. Earthquake damage orientation to infer seismic parameters in archaeological sites and historical earthquakes

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel

    2018-01-01

    Studies that provide information concerning seismic parameters and seismic sources of historical and archaeological seismic events are used to better evaluate the seismic hazard of a region. This is of special interest when no surface rupture is recorded or the seismogenic fault cannot be identified. The orientation pattern of the earthquake damage (ED) (e.g., fallen columns, dropped keystones) that affected architectonic elements of cities after earthquakes has traditionally been used in historical and archaeoseismological studies to infer seismic parameters. However, the parameters that different authors claim can be obtained from it are contradictory (the epicenter location, the orientation of the P waves, the orientation of the compressional strain and the fault kinematics have all been proposed), and some authors even question any relation with the earthquake damage. The earthquakes of Lorca in 2011, Christchurch in 2011 and Emilia Romagna in 2012 present an opportunity to measure systematically a large number and wide variety of earthquake damage in historical buildings (the same structures that are used in historical and archaeological studies). The damage pattern orientation has been compared with modern instrumental data, which is not possible in historical and archaeoseismological studies. From measurements and quantification of the orientation patterns in the studied earthquakes, it is observed that there is a systematic pattern of earthquake damage orientation (EDO) in the proximity of the seismic source (fault trace) (<10 km). The EDO in these earthquakes is normal to the fault trend (±15°). This orientation can be generated by a pulse of motion that, in the near-fault region, has a distinguishable acceleration normal to the fault due to the polarization of the S waves. Therefore, the earthquake damage orientation could be used to estimate the seismogenic fault trend in studies of historical earthquakes where no instrumental data are available.

  4. An artificial bee colony algorithm for locating the critical slip surface in slope stability analysis

    NASA Astrophysics Data System (ADS)

    Kang, Fei; Li, Junjie; Ma, Zhenyue

    2013-02-01

    Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
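
    A compact sketch of an artificial bee colony loop of the kind described above is given below, with the Spencer-method factor-of-safety evaluation replaced by a toy objective over three slip-circle parameters (centre x, centre y, radius). Colony size, abandonment limit, bounds, and the objective are assumptions for illustration.

```python
# Minimal artificial bee colony (employed, onlooker, scout phases) minimizing a
# toy stand-in for the slope factor of safety.
import numpy as np

rng = np.random.default_rng(6)
lower = np.array([0.0, 5.0, 5.0])            # centre x, centre y, radius bounds
upper = np.array([20.0, 25.0, 30.0])

def factor_of_safety(p):
    """Toy stand-in for a Spencer-method FoS with a minimum near (8, 12, 15)."""
    return 1.2 + 0.01 * np.sum((p - np.array([8.0, 12.0, 15.0])) ** 2)

n_food, limit, cycles = 20, 15, 200
foods = rng.uniform(lower, upper, size=(n_food, 3))
fos = np.array([factor_of_safety(f) for f in foods])
trials = np.zeros(n_food, dtype=int)

def try_neighbour(i):
    """Perturb one random dimension of food source i toward another source."""
    k = rng.integers(n_food - 1)
    k = k if k < i else k + 1
    j = rng.integers(3)
    candidate = foods[i].copy()
    candidate[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    candidate = np.clip(candidate, lower, upper)
    f = factor_of_safety(candidate)
    if f < fos[i]:
        foods[i], fos[i], trials[i] = candidate, f, 0
    else:
        trials[i] += 1

for _ in range(cycles):
    for i in range(n_food):                              # employed bees
        try_neighbour(i)
    prob = (fos.max() - fos + 1e-12)
    prob /= prob.sum()
    for i in rng.choice(n_food, size=n_food, p=prob):    # onlooker bees
        try_neighbour(i)
    for i in np.where(trials > limit)[0]:                # scouts replace stale sources
        foods[i] = rng.uniform(lower, upper)
        fos[i] = factor_of_safety(foods[i])
        trials[i] = 0

best = np.argmin(fos)
print("critical surface parameters:", np.round(foods[best], 2), "FoS:", round(float(fos[best]), 3))
# expected to approach (8, 12, 15) with FoS near 1.2 for this toy objective
```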

  5. Data mining of atmospheric parameters associated with coastal earthquakes

    NASA Astrophysics Data System (ADS)

    Cervone, Guido

    Earthquakes are natural hazards that pose a serious threat to society and the environment. A single earthquake can claim thousands of lives, cause damages for billions of dollars, destroy natural landmarks and render large territories uninhabitable. Studying earthquakes and the processes that govern their occurrence, is of fundamental importance to protect lives, properties and the environment. Recent studies have shown that anomalous changes in land, ocean and atmospheric parameters occur prior to earthquakes. The present dissertation introduces an innovative methodology and its implementation to identify anomalous changes in atmospheric parameters associated with large coastal earthquakes. Possible geophysical mechanisms are discussed in view of the close interaction between the lithosphere, the hydrosphere and the atmosphere. The proposed methodology is a multi strategy data mining approach which combines wavelet transformations, evolutionary algorithms, and statistical analysis of atmospheric data to analyze possible precursory signals. One dimensional wavelet transformations and statistical tests are employed to identify significant singularities in the data, which may correspond to anomalous peaks due to the earthquake preparatory processes. Evolutionary algorithms and other localized search strategies are used to analyze the spatial and temporal continuity of the anomalies detected over a large area (about 2000 km2), to discriminate signals that are most likely associated with earthquakes from those due to other, mostly atmospheric, phenomena. Only statistically significant singularities occurring within a very short time of each other, and which tract a rigorous geometrical path related to the geological properties of the epicentral area, are considered to be associated with a seismic event. A program called CQuake was developed to implement and validate the proposed methodology. CQuake is a fully automated, real time semi-operational system, developed to

  6. Combining historical and geomorphological information to investigate earthquake induced landslides

    NASA Astrophysics Data System (ADS)

    Cardinali, M.; Ferrari, G.; Galli, M.; Guidoboni, E.; Guzzetti, F.

    2003-04-01

    Landslides are caused by many different triggers, including earthquakes. In Italy, a detailed new generation catalogue of information on historical earthquakes for the period 461 B.C to 1997 is available (Catalogue of Strong Italian Earthquakes from 461 B.C. to 1997, ING-SGA 2000). The catalogue lists 548 earthquakes and provides information on a total of about 450 mass-movements triggered by 118 seismic events. The information on earthquake-induced landslides listed in the catalogue was obtained through the careful scrutiny of historical documents and chronicles, but was rarely checked in the field. We report on an attempt to combine the available historical information on landslides caused by earthquakes with standard geomorphological techniques, including the interpretation of aerial photographs and field surveys, to better determine the location, type and distribution of seismically induced historical slope failures. We present four examples in the Central Apennines. The first example describes a rock slide triggered by the 1279 April 30 Umbria-Marche Apennines earthquake (Io = IX) at Serravalle, along the Chienti River (Central Italy). The landslide is the oldest known earthquake-induced slope failure in Italy. The second example describes the location of 2 large landslides triggered by the 1584 September 10 earthquake (Io = IX) at San Piero in Bagno, along the Savio River (Northern Italy). The landslides were subsequently largely modified by mass movements occurred on 1855 making the recognition of the original seismically induced failures difficult, if not impossible. In the third example we present the geographical distribution of the available information on landslide events triggered by 8 earthquakes in Central Valnerina, in the period 1703 to 1979. A comparison with the location of landslides triggered by the September-October 1997 Umbria-Marche earthquake sequence is presented. The fourth example describes the geographical distribution of the available

  7. Induced earthquake during the 2016 Kumamoto earthquake (Mw7.0): Importance of real-time shake monitoring for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2016-12-01

    hypocenter location and magnitude. Because we want to predict ground shaking in EEW, we should focus more on monitoring the ground shaking itself. Experience from the induced earthquake also indicates the importance of real-time monitoring of ground shaking for making EEW more rapid and precise.

  8. Migrating swarms of brittle-failure earthquakes in the lower crust beneath Mammoth Mountain, California

    USGS Publications Warehouse

    Shelly, D.R.; Hill, D.P.

    2011-01-01

    Brittle-failure earthquakes in the lower crust, where high pressures and temperatures would typically promote ductile deformation, are relatively rare but occasionally observed beneath active volcanic centers. Where they occur, these earthquakes provide a rare opportunity to observe volcanic processes in the lower crust, such as fluid injection and migration, which may induce brittle faulting under these conditions. Here, we examine recent short-duration earthquake swarms deep beneath the southwestern margin of Long Valley Caldera, near Mammoth Mountain. We focus in particular on a swarm that occurred September 29-30, 2009. To maximally illuminate the spatial-temporal progression, we supplement catalog events by detecting additional small events with similar waveforms in the continuous data, achieving up to a 10-fold increase in the number of locatable events. We then relocate all events, using cross-correlation and a double-difference algorithm. We find that the 2009 swarm exhibits systematically decelerating upward migration, with hypocenters shallowing from 21 to 19 km depth over approximately 12 hours. This relatively high migration rate, combined with a modest maximum magnitude of 1.4 in this swarm, suggests the trigger might be ascending CO2 released from underlying magma.

  9. Investigation of earthquake factor for optimum tuned mass dampers

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2012-09-01

    In this study the optimum parameters of tuned mass dampers (TMD) are investigated under earthquake excitations. The optimization was carried out using the Harmony Search (HS) algorithm, a metaheuristic method inspired by musical performance. The results obtained with the HS algorithm are compared with those of another documented method, and the poorer of the corresponding results are eliminated, so that the best optimum results are retained. During the optimization, the optimum TMD parameters were searched for single degree of freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
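
    A minimal Harmony Search loop of the kind used in the study is sketched below for three TMD parameters (mass ratio, frequency ratio, damping ratio). The response function is a toy stand-in for the peak-response criterion evaluated under earthquake records; the HS settings (memory size, HMCR, PAR, bandwidth) are illustrative.

```python
# Harmony Search sketch: improvise new parameter vectors from harmony memory,
# with pitch adjustment and random re-initialisation, keeping the best set.
import numpy as np

rng = np.random.default_rng(7)
lower = np.array([0.01, 0.80, 0.01])   # mass ratio, frequency ratio, damping ratio
upper = np.array([0.10, 1.10, 0.30])

def response(p):
    """Toy structural-response measure with a minimum near (0.05, 0.97, 0.12)."""
    return np.sum(((p - np.array([0.05, 0.97, 0.12])) / (upper - lower)) ** 2)

hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.01, 2000
memory = rng.uniform(lower, upper, size=(hms, 3))
scores = np.array([response(h) for h in memory])

for _ in range(iters):
    new = np.empty(3)
    for j in range(3):
        if rng.random() < hmcr:                       # pick from harmony memory
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                    # pitch adjustment
                new[j] += rng.uniform(-bw, bw)
        else:                                         # random re-initialisation
            new[j] = rng.uniform(lower[j], upper[j])
    new = np.clip(new, lower, upper)
    worst = np.argmax(scores)
    s = response(new)
    if s < scores[worst]:
        memory[worst], scores[worst] = new, s

best = memory[np.argmin(scores)]
print("optimum TMD parameters (mu, f, zeta):", np.round(best, 3))
# expected close to (0.05, 0.97, 0.12) for this toy objective
```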

  10. Fault parameter constraints using relocated earthquakes: A validation of first-motion focal-mechanism data

    USGS Publications Warehouse

    Kilb, Debi; Hardebeck, J.L.

    2006-01-01

    We estimate the strike and dip of three California fault segments (Calaveras, Sargent, and a portion of the San Andreas near San Juan Bautista) based on principal component analysis of accurately located microearthquakes. We compare these fault orientations with two different first-motion focal mechanism catalogs: the Northern California Earthquake Data Center (NCEDC) catalog, calculated using the FPFIT algorithm (Reasenberg and Oppenheimer, 1985), and a catalog created using the HASH algorithm that tests mechanism stability relative to seismic velocity model variations and earthquake location (Hardebeck and Shearer, 2002). We assume any disagreement (misfit >30° in strike, dip, or rake) indicates inaccurate focal mechanisms in the catalogs. With this assumption, we can quantify the parameters that identify the most optimally constrained focal mechanisms. For the NCEDC/FPFIT catalogs, we find that the best quantitative discriminator of quality focal mechanisms is the station distribution ratio (STDR) parameter, an indicator of how the stations are distributed about the focal sphere. Requiring STDR > 0.65 increases the acceptable mechanisms from 34%–37% to 63%–68%. This suggests stations should be uniformly distributed surrounding, rather than aligned with, known fault traces. For the HASH catalogs, the fault plane uncertainty (FPU) parameter is the best discriminator, increasing the percent of acceptable mechanisms from 63%–78% to 81%–83% when FPU ≤ 35°. The overall higher percentage of acceptable mechanisms and the usefulness of the formal uncertainty in identifying quality mechanisms validate the HASH approach of testing for mechanism stability.
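
    The screening criteria reported above translate directly into a simple filter: keep FPFIT mechanisms with STDR > 0.65 and HASH mechanisms with fault plane uncertainty of 35° or less. The record layout in the sketch below is hypothetical; real NCEDC and HASH catalogues store these fields differently.

```python
# Quality screening of focal mechanisms using the thresholds quoted above.
from dataclasses import dataclass

@dataclass
class Mechanism:
    event_id: str
    catalog: str        # "FPFIT" or "HASH" (hypothetical layout)
    strike: float
    dip: float
    rake: float
    stdr: float = 0.0   # station distribution ratio (FPFIT)
    fpu: float = 90.0   # fault plane uncertainty in degrees (HASH)

def well_constrained(m: Mechanism) -> bool:
    if m.catalog == "FPFIT":
        return m.stdr > 0.65
    if m.catalog == "HASH":
        return m.fpu <= 35.0
    return False

catalog = [
    Mechanism("ev1", "FPFIT", 210, 85, 180, stdr=0.72),
    Mechanism("ev2", "FPFIT", 205, 80, 175, stdr=0.41),
    Mechanism("ev3", "HASH", 212, 88, -178, fpu=22.0),
    Mechanism("ev4", "HASH", 190, 60, 160, fpu=48.0),
]
accepted = [m.event_id for m in catalog if well_constrained(m)]
print(accepted)   # ['ev1', 'ev3']
```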

  11. Statistical earthquake focal mechanism forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2014-04-01

    Forecasts of the focal mechanisms of future shallow (depth 0-70 km) earthquakes are important for seismic hazard estimates, Coulomb stress calculations, and other models of earthquake occurrence. Here we report on a high-resolution global forecast of earthquake rate density as a function of location, magnitude and focal mechanism. In previous publications we reported forecasts of 0.5° spatial resolution, covering the latitude range from -75° to +75°, based on the Global Central Moment Tensor earthquake catalogue. In the new forecasts we have improved the spatial resolution to 0.1° and extended the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each grid point. Simultaneously, we calculate an average rotation angle between the forecast mechanism and all the surrounding mechanisms, using the method proposed by Kagan & Jackson in 1994. This average angle reveals the level of tectonic complexity of a region and indicates the accuracy of the prediction. The procedure becomes problematic where longitude lines are not approximately parallel, and where shallow earthquakes are so sparse that an adequate sample spans very large distances. North or south of 75°, the azimuths of points 1000 km away may vary by about 35°. We solved this problem by calculating focal mechanisms on a plane tangent to the Earth's surface at each forecast point, correcting for the rotation of the longitude lines at the locations of the earthquakes included in the averaging. The corrections are negligible between -30° and +30° latitude, but outside that band uncorrected rotations can introduce significant errors. Improved forecasts at 0.5° and 0.1° resolution are posted at http://eq.ess.ucla.edu/kagan/glob_gcmt_index.html.

  12. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ application programming interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  13. Seismicity Pattern Changes before the M = 4.8 Aeolian Archipelago (Italy) Earthquake of August 16, 2010

    PubMed Central

    2014-01-01

    We investigated the seismicity patterns associated with an M = 4.8 earthquake recorded in the Aeolian Archipelago on 16 August 2010, by means of the region-time-length (RTL) algorithm. This earthquake triggered landslides at Lipari, rock falls on the flanks of the Vulcano, Lipari, and Salina islands, and some damage in the village of Lipari. The RTL algorithm is widely used for investigating precursory seismicity changes before large and moderate earthquakes. We examined both the spatial and temporal characteristics of seismicity changes in the Aeolian Archipelago region before the M = 4.8 earthquake. The results reveal 6-7 months of seismic quiescence that started about 15 months before the earthquake. The spatial distribution shows an extensive area characterized by seismic quiescence, which suggests a relationship between the quiescence and the regional tectonics of the Aeolian Archipelago. PMID:24511288
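
    For orientation, the core of the RTL statistic multiplies a distance-weighted term R, a time-weighted term T and a rupture-length term L computed from the events preceding the evaluation point. The Python sketch below evaluates an un-detrended version of this product for a NumPy structured catalog; the characteristic distance r0, characteristic time t0, the cutoffs and the magnitude-to-rupture-length scaling are illustrative assumptions, and the published algorithm additionally detrends and normalizes each factor before forming the product and interpreting negative values as quiescence.

      import numpy as np

      def rtl(x, y, z, t, cat, r0=50.0, t0=1.0):
          """Un-detrended RTL value at position (x, y, z) [km] and time t [yr]
          for a structured catalog `cat` with fields 'x', 'y', 'z' [km],
          't' [yr] and 'mag'."""
          prior = cat[cat["t"] < t]
          r = np.sqrt((prior["x"] - x) ** 2 + (prior["y"] - y) ** 2 + (prior["z"] - z) ** 2)
          dt = t - prior["t"]
          keep = (r > 0) & (r <= 2.0 * r0) & (dt <= 2.0 * t0)
          r, dt, mag = r[keep], dt[keep], prior["mag"][keep]
          l = 10.0 ** (0.5 * mag - 1.8)          # rupture dimension [km], one common empirical scaling
          R = np.sum(np.exp(-r / r0))            # distance weighting
          T = np.sum(np.exp(-dt / t0))           # time weighting
          L = np.sum(l / r)                      # rupture-size weighting
          return R * T * L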

  14. Hiding earthquakes from scrupulous monitoring eyes of dense local seismic networks

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Kiser, E.

    2012-12-01

    Accurate and complete cataloguing of aftershocks is essential for a variety of purposes, including the estimation of the mainshock rupture area, the identification of seismic gaps, and seismic hazard assessment. However, immediately following large earthquakes, the seismograms recorded by local networks are noisy, with energy arriving from hundreds of aftershocks in addition to different seismic phases interfering with one another. This degrades the performance of conventional earthquake detection and location methods such as the S-P approach. This is demonstrated by results of back-projection analysis of teleseismic data, which show that a significant number of events within the first twenty-four hours after the Mw 9.0 Tohoku-oki, Japan earthquake are undetected by the Japan Meteorological Agency. The spatial distribution of the hidden events is not arbitrary. Most of these earthquakes are located close to the trench, while some are located at the outer rise. Furthermore, there is a relatively sharp trench-parallel boundary separating the detected and undetected events. We investigate the cause of these hidden earthquakes using forward modeling. The calculation of ray paths for various source locations and takeoff angles with the "shooting" method suggests that this phenomenon is a consequence of the complexities associated with the subducting slab. Laterally varying velocity structure defocuses the seismic energy from shallow earthquakes located near the trench and makes the observation of P and S arrivals difficult at stations situated on mainland Japan. Full waveform simulations confirm these results. Our forward calculations also show that the probability of detection is sensitive to the depth of the event. Shallower events near the trench are more difficult to detect than deeper earthquakes located inside the subducting plate, for which the shadow-zone effect diminishes. The modeling effort is expanded to include three

  15. Borehole strain observations of very low frequency earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Ghosh, A.; Hutchinson, A. A.

    2016-12-01

    We examine the signals of very low frequency earthquakes (VLFEs) in PBO borehole strain data in central Cascadia. These Mw 3.3-4.1 earthquakes are best observed in seismograms at periods of 20 to 50 seconds. We look for the strain they produce on timescales from about 1 to 30 minutes. First, we stack the strain produced by 13 VLFEs identified by a grid-search moment tensor inversion algorithm by Ghosh et al. (2015) and Hutchinson and Ghosh (2016), as well as several thousand VLFEs detected through template matching of these events. The VLFEs are located beneath southernmost Vancouver Island and the eastern Olympic Peninsula, and are best recorded at co-located stations B005 and B007. However, even at these stations, the signal-to-noise ratio in the stack is often low, and the records are difficult to interpret. Therefore we also combine data from multiple stations and VLFE locations, and simply look for increases in the strain rate at the VLFE times, as increases in strain rate would suggest an increase in the moment rate. We compare the background strain rate in the 12 hours centered on the VLFEs with the strain rate in the 10 minutes centered on the VLFEs. The 10-minute duration is chosen as a compromise that averages out some instrumental noise without introducing too much longer-period random-walk noise. Our results suggest a factor of 2 increase in strain rate, and thus moment rate, during the 10-minute VLFE intervals. The increase gives an average VLFE magnitude around M 3.5, within the range of magnitudes obtained with seismology. Further analyses are currently being carried out to better understand the evolution of moment release before, during, and after the VLFEs.

  16. Long aftershock sequences within continents and implications for earthquake hazard assessment.

    PubMed

    Stein, Seth; Liu, Mian

    2009-11-05

    One of the most powerful features of plate tectonics is that the known plate motions give insight into both the locations and average recurrence interval of future large earthquakes on plate boundaries. Plate tectonics gives no insight, however, into where and when earthquakes will occur within plates, because the interiors of ideal plates should not deform. As a result, within plate interiors, assessments of earthquake hazards rely heavily on the assumption that the locations of small earthquakes shown by the short historical record reflect continuing deformation that will cause future large earthquakes. Here, however, we show that many of these recent earthquakes are probably aftershocks of large earthquakes that occurred hundreds of years ago. We present a simple model predicting that the length of aftershock sequences varies inversely with the rate at which faults are loaded. Aftershock sequences within the slowly deforming continents are predicted to be significantly longer than the decade typically observed at rapidly loaded plate boundaries. These predictions are in accord with observations. So the common practice of treating continental earthquakes as steady-state seismicity overestimates the hazard in presently active areas and underestimates it elsewhere.
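
    The proposed inverse scaling between aftershock-sequence duration and fault loading rate can be illustrated in a few lines; the reference duration and the loading rates below are round illustrative numbers, not values quoted in the paper.

      # If aftershock duration scales inversely with loading rate, a decade-long
      # sequence on a plate boundary loading at ~30 mm/yr maps onto centuries to
      # millennia in slowly deforming continental interiors.
      reference_duration_yr = 10.0       # typical plate-boundary aftershock duration
      reference_rate_mm_yr = 30.0        # plate-boundary loading rate
      for rate_mm_yr in (30.0, 3.0, 0.3):
          duration_yr = reference_duration_yr * reference_rate_mm_yr / rate_mm_yr
          print(f"loading rate {rate_mm_yr:5.1f} mm/yr -> sequence duration ~{duration_yr:6.0f} yr")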

  17. Composite Earthquake Catalog of the Yellow Sea for Seismic Hazard Studies

    NASA Astrophysics Data System (ADS)

    Kang, S. Y.; Kim, K. H.; LI, Z.; Hao, T.

    2017-12-01

    The Yellow Sea (also known as the West Sea in Korea) is an epicontinental and semi-enclosed sea located between Korea and China. Recent earthquakes in the Yellow Sea, including but not limited to the Seogyuckryulbi-do (1 April 2014, magnitude 5.1), Heuksan-do (21 April 2013, magnitude 4.9), and Baekryung-do (18 May 2013, magnitude 4.9) earthquakes, and the earthquake swarm in the Boryung offshore region in 2013, remind us of the seismic hazards affecting east Asia. This series of earthquakes in the Yellow Sea raised numerous questions. Unfortunately, both governments have difficulty monitoring seismicity in the Yellow Sea because the earthquakes occur beyond their seismic networks. For example, the epicenters of the magnitude 5.1 earthquake in the Seogyuckryulbi-do region in 2014 reported by the Korea Meteorological Administration and the China Earthquake Administration differed by approximately 20 km. This illustrates the difficulty of monitoring and locating earthquakes in the region, despite the huge effort made by both governments. A joint effort is required not only to overcome the limits posed by political boundaries and geographical location but also to study the seismicity and the underground structures responsible for it. Although the well-established and developing seismic networks in Korea and China have provided an unprecedented amount and quality of seismic data, the high-quality catalog is limited to the most recent few decades, far shorter than a major earthquake cycle. It is also noted that the earthquake catalog from either country is biased toward its own territory and cannot provide a complete picture of seismicity in the Yellow Sea. In order to understand seismic hazard and tectonics in the Yellow Sea, a composite earthquake catalog has been developed. We gathered earthquake information covering the last 5,000 years from various sources. There are good reasons to believe that some listings refer to the same earthquake but with different source parameters. We established criteria in order to provide consistent
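
    One ingredient of such a composite catalog is deciding when two listings from different agencies describe the same earthquake. The minimal Python sketch below flags probable duplicates using origin-time, epicentral-distance and magnitude windows; the threshold values and the dictionary-based event format are illustrative assumptions, not the criteria established in the study.

      from math import radians, sin, cos, asin, sqrt

      def haversine_km(lat1, lon1, lat2, lon2):
          """Great-circle distance in km between two epicenters."""
          p1, p2 = radians(lat1), radians(lat2)
          dlat, dlon = p2 - p1, radians(lon2 - lon1)
          a = sin(dlat / 2.0) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2.0) ** 2
          return 2.0 * 6371.0 * asin(sqrt(a))

      def likely_duplicates(ev_a, ev_b, max_dt_s=30.0, max_dist_km=50.0, max_dmag=0.5):
          """Flag two catalog entries (dicts with 'time' as datetime, 'lat', 'lon',
          'mag') as probable listings of the same earthquake. Thresholds would be
          tuned to the location accuracy of each contributing network."""
          dt = abs((ev_a["time"] - ev_b["time"]).total_seconds())
          dist = haversine_km(ev_a["lat"], ev_a["lon"], ev_b["lat"], ev_b["lon"])
          return dt <= max_dt_s and dist <= max_dist_km and abs(ev_a["mag"] - ev_b["mag"]) <= max_dmag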

  18. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    PubMed

    Luo, Junhai; Fu, Liang

    2017-06-09

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and retain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy while reducing computational complexity.

  19. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 min after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  20. Bi-directional volcano-earthquake interaction at Mauna Loa Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Amelung, F.

    2004-12-01

    At Mauna Loa volcano, Hawaii, large-magnitude earthquakes occur mostly at the west flank (Kona area), at the southeast flank (Hilea area), and at the east flank (Kaoiki area). Eruptions at Mauna Loa occur mostly at the summit region and along fissures at the southwest rift zone (SWRZ) or the northeast rift zone (NERZ). Although historic earthquakes and eruptions at these zones appear to correlate in space and time, the mechanisms and implications of an eruption-earthquake interaction were not clear. Our analysis of the available observations reveals that eruption-earthquake pairs are statistically significant, with a random-occurrence probability of only 5 to 15 percent. We clarify this correlation with the help of elastic stress-field models, in which (i) we simulate earthquakes and calculate the resulting normal stress change at volcanically active zones of Mauna Loa, and (ii) we simulate intrusions in Mauna Loa and calculate the Coulomb stress change at the active fault zones. Our models suggest that Hilea earthquakes encourage dike intrusion in the SWRZ, Kona earthquakes encourage dike intrusion at the summit and in the SWRZ, and Kaoiki earthquakes encourage dike intrusion in the NERZ. Moreover, a dike in the SWRZ encourages earthquakes in the Hilea and Kona areas. A dike in the NERZ may either encourage or discourage earthquakes in the Hilea and Kaoiki areas. The modeled stress change patterns coincide remarkably well with the patterns of several historic eruption-earthquake pairs, clarifying the mechanisms of bi-directional volcano-earthquake interaction at Mauna Loa. The results imply that at Mauna Loa volcanic activity influences the timing and location of earthquakes, and that earthquakes influence the timing, location and volume of eruptions. In combination with near-real-time geodetic and seismic monitoring, these findings may improve volcano-tectonic risk assessment.
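
    The stress-transfer reasoning above rests on the static Coulomb failure stress change resolved on a receiver structure. A minimal sketch of that bookkeeping, with an assumed effective friction coefficient and an unclamping-positive sign convention, is given below; it illustrates the general formula only, not the specific elastic dislocation models used in the study.

      def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
          """Static Coulomb failure stress change on a receiver fault or rift (MPa):
          dCFS = d_shear + mu_eff * d_normal, with the shear stress change resolved
          in the slip (or opening) direction and the normal stress change positive
          for unclamping. Positive dCFS encourages failure; mu_eff is illustrative."""
          return d_shear_mpa + mu_eff * d_normal_mpa

      # e.g. 0.1 MPa of shear loading plus 0.2 MPa of unclamping at mu_eff = 0.4:
      # coulomb_stress_change(0.1, 0.2) -> 0.18 MPa, i.e. failure is encouraged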

  1. Tsunamigenic earthquake simulations using experimentally derived friction laws

    NASA Astrophysics Data System (ADS)

    Murphy, S.; Di Toro, G.; Romano, F.; Scala, A.; Lorito, S.; Spagnuolo, E.; Aretusini, S.; Festa, G.; Piatanesi, A.; Nielsen, S.

    2018-03-01

    Seismological, tsunami and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth as a result of different pre-stress and frictional conditions. A wealth of earthquakes of different sizes and different source features (e.g. rupture duration) can be generated in subduction zones, including tsunami earthquakes, some of which can produce extreme tsunamigenic events. Here, we offer a geological perspective principally accounting for depth-dependent frictional conditions, while adopting a simplified distribution of on-fault tectonic pre-stress. We combine a lithology-controlled, depth-dependent experimental friction law with 2D elastodynamic rupture simulations for a Tohoku-like subduction zone cross-section. Subduction zone fault rocks are dominantly incohesive and clay-rich near the surface, transitioning to cohesive and more crystalline at depth. By randomly shifting along fault dip the location of the high shear stress regions ("asperities"), moderate to great thrust earthquakes and tsunami earthquakes are produced that are quite consistent with seismological, geodetic, and tsunami observations. As an effect of depth-dependent friction in our model, slip is confined to the high stress asperity at depth; near the surface rupture is impeded by the rock-clay transition constraining slip to the clay-rich layer. However, when the high stress asperity is located in the clay-to-crystalline rock transition, great thrust earthquakes can be generated similar to the Mw 9 Tohoku (2011) earthquake.

  2. A pipeline leakage locating method based on the gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yulong; Yang, Fan; Ni, Na

    2018-04-01

    A pipeline leakage locating method based on the gradient descent algorithm is proposed in this paper. The method has low computational complexity, which makes it suitable for practical application. We built an experimental environment in a real underground pipeline network and gathered a large amount of real data over the past three months. Every leak point was verified by excavation. Results show that the positioning error is within 0.4 m, the false-alarm and missed-alarm rates are both under 20%, and the computation time is under 5 seconds.
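
    As a toy illustration of leak location by gradient descent, assume a straight pipe with a sensor at each end and a leak-induced pressure wave travelling at a known speed: the observed arrival-time difference between the two sensors predicts the leak position, and gradient descent on the squared time residual recovers it. The pipe length, wave speed, learning rate and iteration count below are assumptions made for the sketch and are unrelated to the authors' field configuration.

      def locate_leak(dt_obs, L=1000.0, c=1200.0, lr=1.0e4, iters=500):
          """Distance x [m] of a leak from sensor A on a pipe of length L [m],
          given the observed arrival-time difference dt_obs = t_A - t_B [s] of
          the leak wave travelling at speed c [m/s]. Gradient descent on the
          squared residual between predicted and observed time difference."""
          x = L / 2.0                                 # start from the midpoint
          for _ in range(iters):
              residual = (2.0 * x - L) / c - dt_obs   # predicted minus observed
              grad = 2.0 * residual * (2.0 / c)       # derivative of residual**2 w.r.t. x
              x -= lr * grad
              x = min(max(x, 0.0), L)                 # keep the estimate on the pipe
          return x

    In this one-dimensional geometry the position also follows in closed form as x = (L + c * dt_obs) / 2; the iterative version simply mirrors the gradient-descent framing of the paper.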

  3. Foreshocks and aftershocks of Pisagua 2014 earthquake: time and space evolution of megathrust event.

    NASA Astrophysics Data System (ADS)

    Fuenzalida Velasco, Amaya; Rietbrock, Andreas; Wollam, Jack; Thomas, Reece; de Lima Neto, Oscar; Tavera, Hernando; Garth, Thomas; Ruiz, Sergio

    2016-04-01

    The 2014 Pisagua earthquake of magnitude 8.2 is the first case in Chile where a foreshock sequence was clearly recorded by a local network, together with the complete sequence including the mainshock and its aftershocks. The seismicity of the last year before the mainshock includes numerous clusters close to the epicentral zone (Ruiz et al., 2014), but it was on 16 March that this activity became stronger, with the Mw 6.7 precursory event taking place off the coast of Iquique at 12 km depth. The Pisagua earthquake occurred on 1 April 2014, breaking almost 120 km N-S, and two days later a Mw 7.6 aftershock occurred south of the rupture, enlarging the zone affected by this sequence. In this work, we analyse the foreshock and aftershock sequences of the Pisagua earthquake, through their spatial and temporal evolution, for a total of 15,764 events recorded from 1 March to 31 May 2014. This event catalogue was obtained from the automatic analysis of raw seismic data from more than 50 stations installed in northern Chile and southern Peru. We used the STA/LTA algorithm for the detection of P and S arrival times on the vertical components, and then a back-propagation method in a 1D velocity model for event association and preliminary hypocenter location, following the algorithm outlined by Rietbrock et al. (2012). These locations were then improved with the NonLinLoc software using a regional velocity model. We selected the larger events and analysed their moment tensor solutions by full waveform inversion using the ISOLA software. In order to understand the nucleation and propagation of the Pisagua earthquake, we also analysed the temporal evolution of the seismicity over the three months of data. The zone where the precursory events took place was strongly activated two weeks before the mainshock and remained very active until the end of the analysed period, with an important fraction of the seismicity located in the upper plate and having
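
    As an illustration of the detection step, the following minimal sketch computes a classic STA/LTA characteristic function on a single vertical-component trace; the window lengths and the suggested trigger threshold are generic choices rather than the settings used in the study, which applied its own implementation across more than 50 stations.

      import numpy as np

      def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
          """STA/LTA characteristic function for a 1-D trace sampled at fs Hz:
          ratio of short-term to long-term average signal energy in windows
          ending at each sample. Picks are typically declared where the ratio
          exceeds a trigger threshold (e.g. 4)."""
          x2 = np.asarray(trace, dtype=float) ** 2
          nsta, nlta = int(sta_win * fs), int(lta_win * fs)
          csum = np.concatenate(([0.0], np.cumsum(x2)))
          sta = (csum[nsta:] - csum[:-nsta]) / nsta    # short-term average energy
          lta = (csum[nlta:] - csum[:-nlta]) / nlta    # long-term average energy
          ratio = np.zeros(len(x2))
          idx = np.arange(nlta - 1, len(x2))
          ratio[idx] = sta[idx - nsta + 1] / np.maximum(lta[idx - nlta + 1], 1e-20)
          return ratio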

  4. An Algorithm to Identify and Localize Suitable Dock Locations from 3-D LiDAR Scans

    DTIC Science & Technology

    2013-05-10

    This report presents an algorithm to identify and localize suitable dock locations from 3-D Light Detection and Ranging (LiDAR) scans. A LiDAR sensor collects range images from a rotating array of vertically aligned lasers, and the solution leverages the resulting point clouds. (Author: Graves, Mitchell Robert. Keywords: algorithm, dock, locations, point clouds, LiDAR, identify.)

  5. a Collaborative Cyberinfrastructure for Earthquake Seismology

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R.

    2013-12-01

    One of the challenges in real-time seismology is the prediction of an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth or magnitude estimates, and/or misevaluation of propagation characteristics, site effects and building vulnerability, can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure takes advantage of the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second most visited global earthquake information website, within tens of seconds of the occurrence of a felt event. It includes classical crowdsourcing tools, such as online questionnaires available in 39 languages, and tools to collect geolocated pictures. It also comprises information derived from real-time analysis of the traffic on the EMSC website, a method named flashsourcing: in the case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just been through. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and in some cases map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server in collaboration with Stanford University and the USGS to collect ground motion records from volunteers, and we are also involved in a project to detect earthquakes from the ground motion sensors in smartphones. Strategies have been developed for several social media platforms (Facebook, Twitter...) not only to distribute earthquake information, but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this

  6. Earthquake Warning Performance in Vallejo for the South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Wurman, G.; Price, M.

    2014-12-01

    In 2002 and 2003, Seismic Warning Systems, Inc. installed first-generation QuakeGuard earthquake warning devices at all eight fire stations in Vallejo, CA. These devices are designed to detect the P-wave of an earthquake and initiate predetermined protective actions if the impending shaking is estimated at approximately Modified Mercalli Intensity V or greater. At the Vallejo fire stations the devices were set up to sound an audio alert over the public address system and to command the equipment bay doors to open. In August 2014, after more than 11 years of operating in the fire stations with no false alarms, the five units that were still in use triggered correctly on the Mw 6.0 South Napa earthquake, less than 16 km away. The audio alert sounded in all five stations, providing fire fighters with 1.5 to 2.5 seconds of warning before the arrival of the S-wave, and the equipment bay doors opened in three of the stations. In one station the doors were disconnected from the QuakeGuard device, and another station lost power before the doors opened completely. These problems highlight just a small portion of the complexity associated with realizing actionable earthquake warnings. The issues experienced in this earthquake have already been addressed in subsequent QuakeGuard product generations, with downstream connection monitoring and backup power for critical systems. The fact that the fire fighters in Vallejo were afforded even two seconds of warning at these epicentral distances results from the design of the QuakeGuard devices, which focuses on rapid false-positive rejection and ground motion estimation. We discuss the performance of the ground motion estimation algorithms, with an emphasis on the accuracy and timeliness of the estimates at close epicentral distances.

  7. Infrasound Signal Characteristics from Small Earthquakes

    DTIC Science & Technology

    2011-09-01

    Physical insight into the source properties that contribute to the generation of infrasound signals is critical to understanding the ... with one array element co-located with a seismic station. One of the goals of this project is the recording of infrasound from earthquakes of ... (Authors: Arrowsmith, S. J.; Hale, J. M.; Burlacu, R.; Pankow, K. L.; Stump, B. W.)

  8. Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales

    NASA Astrophysics Data System (ADS)

    Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni

    2015-04-01

    If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this renders possible the concept of 'retrospective seismology' whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. Using the benefit of hindsight of earthquake location or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km[1]. Here we present the results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to include approximately those receivers that provide a stationary phase contribution to the interferometric integrals. We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer

  9. Shallow moonquakes - How they compare with earthquakes

    NASA Technical Reports Server (NTRS)

    Nakamura, Y.

    1980-01-01

    Of three types of moonquakes strong enough to be detectable at large distances - deep moonquakes, meteoroid impacts and shallow moonquakes - only shallow moonquakes are similar in nature to earthquakes. A comparison of various characteristics of moonquakes with those of earthquakes indeed shows a remarkable similarity between shallow moonquakes and intraplate earthquakes: (1) their occurrences are not controlled by tides; (2) they appear to occur in locations where there is evidence of structural weaknesses; (3) the relative abundances of small and large quakes (b-values) are similar, suggesting similar mechanisms; and (4) even the levels of activity may be close. The shallow moonquakes may be quite comparable in nature to intraplate earthquakes, and they may be of similar origin.

  10. An autocorrelation method to detect low frequency earthquakes within tremor

    USGS Publications Warehouse

    Brown, J.R.; Beroza, G.C.; Shelly, D.R.

    2008-01-01

    Recent studies have shown that deep tremor in the Nankai Trough under western Shikoku consists of a swarm of low frequency earthquakes (LFEs) that occur as slow shear slip on the down-dip extension of the primary seismogenic zone of the plate interface. The similarity of tremor in other locations suggests a similar mechanism, but the absence of cataloged low frequency earthquakes prevents a similar analysis. In this study, we develop a method for identifying LFEs within tremor. The method employs a matched-filter algorithm, similar to the technique used to infer that tremor in parts of Shikoku is comprised of LFEs; however, in this case we do not assume the origin times or locations of any LFEs a priori. We search for LFEs using the running autocorrelation of tremor waveforms for 6 Hi-net stations in the vicinity of the tremor source. Time lags showing strong similarity in the autocorrelation represent either repeats, or near repeats, of LFEs within the tremor. We test the method on an hour of Hi-net recordings of tremor and demonstrate that it extracts both known and previously unidentified LFEs. Once identified, we cross-correlate waveforms to measure relative arrival times and locate the LFEs. The results are able to explain most of the tremor as a swarm of LFEs, and the locations of newly identified events appear to fill a gap in the spatial distribution of known LFEs. This method should allow us to extend the analysis of Shelly et al. (2007a) to parts of the Nankai Trough in Shikoku that have sparse LFE coverage, and may also allow us to extend our analysis to other regions that experience deep tremor but where LFEs have not yet been identified. Copyright 2008 by the American Geophysical Union.
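
    To illustrate the flavor of searching tremor for repeats of itself, the sketch below cuts overlapping windows from a single trace, normalizes them, and reports window pairs whose zero-lag correlation coefficient exceeds a threshold, i.e. candidate repeating LFEs. The actual method scans correlation over continuous lags and combines several Hi-net stations, so the window length, step, threshold and single-channel simplification here are purely illustrative assumptions.

      import numpy as np
      from itertools import combinations

      def repeating_windows(trace, fs, win_s=6.0, step_s=3.0, threshold=0.6):
          """Cut overlapping windows from a tremor trace, correlate every pair at
          zero lag, and return (start1, start2, cc) for pairs above the threshold.
          Pairwise comparison is O(n**2) in the number of windows."""
          n, step = int(win_s * fs), int(step_s * fs)
          wins = []
          for s in range(0, len(trace) - n + 1, step):
              w = np.asarray(trace[s:s + n], dtype=float)
              w -= w.mean()
              norm = np.linalg.norm(w)
              wins.append((s, w / norm if norm > 0 else w))
          pairs = []
          for (s1, w1), (s2, w2) in combinations(wins, 2):
              cc = float(np.dot(w1, w2))
              if cc >= threshold:
                  pairs.append((s1, s2, cc))
          return pairs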

  11. Localizing Submarine Earthquakes by Listening to the Water Reverberations

    NASA Astrophysics Data System (ADS)

    Castillo, J.; Zhan, Z.; Wu, W.

    2017-12-01

    Mid-Ocean Ridge (MOR) earthquakes generally occur far from any land-based station and are of moderate magnitude, making them difficult to detect and, in most cases, to locate accurately. This limits our understanding of how MOR normal and transform faults move and the manner in which they slip. Unlike continental events, seismic records from earthquakes occurring beneath the ocean floor show complex reverberations caused by P-wave energy trapped in the water column, which are highly dependent on the source location and on how efficiently energy propagates to the sea surface near the source. These later arrivals are commonly considered to be only a nuisance, as they might sometimes interfere with the primary arrivals. However, in this study, we take advantage of the wavefield's high sensitivity to small changes in seafloor topography, and of the present-day availability of worldwide multi-beam bathymetry, to relocate submarine earthquakes by modeling these water column reverberations in teleseismic signals. Using a three-dimensional hybrid method for modeling body wave arrivals, we demonstrate that an accurate hypocentral location of a submarine earthquake (<5 km) can be achieved if the structural complexities near the source region are appropriately accounted for. This presents a novel way of studying earthquake source properties and will serve as a means to explore the influence of physical fault structure on the seismic behavior of transform faults.

  12. Seismicity map tools for earthquake studies

    NASA Astrophysics Data System (ADS)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

    We report on the development of a new online set of tools, for use within Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, JavaScript and MySQL) together with the new tools, using a database system with earthquake data. The platform allows us to carry out statistical and deterministic analysis on earthquake data using Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw line segments on the map, multiple horizontal and vertical straight lines, and multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and of earthquake cluster shifts between segments in space. The platform offers many filters, such as for plotting selected magnitude ranges or time periods. The plotting facility allows statistically based plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, calculation of the b-value, etc. What is novel about the platform is its additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools we have studied the spatial distribution trends of many earthquakes, and we show here for the first time the link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.

  13. A seismoacoustic study of the 2011 January 3 Circleville earthquake

    NASA Astrophysics Data System (ADS)

    Arrowsmith, Stephen J.; Burlacu, Relu; Pankow, Kristine; Stump, Brian; Stead, Richard; Whitaker, Rod; Hayward, Chris

    2012-05-01

    We report on a unique set of infrasound observations from a single earthquake, the 2011 January 3 Circleville earthquake (Mw 4.7, depth of 8 km), which was recorded by nine infrasound arrays in Utah. Based on an analysis of the signal arrival times and backazimuths at each array, we find that the infrasound arrivals at six arrays can be associated to the same source and that the source location is consistent with the earthquake epicentre. Results of propagation modelling indicate that the lack of associated arrivals at the remaining three arrays is due to path effects. Based on these findings we form the working hypothesis that the infrasound is generated by body waves causing the epicentral region to pump the atmosphere, akin to a baffled piston. To test this hypothesis, we have developed a numerical seismoacoustic model to simulate the generation of epicentral infrasound from earthquakes. We model the generation of seismic waves using a 3-D finite difference algorithm that accounts for the earthquake moment tensor, source time function, depth and local geology. The resultant acceleration-time histories on a 2-D grid at the surface then provide the initial conditions for modelling the near-field infrasonic pressure wave using the Rayleigh integral. Finally, we propagate the near-field source pressure through the Ground-to-Space atmospheric model using a time-domain Parabolic Equation technique. By comparing the resultant predictions with the six epicentral infrasound observations from the 2011 January 3, Circleville earthquake, we show that the observations agree well with our predictions. The predicted and observed amplitudes are within a factor of 2 (on average, the synthetic amplitudes are a factor of 1.6 larger than the observed amplitudes). In addition, arrivals are predicted at all six arrays where signals are observed, and importantly not predicted at the remaining three arrays. Durations are typically predicted to within a factor of 2, and in some cases

  14. Volcano-earthquake interaction at Mauna Loa volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, Thomas R.; Amelung, Falk

    2006-05-01

    The activity at Mauna Loa volcano, Hawaii, is characterized by eruptive fissures that propagate into the Southwest Rift Zone (SWRZ) or into the Northeast Rift Zone (NERZ) and by large earthquakes at the basal decollement fault. In this paper we examine the historic eruption and earthquake catalogues, and we test the hypothesis that the events are interconnected in time and space. Earthquakes in the Kaoiki area occur in sequence with eruptions from the NERZ, and earthquakes in the Kona and Hilea areas occur in sequence with eruptions from the SWRZ. Using three-dimensional numerical models, we demonstrate that elastic stress transfer can explain the observed volcano-earthquake interaction. We examine stress changes due to typical intrusions and earthquakes. We find that intrusions change the Coulomb failure stress along the decollement fault so that NERZ intrusions encourage Kaoiki earthquakes and SWRZ intrusions encourage Kona and Hilea earthquakes. On the other hand, earthquakes decompress the magma chamber and unclamp part of the Mauna Loa rift zone, i.e., Kaoiki earthquakes encourage NERZ intrusions, whereas Kona and Hilea earthquakes encourage SWRZ intrusions. We discuss how changes of the static stress field affect the occurrence of earthquakes as well as the occurrence, location, and volume of dikes and of associated eruptions and also the lava composition and fumarolic activity.

  15. Earthquake prediction evaluation standards applied to the VAN Method

    NASA Astrophysics Data System (ADS)

    Jackson, David D.

    Earthquake prediction research must meet certain standards before it can be suitably evaluated for potential application in decision making. For methods that result in a binary (on or off) alarm condition, requirements include (1) a quantitative description of observables that trigger an alarm, (2) a quantitative description, including ranges of time, location, and magnitude, of the predicted earthquakes, (3) documented evidence of all previous alarms, (4) a complete list of predicted earthquakes, (5) a complete list of unpredicted earthquakes. The VAN technique [Varotsos and Lazaridou, 1991; Varotsos et al., 1996] has not yet been stated as a testable hypothesis. It fails criteria (1) and (2) so it is not ready to be evaluated properly. Although telegrams were transmitted in advance of claimed successes, these telegrams did not fully specify the predicted events, and all of the published statistical evaluations involve many subjective ex post facto decisions. Lacking a statistically demonstrated relationship to earthquakes, a candidate prediction technique should satisfy several plausibility criteria, including: (1) a reasonable relationship between the location of the candidate precursor and that of the predicted earthquake, (2) some demonstration that the candidate precursory observations are related to stress, strain, or other quantities related to earthquakes, and (3) the existence of co-seismic as well as pre-seismic variations of the candidate precursor. The VAN technique meets none of these criteria.

  16. Investigation of an Unusually Shallow Earthquake Sequence in Mogul, NV from a Discrimination Perspective (Postprint): Annual Report 1

    DTIC Science & Technology

    2012-05-09

    the ML>1.0 Mogul, Nevada earthquakes located by the Nevada Seismological Laboratory; mining explosions (ML>2.0) and crustal earthquakes (ML>2.5) in the ... distinguish between very shallow crustal earthquakes and underground nuclear explosions are not well developed, significantly because such well-instrumented

  17. Did you feel it? : citizens contribute to earthquake science

    USGS Publications Warehouse

    Wald, David J.; Dewey, James W.

    2005-01-01

    Since the early 1990s, the magnitude and location of an earthquake have been available within minutes on the Internet. Now, as a result of work by the U.S. Geological Survey and with the cooperation of various regional seismic networks, people who experience an earthquake can go online and share information about its effects to help create a map of shaking intensities and damage. Such “Community Internet Intensity Maps” (CIIMs) contribute greatly toward the quick assessment of the scope of an earthquake emergency and provide valuable data for earthquake research.

  18. Extending the ISC-GEM Global Earthquake Instrumental Catalogue

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Engdahl, Bob; Storchak, Dmitry; Villaseñor, Antonio; Harris, James

    2015-04-01

    After a 27-month project funded by the GEM Foundation (www.globalquakemodel.org), in January 2013 we released the ISC-GEM Global Instrumental Earthquake Catalogue (1900-2009) (www.isc.ac.uk/iscgem/index.php) as a special product for use in seismic hazard studies. The new catalogue was necessary because improved seismic hazard studies require earthquake catalogues that are homogeneous (to the largest extent possible) over time in their fundamental parameters, such as location and magnitude. Due to time and resource limitations, the ISC-GEM catalogue (1900-2009) included earthquakes selected according to the following time-variable cut-off magnitudes: Ms=7.5 for earthquakes occurring before 1918; Ms=6.25 between 1918 and 1963; and Ms=5.5 from 1964 onwards. Because of the importance of having a reliable seismic input for seismic hazard studies, funding from GEM and two commercial companies in the US and UK allowed us to start working on the extension of the ISC-GEM catalogue, both for earthquakes that occurred after 2009 and for earthquakes listed in the International Seismological Summary (ISS) which fell below the cut-off magnitude of 6.25. This extension is part of a four-year program that aims at including in the ISC-GEM catalogue large global earthquakes that occurred before the beginning of the ISC Bulletin in 1964. In this contribution we present the updated ISC-GEM catalogue, which will include over 1000 more earthquakes that occurred in 2010-2011 and several hundred more between 1950 and 1959. The catalogue extension between 1935 and 1949 is currently underway. The extension of the ISC-GEM catalogue will also be helpful for regional cross-border seismic hazard studies, as the ISC-GEM catalogue should be used as a basis for cross-checking the consistency in location and magnitude of those earthquakes listed both in the ISC-GEM global catalogue and in regional catalogues.

  19. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a really challenging task, even if a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task for the case when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of the Shannon entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
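
    The entropy-based meta-characteristic can be illustrated in a few lines: given an a posteriori hypocenter probability density sampled on a regular grid, the Shannon entropy measures how concentrated (well constrained) the solution is. The uniform cell volume and the natural-logarithm (nats) convention are assumptions made for this sketch.

      import numpy as np

      def shannon_entropy(posterior, cell_volume=1.0):
          """Shannon entropy (nats) of a gridded a posteriori location PDF.
          `posterior` holds non-negative density values on a regular grid; the
          probability masses p_i = posterior_i * cell_volume are renormalized to
          sum to one. Lower entropy indicates a more concentrated solution."""
          p = np.asarray(posterior, dtype=float).ravel() * cell_volume
          p = p / p.sum()
          p = p[p > 0]
          return float(-np.sum(p * np.log(p)))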

  20. Earthquake technology fights crime

    USGS Publications Warehouse

    Lahr, John C.; Ward, Peter L.; Stauffer, Peter H.; Hendley, James W.

    1996-01-01

    Scientists with the U.S. Geological Survey have adapted their methods for quickly finding the exact source of an earthquake to the problem of locating gunshots. On the basis of this work, a private company is now testing an automated gunshot-locating system in a San Francisco Bay area community. This system allows police to rapidly pinpoint and respond to illegal gunfire, helping to reduce crime in our neighborhoods.

  1. Using Earthquake Location and Coda Attenuation Analysis to Explore Shallow Structures Above the Socorro Magma Body, New Mexico

    NASA Astrophysics Data System (ADS)

    Schmidt, J. P.; Bilek, S. L.; Worthington, L. L.; Schmandt, B.; Aster, R. C.

    2017-12-01

    The Socorro Magma Body (SMB) is a thin, sill-like intrusion with a top at 19 km depth covering approximately 3400 km2 within the Rio Grande Rift. InSAR studies show crustal uplift patterns linked to SMB inflation, with deformation rates of 2.5 mm/yr in the area of maximum uplift and some peripheral subsidence. Our understanding of the emplacement history and the shallow structure above the SMB is limited. We use a large seismic deployment to explore seismicity and crustal attenuation in the SMB region, focusing on the area of highest observed uplift, to investigate the possible existence of fluid or magma in the upper crust. We would expect to see shallower earthquakes and/or higher attenuation if high heat flow, fluid or magma is present in the upper crust. Over 800 short-period vertical-component geophones situated above the northern portion of the SMB were deployed for two weeks in 2015. These data are combined with other broadband and short-period seismic stations to detect and locate earthquakes as well as to estimate seismic attenuation. We use phase arrivals from the full dataset to relocate a set of 33 local/regional earthquakes recorded during the deployment. We also measure amplitude decay after the S-wave arrival to estimate coda attenuation caused by scattering of seismic waves and anelastic processes. Coda attenuation is estimated using the single backscatter method described by Aki and Chouet (1975), filtering the seismograms at 6, 9 and 12 Hz center frequencies. Earthquakes occurred at 2-13 km depth during the deployment, but no spatial patterns linked with the high-uplift region were observed over this short duration. Attenuation results for this deployment suggest Q values ranging from 130 to 2000, with an average around 290, comparable to Q estimates from other studies of the western US. With our dense station coverage, we explore attenuation over smaller scales, and find higher attenuation for stations in the area of maximum uplift relative to stations
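
    A minimal sketch of the single-backscatter coda-Q estimate: with the standard body-wave form A(t) ~ t**-1 * exp(-pi*f*t/Qc), the quantity ln(A*t) decays linearly with lapse time, and Qc follows from the slope of a least-squares line. The envelope smoothing, lapse-time window and band-pass details are left out here and would follow the study's own processing.

      import numpy as np

      def coda_q(envelope, times, freq):
          """Single-backscatter coda-Q estimate in the style of Aki and Chouet (1975).
          `envelope` is the smoothed coda amplitude A(t) of a trace band-passed
          around `freq` [Hz], sampled at lapse times `times` [s] from the origin.
          The model A(t) = S * t**-1 * exp(-pi*freq*t/Qc) gives
          ln(A*t) = const - (pi*freq/Qc)*t, so Qc comes from the fitted slope."""
          y = np.log(np.asarray(envelope, dtype=float) * np.asarray(times, dtype=float))
          slope, intercept = np.polyfit(times, y, 1)
          return -np.pi * freq / slope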

  2. Assessing the Applicability of Earthquake Early Warning in Nicaragua.

    NASA Astrophysics Data System (ADS)

    Massin, F.; Clinton, J. F.; Behr, Y.; Strauch, W.; Cauzzi, C.; Boese, M.; Talavera, E.; Tenorio, V.; Ramirez, J.

    2016-12-01

    Nicaragua, like much of Central America, suffers from frequent damaging earthquakes (six M7+ earthquakes occurred in the last 100 years). Thrust events occur at the Middle America Trench, where the Cocos plate subducts eastward beneath the Caribbean plate at 72-81 mm/yr. Shallow crustal events occur onshore, with potentially extensive damage, as demonstrated in 1972 by a M6.2 earthquake 5 km beneath Managua. This seismotectonic setting is challenging for earthquake early warning (EEW) because the target events derive both from offshore seismicity, with potentially large lead times but uncertain locations, and from shallow seismicity in close proximity to densely urbanized areas, where an early warning would be short if available at all. Nevertheless, EEW could reduce Nicaragua's earthquake exposure. The Swiss Development and Cooperation Fund and the Nicaraguan Government have funded a collaboration between the Swiss Seismological Service (SED) at ETH Zurich and the Nicaraguan Geosciences Institute (INETER) in Managua to investigate and build a prototype EEW system for Nicaragua and the wider region. In this contribution, we present the potential of EEW to effectively alert Nicaragua and the neighbouring regions. We model alert time delays using all available seismic stations (existing and planned) in the region, as well as communication and processing delays (observed and optimal), to estimate the current and potential performance of EEW alerts. Theoretical results are verified with the output of the Virtual Seismologist in SeisComP3 (VS(SC3)). VS(SC3) is implemented in the INETER SeisComP3 system for real-time operation, and as an offline instance that simulates real-time operation in order to record processing delays for playback events. We compare our results with similar studies for Europe, California and New Zealand. We further highlight current capabilities and challenges for providing EEW alerts in Nicaragua. We also discuss how combining different algorithms, like e.g. VS

  3. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge; the others are the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity, and the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake occurrence can be thought of as a generating source of an FDL time series. The connection between past and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (<6.5R) can be predicted within a very good accuracy window (+-1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a triggering planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds, and examined the earthquake hit rates (for a window of 3 days, i.e. +-1 day of the target date) and for <6.5R. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with

  4. Comparison of hypocentre parameters of earthquakes in the Aegean region

    NASA Astrophysics Data System (ADS)

    Özel, Nurcan M.; Shapira, Avi; Harris, James

    2007-06-01

    The Aegean Sea is one of the more seismically active areas in the Euro-Mediterranean region. The seismic activity in the Aegean Sea is monitored by a number of local agencies that contribute their data to the International Seismological Centre (ISC). Consequently, the ISC Bulletin may serve as a reliable reference for assessing the capabilities of local agencies to monitor moderate and low magnitude earthquakes. We have compared bulletins of the Kandilli Observatory and Earthquake Research Institute (KOERI) and the ISC for the period 1976-2003, which comprises the most complete data sets for both KOERI and the ISC. The selected study area is the East Aegean Sea and West Turkey, bounded by latitude 35-41°N and longitude 24-29°E. The total number of events known to have occurred in this area during 1976-2003 is about 41,638. Seventy-two percent of those earthquakes were located by the ISC and 75% were located by KOERI. As expected, epicentre location discrepancies between ISC and KOERI solutions become larger with distance from the KOERI seismic network. Of the 22,066 earthquakes located by both the ISC and KOERI, only 4% show a difference of 50 km or more. About 140 earthquakes show a discrepancy of more than 100 km. Focal depth determinations differ mainly in the subduction zone along the Hellenic arc. Less than 2% of the events differ in their focal depth by more than 25 km, yet the location solutions of about 30 events differ by more than 100 km. Almost a quarter of the events listed in the ISC Bulletin are missed by KOERI, most of them occurring off the coast of Turkey in the East Aegean. Based on the frequency-magnitude distributions, the KOERI Bulletin is complete for earthquakes with duration magnitudes Md > 2.7 (both located and assigned magnitudes), whereas the threshold magnitude for events with location and magnitude determinations by the ISC is mb > 4.0. KOERI magnitudes seem to be poorly correlated with ISC magnitudes, suggesting relatively high uncertainty in the

  5. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts, or vegetation, and it cannot capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and by probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to solve the fault plane ambiguity in focal mechanism determination and constrain the fault
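
    A minimal sketch of the clustering idea (not the authors' code): partition hypocentres with K-means, then approximate a planar fault for each cluster from the eigenvectors of its covariance matrix, the smallest-variance direction giving the plane normal. The function names, the fixed number of clusters and the use of scikit-learn are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def fit_fault_planes(hypocentres, n_faults=3, random_state=0):
          # hypocentres: (N, 3) array of local x, y, z coordinates (e.g. km).
          labels = KMeans(n_clusters=n_faults, n_init=10,
                          random_state=random_state).fit_predict(hypocentres)
          planes = []
          for k in range(n_faults):
              pts = hypocentres[labels == k]
              centroid = pts.mean(axis=0)
              # Eigenvector with the smallest eigenvalue is the plane normal;
              # the remaining two eigenvectors span the fitted fault plane.
              evals, evecs = np.linalg.eigh(np.cov((pts - centroid).T))
              planes.append({"centroid": centroid,
                             "normal": evecs[:, 0],
                             "rms_thickness": float(np.sqrt(evals[0]))})
          return labels, planes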

  6. Source properties of earthquakes near the Salton Sea triggered by the 16 October 1999 M 7.1 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Hough, S.E.; Kanamori, H.

    2002-01-01

    We analyze the source properties of a sequence of triggered earthquakes that occurred near the Salton Sea in southern California in the immediate aftermath of the M 7.1 Hector Mine earthquake of 16 October 1999. The sequence produced a number of early events that were not initially located by the regional network, including two moderate earthquakes: the first within 30 sec of the P-wave arrival and a second approximately 10 minutes after the mainshock. We use available amplitude and waveform data from these events to estimate magnitudes to be approximately 4.7 and 4.4, respectively, and to obtain crude estimates of their locations. The sequence of small events following the initial M 4.7 earthquake is clustered and suggestive of a local aftershock sequence. Using both broadband TriNet data and analog data from the Southern California Seismic Network (SCSN), we also investigate the spectral characteristics of the M 4.4 event and other triggered earthquakes using empirical Green's function (EGF) analysis. We find that the source spectra of the events are consistent with expectations for tectonic (brittle shear failure) earthquakes, and infer stress drop values of 0.1 to 6 MPa for six M 2.1 to M 4.4 events. The estimated stress drop values are within the range observed for tectonic earthquakes elsewhere. They are relatively low compared to typically observed stress drop values, which is consistent with expectations for faulting in an extensional, high heat flow regime. The results therefore suggest that, at least in this case, triggered earthquakes are associated with a brittle shear failure mechanism. This further suggests that triggered earthquakes may tend to occur in geothermal-volcanic regions because shear failure occurs at, and can be triggered by, relatively low stresses in extensional regimes.
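
    For orientation only (this is standard Brune-model arithmetic, not the paper's exact procedure), a corner frequency from EGF spectral ratios and a seismic moment yield a source radius and stress drop as follows; the shear-wave speed and the example values are assumptions.

      import math

      def brune_stress_drop(m0_newton_metre, corner_freq_hz, beta_m_per_s=3500.0):
          # Brune (1970): source radius r = 2.34 * beta / (2 * pi * fc),
          # stress drop = 7 * M0 / (16 * r**3).
          r = 2.34 * beta_m_per_s / (2.0 * math.pi * corner_freq_hz)
          return 7.0 * m0_newton_metre / (16.0 * r ** 3)

      # Example: an M ~4.4 event (M0 ~ 5e15 N m) with fc ~ 1.5 Hz gives ~3 MPa,
      # within the 0.1-6 MPa range quoted above.
      print(brune_stress_drop(5e15, 1.5) / 1e6, "MPa")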

  7. The 2008 Wells, Nevada Earthquake Sequence: Application of Subspace Detection and Multiple Event Relocation Techniques

    NASA Astrophysics Data System (ADS)

    Nealy, J. L.; Benz, H.; Hayes, G. P.; Bergman, E.; Barnhart, W. D.

    2016-12-01

    On February 21, 2008 at 14:16:02 (UTC), Wells, Nevada experienced a Mw 6.0 earthquake, the largest earthquake in the state within the past 50 years. Here, we re-analyze in detail the spatiotemporal variations of the foreshock and aftershock sequence and compare the distribution of seismicity to a recent slip model based on inversion of InSAR observations. A catalog of earthquakes for the period February 1, 2008 through August 31, 2008 was derived by combining arrival-time picks from a kurtosis detector (primarily P arrivals) and a subspace detector (primarily S arrivals), associating the combined pick dataset, and applying multiple-event relocation techniques using the 19 closest USArray Transportable Array stations, permanent regional seismic monitoring stations in Nevada and Utah, and temporary stations deployed for an aftershock study. We were able to detect several thousand earthquakes in the months following the mainshock as well as several foreshocks in the days leading up to the event. We reviewed the picks for the largest 986 earthquakes and relocated them using the Hypocentroidal Decomposition (HD) method. The HD technique provides both relative locations for the individual earthquakes and an absolute location for the earthquake cluster, resulting in absolute locations of the events in the cluster having minimal bias from unknown Earth structure. A subset of these "calibrated" earthquake locations that spanned the duration of the sequence and had small location uncertainties was used as prior constraints within a second relocation effort using the entire dataset and the Bayesloc approach. Accurate locations (to within 2 km) were obtained using Bayesloc for 1,952 of the 2,157 events associated over the seven-month period of the study. The final catalog of earthquake hypocenters indicates that the aftershocks extend for about 20 km along the strike of the ruptured fault. The aftershocks occur primarily updip and along the

  8. Monochromatic body waves excited by great subduction zone earthquakes

    NASA Astrophysics Data System (ADS)

    Ihmlé, Pierre F.; Madariaga, Raúl

    Large quasi-monochromatic body waves were excited by the 1995 Chile Mw=8.1 and by the 1994 Kurile Mw=8.3 events. They are observed on vertical/radial component seismograms following the direct P and Pdiff arrivals, at all azimuths. We devise a slant stack algorithm to characterize the source of the oscillations. This technique aims at locating near-source isotropic scatterers using broadband data from global networks. For both events, we find that the oscillations emanate from the trench. We show that these monochromatic waves are due to localized oscillations of the water column. Their period corresponds to the gravest 1-D mode of a water layer for vertically traveling compressional waves. We suggest that these monochromatic body waves may yield additional constraints on the source process of great subduction zone earthquakes.
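
    As a back-of-the-envelope check of this interpretation (assumed numbers, not values from the paper), a water layer bounded by a free surface above and a much stiffer seafloor below acts as a quarter-wavelength resonator for vertically travelling P waves, so its fundamental period is T = 4H / alpha_w:

      def water_column_period(depth_km, vp_water_km_per_s=1.5):
          # Fundamental (gravest) 1-D mode of the water layer: T = 4 * H / alpha_w.
          return 4.0 * depth_km / vp_water_km_per_s

      # Trench-like water depths of 5-7 km give periods of roughly 13-19 s.
      for depth in (5.0, 6.0, 7.0):
          print(depth, "km ->", round(water_column_period(depth), 1), "s")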

  9. Updated earthquake catalogue for seismic hazard analysis in Pakistan

    NASA Astrophysics Data System (ADS)

    Khan, Sarfraz; Waseem, Muhammad; Khan, Muhammad Asif; Ahmed, Waqas

    2018-03-01

    A reliable and homogenized earthquake catalogue is essential for seismic hazard assessment in any area. This article describes the compilation and processing of an updated earthquake catalogue for Pakistan. The earthquake catalogue compiled in this study for the region (a quadrangle bounded by latitude 20-40° N and longitude 40-83° E) includes 36,563 earthquake events, which are reported as 4.0-8.3 moment magnitude (M W) and span from 25 AD to 2016. Relationships are developed between moment magnitude and the body- and surface-wave magnitude scales to unify the catalogue in terms of M W. The catalogue includes earthquakes from Pakistan and neighbouring countries to minimize the effects of geopolitical boundaries in seismic hazard assessment studies. Earthquakes reported by local and international agencies as well as individual catalogues are included. The proposed catalogue is further used to obtain the magnitude of completeness after removal of dependent events using four different algorithms. Finally, seismicity parameters of the seismic sources are reported, and recommendations are made for seismic hazard assessment studies in Pakistan.
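
    A schematic of the homogenization step (the regression form and any coefficients here are placeholders, not those derived in the study): fit a linear conversion from a reported magnitude scale to M W on events that carry both magnitudes, then apply it to the rest of the catalogue.

      import numpy as np

      def fit_magnitude_conversion(m_reported, m_w):
          # Least-squares fit of Mw = a * m_reported + b on dually reported events.
          A = np.vstack([m_reported, np.ones_like(m_reported)]).T
          (a, b), *_ = np.linalg.lstsq(A, m_w, rcond=None)
          return a, b

      def convert_to_mw(m_reported, a, b):
          return a * np.asarray(m_reported) + b

      # Usage sketch: a, b = fit_magnitude_conversion(mb_values, mw_values)
      #               mw_unified = convert_to_mw(mb_only_values, a, b)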

  10. Earthquake location determination using data from DOMERAPI and BMKG seismic networks: A preliminary result of DOMERAPI project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramdhan, Mohamad; Agency for Meteorology, Climatology and Geophysics of Indonesia; Nugraha, Andri Dian

    DOMERAPI project has been conducted to comprehensively study the internal structure of Merapi volcano, especially about deep structural features beneath the volcano. DOMERAPI earthquake monitoring network consists of 46 broad-band seismometers installed around the Merapi volcano. Earthquake hypocenter determination is a very important step for further studies, such as hypocenter relocation and seismic tomographic imaging. Ray paths from earthquake events occurring outside the Merapi region can be utilized to delineate the deep magma structure. Earthquakes occurring outside the DOMERAPI seismic network will produce an azimuthal gap greater than 180°. Owing to this situation the stations from BMKG seismic network can be used jointly to minimize the azimuthal gap. We identified earthquake events manually and carefully, and then picked arrival times of P and S waves. The data from the DOMERAPI seismic network were combined with the BMKG data catalogue to determine earthquake events outside the Merapi region. For future work, we will also use the BPPTKG (Center for Research and Development of Geological Disaster Technology) data catalogue in order to study shallow structures beneath the Merapi volcano. The application of all data catalogues will provide good information as input for further advanced studies and volcano hazards mitigation.

  11. Detection and Mapping of the September 2017 Mexico Earthquakes Using DAS Fiber-Optic Infrastructure Arrays

    NASA Astrophysics Data System (ADS)

    Karrenbach, M. H.; Cole, S.; Williams, J. J.; Biondi, B. C.; McMurtry, T.; Martin, E. R.; Yuan, S.

    2017-12-01

    Fiber-optic distributed acoustic sensing (DAS) uses conventional telecom fibers for a wide variety of monitoring purposes. Fiber-optic arrays can be located along pipelines for leak detection; along borders and perimeters to detect and locate intruders, or along railways and roadways to monitor traffic and identify and manage incidents. DAS can also be used to monitor oil and gas reservoirs and to detect earthquakes. Because thousands of such arrays are deployed worldwide and acquiring data continuously, they can be a valuable source of data for earthquake detection and location, and could potentially provide important information to earthquake early-warning systems. In this presentation, we show that DAS arrays in Mexico and the United States detected the M8.1 and M7.2 Mexico earthquakes in September 2017. At Stanford University, we have deployed a 2.4 km fiber-optic DAS array in a figure-eight pattern, with 600 channels spaced 4 meters apart. Data have been recorded continuously since September 2016. Over 800 earthquakes from across California have been detected and catalogued. Distant teleseismic events have also been recorded, including the two Mexican earthquakes. In Mexico, fiber-optic arrays attached to pipelines also detected these two events. Because of the length of these arrays and their proximity to the event locations, we can not only detect the earthquakes but also make location estimates, potentially in near real time. In this presentation, we review the data recorded for these two events recorded at Stanford and in Mexico. We compare the waveforms recorded by the DAS arrays to those recorded by traditional earthquake sensor networks. Using the wide coverage provided by the pipeline arrays, we estimate the event locations. Such fiber-optic DAS networks can potentially play a role in earthquake early-warning systems, allowing actions to be taken to minimize the impact of an earthquake on critical infrastructure components. While many such fiber

  12. UCERF3: A new earthquake forecast for California's complex fault system

    USGS Publications Warehouse

    Field, Edward H.; ,

    2015-01-01

    With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  13. MyShake - A smartphone app to detect earthquake

    NASA Astrophysics Data System (ADS)

    Kong, Q.; Allen, R. M.; Schreier, L.; Kwon, Y. W.

    2015-12-01

    We designed an Android app that harnesses the accelerometers in personal smartphones to record earthquake-shaking data for research, hazard information and warnings. The app can distinguish earthquake shaking from daily human activities based on the different patterns behind the movements. It can also be triggered by a traditional earthquake early warning (EEW) system to record for a certain amount of time to collect earthquake data. When the app is triggered by earthquake-like movement, it sends the trigger information, containing the time and location of the trigger, back to our server; at the same time, it stores the waveform data locally on the phone first and uploads it to our server later. Trigger information from multiple phones will be processed in real time on the server to find coherent signals that confirm earthquakes. Therefore, the app provides the basis for a smartphone seismic network that can detect earthquakes and even provide warnings. A planned public roll-out of MyShake could collect millions of seismic recordings for large earthquakes in many regions around the world.
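
    A toy sketch of the server-side confirmation step described above (the production system is more sophisticated, and all thresholds and names here are illustrative): an earthquake is declared only when enough phones trigger within a short time window and a small area.

      from math import radians, sin, cos, asin, sqrt

      def haversine_km(lat1, lon1, lat2, lon2):
          # Great-circle distance between two points, in kilometres.
          dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
          return 2.0 * 6371.0 * asin(sqrt(a))

      def confirm_event(triggers, window_s=10.0, radius_km=30.0, min_phones=4):
          # triggers: list of (unix_time, lat, lon) reports from individual phones.
          for t0, lat0, lon0 in triggers:
              nearby = sum(1 for t, lat, lon in triggers
                           if abs(t - t0) <= window_s
                           and haversine_km(lat0, lon0, lat, lon) <= radius_km)
              if nearby >= min_phones:
                  return True
          return False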

  14. Earthquake Hazard Analysis Methods: A Review

    NASA Astrophysics Data System (ADS)

    Sari, A. M.; Fakhrurrozi, A.

    2018-02-01

    Earthquakes are among the natural disasters that have the most significant impact in terms of risk and damage. Countries such as China, Japan, and Indonesia are located on actively moving continental plates and experience earthquakes more frequently than other countries. Several methods of earthquake hazard analysis have been applied, for example analyzing seismic zones and earthquake hazard micro-zonation, the Neo-Deterministic Seismic Hazard Analysis (N-DSHA) method, and remote sensing. In application, it is necessary to first review the effectiveness of each technique. Considering the efficiency of time and the accuracy of data, remote sensing is used as a reference to assess earthquake hazard accurately and quickly, as only limited time is available for sound decision-making shortly after a disaster. Exposed areas and areas potentially vulnerable to earthquake hazards can be easily analyzed using remote sensing. Technological developments in remote sensing, such as GeoEye-1, provide added value and excellence in the use of remote sensing as one of the methods for assessing earthquake risk and damage. Furthermore, the use of this technique is expected to be considered in designing policies for disaster management in particular and can reduce the risk of natural disasters such as earthquakes in Indonesia.

  15. Time-Reversal Location of the 2004 M6.0 Parkfield Earthquake Using the Vertical Component of Seismic Data.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Johnson, P.; Huang, L.; Randall, G.; Patton, H.; Montagner, J.

    2007-12-01

    In this work we describe Time Reversal experiments applying seismic waves recorded from the 2004 M6.0 Parkfield Earthquake. The reverse seismic wavefield is created by time-reversing recorded seismograms and then injecting them from the seismograph locations into a whole-Earth velocity model. The concept is identical to acoustic Time-Reversal Mirror laboratory experiments, except the seismic data are numerically backpropagated through a velocity model (Fink, 1996; Ulrich et al., 2007). Data are backpropagated using the finite element code SPECFEM3D (Komatitsch et al., 2002), employing the velocity model s20rts (Ritsema et al., 2000). In this paper, we backpropagate only the vertical component of seismic data from about 100 broadband surface stations located worldwide (FDSN), using the period band of 23-120 s. We use only those waveforms that are highly correlated with forward-propagated synthetics. The focusing quality depends upon the type of waves backpropagated; for the vertical displacement component the possible types include body waves, Rayleigh waves, or their combination. We show that Rayleigh waves, both real and artifact, dominate the reverse movie in all cases. They are created during rebroadcast of the time-reversed signals, including body wave phases, because we use point-like force sources for injection. The artifact waves, termed "ghosts", manifest as surface waves but do not correspond to real wave phases during the forward propagation. The surface ghost waves can significantly blur the focusing at the source. We find that the ghosts cannot be easily eliminated in the manner described by Tsogka & Papanicolaou (2002). It is necessary to understand how they are created in order to remove them during TRM studies, particularly when using only the body waves. For this moderate magnitude of earthquake we demonstrate the robustness of the TRM as an alternative location method despite the restriction to vertical component phases. One advantage of TRM location

  16. Prompt Assessment of Global Earthquakes for Response (PAGER): A System for Rapidly Determining the Impact of Earthquakes Worldwide

    USGS Publications Warehouse

    Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy

    2009-01-01

    Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.

  17. The Road to Total Earthquake Safety

    NASA Astrophysics Data System (ADS)

    Frohlich, Cliff

    Cinna Lomnitz is possibly the most distinguished earthquake seismologist in all of Central and South America. Among many other credentials, Lomnitz has personally experienced the shaking and devastation that accompanied no fewer than five major earthquakes—Chile, 1939; Kern County, California, 1952; Chile, 1960; Caracas, Venezuela, 1967; and Mexico City, 1985. Thus he clearly has much to teach someone like myself, who has never even actually felt a real earthquake. What is this slim book? The Road to Total Earthquake Safety summarizes Lomnitz's May 1999 presentation at the Seventh Mallet-Milne Lecture, sponsored by the Society for Earthquake and Civil Engineering Dynamics. His arguments are motivated by the damage that occurred in three earthquakes—Mexico City, 1985; Loma Prieta, California, 1989; and Kobe, Japan, 1995. All three quakes occurred in regions where earthquakes are common. Yet in all three some of the worst damage occurred in structures located a significant distance from the epicenter and engineered specifically to resist earthquakes. Some of the damage also indicated that the structures failed because they had experienced considerable rotational or twisting motion. Clearly, Lomnitz argues, there must be fundamental flaws in the usually accepted models explaining how earthquakes generate strong motions, and how we should design resistant structures.

  18. Tsunami simulations of the 1867 Virgin Island earthquake: Constraints on epicenter location and fault parameters

    USGS Publications Warehouse

    Barkan, Roy; ten Brink, Uri S.

    2010-01-01

    The 18 November 1867 Virgin Island earthquake and the tsunami that closely followed caused considerable loss of life and damage in several places in the northeast Caribbean region. The earthquake was likely a manifestation of the complex tectonic deformation of the Anegada Passage, which cuts across the Antilles island arc between the Virgin Islands and the Lesser Antilles. In this article, we attempt to characterize the 1867 earthquake with respect to fault orientation, rake, dip, fault dimensions, and first tsunami wave propagating phase, using tsunami simulations that employ high-resolution multibeam bathymetry. In addition, we present new geophysical and geological observations from the region of the suggested earthquake source. Results of our tsunami simulations based on relative amplitude comparison limit the earthquake source to be along the northern wall of the Virgin Islands basin, as suggested by Reid and Taber (1920), or on the carbonate platform north of the basin, and not in the Virgin Islands basin, as commonly assumed. The numerical simulations suggest the 1867 fault was striking 120°–135° and had a mixed normal and left-lateral motion. First propagating wave phase analysis suggests a fault striking 300°–315° is also possible. The best-fitting rupture length was found to be relatively small (50 km), probably indicating the earthquake had a moment magnitude of ∼7.2. Detailed multibeam echo sounder surveys of the Anegada Passage bathymetry between St. Croix and St. Thomas reveal a scarp, which cuts the northern wall of the Virgin Islands basin. High-resolution seismic profiles further indicate it to be a reasonable fault candidate. However, the fault orientation and the orientation of other subparallel faults in the area are more compatible with right-lateral motion. For the other possible source region, no clear disruption in the bathymetry or seismic profiles was found on the carbonate platform north of the basin.

  19. Rapid earthquake characterization using MEMS accelerometers and volunteer hosts following the M 7.2 Darfield, New Zealand, Earthquake

    USGS Publications Warehouse

    Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.

    2014-01-01

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.

  20. Brady's Geothermal Field Nodal Seismometer Earthquake Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Feigl

    90-second records of data from 238 three-component nodal seismometers deployed at Brady's geothermal field. The time window catches an earthquake arrival. Earthquake data from the USGS online catalog: Magnitude: 4.3 ML +/- 0.4; Location: 38.479° N, 118.366° W +/- 0.7 km; Depth: 9.9 km +/- 0.7 km; Date and Time: 2016-03-21 07:37:10.535 UTC.

  1. Earthquake Catalogue of the Caucasus

    NASA Astrophysics Data System (ADS)

    Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.

    2016-12-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms˜7.0, Io=9); the Lechkhumi-Svaneti earthquake of 1350 (Ms˜7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms˜6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms˜6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms˜6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori, 1920; Tabatskuri, 1940; Chkhalta, 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia (˜25 stations), Azerbaijan (˜35 stations), Armenia (˜14 stations)). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus". The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records. We improved the locations of the events and recalculated moment magnitudes in order to obtain a unified magnitude

  2. Efficient blind search for similar-waveform earthquakes in years of continuous seismic data

    NASA Astrophysics Data System (ADS)

    Yoon, C. E.; Bergen, K.; Rong, K.; Elezabi, H.; Bailis, P.; Levis, P.; Beroza, G. C.

    2017-12-01

    Cross-correlating an earthquake waveform template with continuous seismic data has proven to be a sensitive, discriminating detector of small events missing from earthquake catalogs, but a key limitation of this approach is that it requires advance knowledge of the earthquake signals we wish to detect. To overcome this limitation, we can perform a blind search for events with similar waveforms, comparing waveforms from all possible times within the continuous data (Brown et al., 2008). However, the runtime for naive blind search scales quadratically with the duration of continuous data, making it impractical to process years of continuous data. The Fingerprint And Similarity Thresholding (FAST) detection method (Yoon et al., 2015) enables a comprehensive blind search for similar-waveform earthquakes in a fast, scalable manner by adapting data-mining techniques originally developed for audio and image search within massive databases. FAST converts seismic waveforms into compact "fingerprints", which are efficiently organized and searched within a database. In this way, FAST avoids the unnecessary comparison of dissimilar waveforms. To date, the longest duration of continuous data used for event detection with FAST was 3 months at a single station near Guy-Greenbrier, Arkansas, which revealed microearthquakes closely correlated with stages of hydraulic fracturing (Yoon et al., 2017). In this presentation we introduce an optimized, parallel version of the FAST software with improvements to the fingerprinting algorithm and the ability to detect events using continuous data from a network of stations (Bergen et al., 2016). We demonstrate its ability to detect low-magnitude earthquakes within several years of continuous data at locations of interest in California.
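
    A conceptual sketch of the database side of such a search (not the FAST implementation itself, which uses spectrogram-derived wavelet fingerprints and MinHash): binary fingerprints are hashed into several tables by sampling random bit positions, and only items that collide in at least one table are compared, avoiding the quadratic all-pairs scan.

      import numpy as np
      from collections import defaultdict

      def build_lsh_tables(fingerprints, n_tables=8, bits_per_table=16, seed=0):
          # fingerprints: (N, D) boolean array, one compact fingerprint per window.
          rng = np.random.default_rng(seed)
          n_bits = fingerprints.shape[1]
          tables = []
          for _ in range(n_tables):
              cols = rng.choice(n_bits, size=bits_per_table, replace=False)
              buckets = defaultdict(list)
              for idx, fp in enumerate(fingerprints):
                  buckets[fp[cols].tobytes()].append(idx)
              tables.append(buckets)
          return tables

      def candidate_pairs(tables):
          # Windows that share a bucket in any table are candidate similar events.
          pairs = set()
          for buckets in tables:
              for members in buckets.values():
                  pairs.update((i, j) for i in members for j in members if i < j)
          return pairs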

  3. Rapid earthquake detection through GPU-Based template matching

    NASA Astrophysics Data System (ADS)

    Mu, Dawei; Lee, En-Jui; Chen, Po

    2017-12-01

    The template-matching algorithm (TMA) has been widely adopted for improving the reliability of earthquake detection. The TMA is based on calculating the normalized cross-correlation coefficient (NCC) between a collection of selected template waveforms and the continuous waveform recordings of seismic instruments. In realistic applications, the computational cost of the TMA is much higher than that of traditional techniques. In this study, we provide an analysis of the TMA and show how the GPU architecture provides an almost ideal environment for accelerating the TMA and NCC-based pattern recognition algorithms in general. So far, our best-performing GPU code has achieved a speedup factor of more than 800 with respect to a common sequential CPU code. We demonstrate the performance of our GPU code using seismic waveform recordings from the ML 6.6 Meinong earthquake sequence in Taiwan.
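
    For reference, a minimal CPU version of the normalized cross-correlation that the GPU code accelerates (single template, single channel; a production code batches many templates, channels and time windows on the GPU):

      import numpy as np

      def normalized_cross_correlation(template, data):
          # Sliding NCC of a template against a continuous trace; output in [-1, 1].
          template = np.asarray(template, dtype=float)
          data = np.asarray(data, dtype=float)
          m = len(template)
          t = (template - template.mean()) / (template.std() * m)
          ncc = np.empty(len(data) - m + 1)
          for i in range(len(ncc)):
              window = data[i:i + m]
              sigma = window.std()
              ncc[i] = 0.0 if sigma == 0.0 else np.dot(t, (window - window.mean()) / sigma)
          return ncc

      # Detections are declared where the NCC exceeds a chosen threshold
      # (often expressed as a multiple of the median absolute deviation).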

  4. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    DTIC Science & Technology

    2008-09-01

    explosions from earthquakes, using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites... (Battone et al., 2002). For example, in Figure 1 we compare an earthquake and an explosion at each of four major test sites (rows), bandpass filtered... explosions as the frequency increases. Note also there are interesting differences between the test sites, indicating that emplacement conditions (depth

  5. The USGS Earthquake Notification Service (ENS): Customizable notifications of earthquakes around the globe

    USGS Publications Warehouse

    Wald, Lisa A.; Wald, David J.; Schwarz, Stan; Presgrave, Bruce; Earle, Paul S.; Martinez, Eric; Oppenheimer, David

    2008-01-01

    At the beginning of 2006, the U.S. Geological Survey (USGS) Earthquake Hazards Program (EHP) introduced a new automated Earthquake Notification Service (ENS) to take the place of the National Earthquake Information Center (NEIC) "Bigquake" system and the various other individual EHP e-mail list-servers for separate regions in the United States. These included northern California, southern California, and the central and eastern United States. ENS is a "one-stop shopping" system that allows Internet users to subscribe to flexible and customizable notifications for earthquakes anywhere in the world. The customization capability allows users to define the what (magnitude threshold), the when (day and night thresholds), and the where (specific regions) for their notifications. Customization is achieved by employing a per-user based request profile, allowing the notifications to be tailored for each individual's requirements. Such earthquake-parameter-specific custom delivery was not possible with simple e-mail list-servers. Now that event and user profiles are in a structured query language (SQL) database, additional flexibility is possible. At the time of this writing, ENS had more than 114,000 subscribers, with more than 200,000 separate user profiles. On a typical day, more than 188,000 messages get sent to a variety of widely distributed users for a wide range of earthquake locations and magnitudes. The purpose of this article is to describe how ENS works, highlight the features it offers, and summarize plans for future developments.

  6. Study of Earthquake Disaster Prediction System of Langfang city Based on GIS

    NASA Astrophysics Data System (ADS)

    Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei

    2017-07-01

    In this paper, in view of China's need to improve its capability for earthquake disaster prevention, we put forward an implementation plan for an earthquake disaster prediction system for Langfang city based on GIS. Built on a GIS spatial database and using coordinate transformation technology, GIS spatial analysis technology and PHP development technology, the system applies a seismic damage factor algorithm to predict the damage to the city under earthquake disasters of different intensities. The earthquake disaster prediction system of Langfang city is based on a B/S (browser/server) architecture. It provides estimates of damage degree and spatial distribution, two-dimensional visualization, comprehensive query and analysis, and efficient decision-support functions to identify seismically weak areas of the city and enable rapid warning. The system has realized the transformation of the city's earthquake disaster reduction work from static planning to dynamic management, and improved the city's earthquake and disaster prevention capability.

  7. Method for Location of An External Dump in Surface Mining Using the A-Star Algorithm

    NASA Astrophysics Data System (ADS)

    Zajączkowski, Maciej; Kasztelewicz, Zbigniew; Sikora, Mateusz

    2014-10-01

    The construction of a surface mine always involves the necessity of accessing deposits through the removal of the residual overburden above. In the beginning phase of exploitation, the masses of overburden are located outside the perimeters of the excavation site, on the external dump, until the moment of internal dumping. In the case of lignite surface mines, these dumps can cover a ground surface of several dozen to a few thousand hectares. This results from a high concentration of lignite extraction, counted in millions of Mg per year, and the relatively large depth of its residual deposits. Determining the best place for the location of an external dump requires a detailed analysis of existing options, followed by a choice of the most favorable one. This article, using the case study of an open-cast lignite mine, presents the selection method for an external dump location based on graph theory and the A-star algorithm. This algorithm, based on the spatial distribution of individual intersections on the graph, seeks specified graph states, continually expanding them with additional elementary fields until the required surface area for the external dump - defined by the lowest value of the occupied site - is achieved. To do this, it is necessary to accurately identify the factors affecting the choice of dump location. On such a basis, it is then possible to specify the target function, which reflects the individual costs of dump construction on a given site. This is discussed further in chapter 3. The area of potential dump location has been divided into elementary fields, each represented by a corresponding geometrical locus. Ascribed to this locus, in addition to its geodesic coordinates, are the appropriate attributes reflecting the degree of development of its elementary field. These tasks can be carried out automatically thanks to the integration of the method with the system of geospatial data management for the given area. The collection of loci, together
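
    One possible reading of the expansion step, sketched under stated assumptions (a regular grid of elementary fields with a single cost attribute per field; this is not the authors' implementation): starting from a seed field, the cheapest neighbouring field is absorbed at each step, best-first, until the footprint reaches the required number of fields.

      import heapq

      def grow_dump_footprint(field_cost, seed, fields_needed):
          # field_cost: dict {(row, col): cost of occupying that elementary field}.
          chosen, total_cost = set(), 0.0
          frontier = [(field_cost[seed], seed)]
          seen = {seed}
          while frontier and len(chosen) < fields_needed:
              cost, cell = heapq.heappop(frontier)
              chosen.add(cell)
              total_cost += cost
              row, col = cell
              for nb in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
                  if nb in field_cost and nb not in seen:
                      seen.add(nb)
                      heapq.heappush(frontier, (field_cost[nb], nb))
          return chosen, total_cost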

  8. Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.

    2013-12-01

    Earthquake forecasts have been computed by a variety of countries world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (JBR et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ('seismicity-based models') to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. This method computes large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space so that global forecasts can be computed in real time presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving up these datasets over the web on web-based platforms such as those at www.quakesim.org, www.e-decider.org, and www.openhazards.com.
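
    Schematically, the hazard in "natural time" can be written as a Weibull distribution in the count n of small earthquakes since the last large one; the shape and scale values below are placeholders, not the calibrated NTW parameters of the cited work.

      import math

      def large_event_probability(n_small, scale=1000.0, shape=1.4):
          # Weibull cumulative probability expressed in natural time (event count).
          return 1.0 - math.exp(-((n_small / scale) ** shape))

      # e.g. probability after 500 vs. 2000 small events since the last large one:
      print(large_event_probability(500), large_event_probability(2000))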

  9. Building Loss Estimation for Earthquake Insurance Pricing

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.

    2005-12-01

    After the 1999 earthquakes in Turkey several changes in the insurance sector took place. A compulsory earthquake insurance scheme was introduced by the government. The reinsurance companies increased their rates. Some even suspended operations in the market. And, most important, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an earthquake loss assessment methodology that can be used for insurance pricing and portfolio loss estimation and that is based on our working experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent earthquake hazard, regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey and estimations of building replacement costs for different damage levels. Probable maximum and average annualized losses are estimated as the result of the analysis. There is a two-level earthquake insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory earthquake insurance scheme and the private earthquake insurance system. To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent earthquake hazard model that portrays the increased earthquake expectancies in Istanbul is used. Intensity and spectral displacement based vulnerability relationships are incorporated in the analysis. In particular we look at the uncertainty in the loss estimations that arises from the vulnerability relationships, and at the effect of the implemented repair cost ratios.

  10. Seismic density and its relationship with strong historical earthquakes around Beijing, China

    NASA Astrophysics Data System (ADS)

    WANG, J.

    2012-12-01

    Beijing is the capital of China. Regional earthquake observation networks have been built around Beijing (115.0°-119.3°E, 38.5°-41.0°N) since 1966. From 1970 to 2009, a total of 20,281 earthquakes were recorded. The accumulation of these data raised a fundamental question: what are the characteristics and the physical nature of small earthquakes? In order to answer this question, we must use a quantitative method to deal with the seismic pattern. Here we introduce a new concept of seismic density. The method emphasizes the accuracy of the epicenter location, but no correction is made for the focal depth, because its uncertainty is in any case greater than that of the epicenter. On the basis of these instrumental data, seismic patterns were calculated. The results illustrate that seismic density is the main character of the seismic pattern. The temporal distribution of small earthquakes in each seismic density zone is analyzed quantitatively. According to the statistics, mainly two types of seismic density are distinguished. Besides the instrumental data, abundant information on historical earthquakes around Beijing is found in the archives, with a total of 15 strong historical earthquakes (M>=6). The earliest one occurred in September 294. After comparison, a very interesting phenomenon was noticed: the epicenters of strong historical earthquakes with high-accuracy locations correspond with one of the seismic density types, whose temporal distribution is almost stationary. This correspondence means that small earthquakes still cluster near the epicenters of historical earthquakes, even those that occurred several hundred years ago. The mechanics of the relationship are analyzed. Strong historical earthquakes and the seismic density of small earthquakes are consistent in each case, which together reveal the persistent weakness of the local crustal medium. We utilized this relationship to improve the strong historical earthquake locations

  11. Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  12. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake.

    PubMed

    Kato, Aitaro; Fukuda, Jun'ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-04-25

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied nucleation process for the 2014 Iquique, Chile Mw 8.2 megathrust earthquake located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase.

  13. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake

    NASA Astrophysics Data System (ADS)

    Kato, Aitaro; Fukuda, Jun'Ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-04-01

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied nucleation process for the 2014 Iquique, Chile Mw 8.2 megathrust earthquake located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase.

  14. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake

    PubMed Central

    Kato, Aitaro; Fukuda, Jun’ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-01-01

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied nucleation process for the 2014 Iquique, Chile Mw 8.2 megathrust earthquake located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase. PMID:27109362

  15. Rupture processes of the 2010 Canterbury earthquake and the 2011 Christchurch earthquake inferred from InSAR, strong motion and teleseismic datasets

    NASA Astrophysics Data System (ADS)

    Yun, S.; Koketsu, K.; Aoki, Y.

    2014-12-01

    The September 4, 2010, Canterbury earthquake with a moment magnitude (Mw) of 7.1 is a crustal earthquake in the South Island, New Zealand. The February 22, 2011, Christchurch earthquake (Mw=6.3) is the biggest aftershock of the 2010 Canterbury earthquake, located about 50 km to the east of the mainshock. Both earthquakes occurred on previously unrecognized faults. Field observations indicate that the rupture of the 2010 Canterbury earthquake reached the surface; the surface rupture, with a length of about 30 km, is located about 4 km south of the epicenter. Various data, including the aftershock distribution and strong motion seismograms, also suggest a very complex rupture process. For these reasons it is useful to investigate the complex rupture process using multiple datasets with various sensitivities to the rupture process. While previously published source models are based on one or two datasets, here we infer the rupture process with three datasets: InSAR, strong-motion, and teleseismic data. We first performed point source inversions to derive the focal mechanism of the 2010 Canterbury earthquake. Based on the focal mechanism, the aftershock distribution, the surface fault traces and the SAR interferograms, we assigned several source faults. We then performed a joint inversion to determine the rupture process of the 2010 Canterbury earthquake most suitable for reproducing all the datasets. The obtained slip distribution is in good agreement with the surface fault traces. We also performed similar inversions to reveal the rupture process of the 2011 Christchurch earthquake. Our result indicates a steep dip and large up-dip slip. This reveals that the observed large vertical ground motion around the source region is due to the rupture process, rather than the local subsurface structure. To investigate the effects of the 3-D velocity structure on the characteristic strong motion seismograms of the two earthquakes, we plan to perform the inversion taking 3-D velocity

  16. Catalog of Hawaiian earthquakes, 1823-1959

    USGS Publications Warehouse

    Klein, Fred W.; Wright, Thomas L.

    2000-01-01

    This catalog of more than 17,000 Hawaiian earthquakes (of magnitude greater than or equal to 5), principally located on the Island of Hawaii, from 1823 through the third quarter of 1959 is designed to expand our ability to evaluate seismic hazard in Hawaii, as well as our knowledge of Hawaiian seismic rhythms as they relate to eruption cycles at Kilauea and Mauna Loa volcanoes and to subcrustal earthquake patterns related to the tectonic evolution of the Hawaiian chain.

  17. Remotely Triggered Earthquakes Recorded by EarthScope's Transportable Array and Regional Seismic Networks: A Case Study Of Four Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Velasco, A. A.; Cerda, I.; Linville, L.; Kilb, D. L.; Pankow, K. L.

    2013-05-01

    Changes in field stress required to trigger earthquakes have been classified in two basic ways: static and dynamic triggering. Static triggering occurs when an earthquake that releases accumulated strain along a fault stress-loads a nearby fault. Dynamic triggering occurs when an earthquake is induced by the passing seismic waves from a large mainshock located at least two or more fault lengths away. We investigate details of dynamic triggering using data collected from EarthScope's USArray and regional seismic networks located in the United States. Triggered events are identified using an optimized automated detector based on the ratio of short-term to long-term average (Antelope software). Following the automated processing, the flagged waveforms are individually analyzed, in both the time and frequency domains, to determine if the increased detection rates correspond to local earthquakes (i.e., potentially remotely triggered aftershocks). Here, we show results using this automated scheme applied to data from four large, but characteristically different, earthquakes -- Chile (Mw 8.8, 2010), Tohoku-Oki (Mw 9.0, 2011), Baja California (Mw 7.2, 2010) and Wells, Nevada (Mw 6.0, 2008). For each of our four mainshocks, the number of detections within the 10-hour time windows spans a large range (1 to over 200), and statistically >20% of the waveforms show evidence of anomalous signals following the mainshock. The results will help provide for a better understanding of the physical mechanisms involved in dynamic earthquake triggering and will help identify zones in the continental U.S. that may be more susceptible to dynamic earthquake triggering.
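
    In the spirit of the detector described above (the study used Antelope's implementation; this simplified version and its parameters are only illustrative), the short-term to long-term average ratio can be computed with running means and thresholded to flag candidate triggered events:

      import numpy as np

      def sta_lta_ratio(trace, sampling_rate, sta_s=1.0, lta_s=30.0):
          # Ratio of short-term to long-term average of the signal energy.
          sta_n, lta_n = int(sta_s * sampling_rate), int(lta_s * sampling_rate)
          energy = np.asarray(trace, dtype=float) ** 2
          csum = np.cumsum(np.insert(energy, 0, 0.0))
          sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
          lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
          n = min(len(sta), len(lta))
          return sta[-n:] / (lta[:n] + 1e-12)   # aligned on common window ends

      # Samples where the ratio exceeds a threshold (e.g. 5-10) are flagged and
      # then reviewed in the time and frequency domains, as described above.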

  18. Toward standardization of slow earthquake catalog -Development of database website-

    NASA Astrophysics Data System (ADS)

    Kano, M.; Aso, N.; Annoura, S.; Arai, R.; Ito, Y.; Kamaya, N.; Maury, J.; Nakamura, M.; Nishimura, T.; Obana, K.; Sugioka, H.; Takagi, R.; Takahashi, T.; Takeo, A.; Yamashita, Y.; Matsuzawa, T.; Ide, S.; Obara, K.

    2017-12-01

    Slow earthquakes have now been widely discovered around the world thanks to the recent development of geodetic and seismic observations. Many researchers detect a wide frequency range of slow earthquakes, including low frequency tremors, low frequency earthquakes, very low frequency earthquakes and slow slip events, using various methods. Catalogs of the detected slow earthquakes are made available in different formats by each original paper or through a website (e.g., Wech 2010; Idehara et al. 2014). However, we need to download catalogs from different sources, deal with unformatted catalogs and understand the characteristics of different catalogs, which may be somewhat complex, especially for those who are not familiar with slow earthquakes. In order to standardize slow earthquake catalogs and to make such complicated work easier, the Scientific Research on Innovative Areas project "Science of Slow Earthquakes" has been developing a slow earthquake catalog website. On the website, we can plot locations of various slow earthquakes via Google Maps by compiling a variety of slow earthquake catalogs, including slow slip events. This enables us to clearly visualize spatial relations among slow earthquakes at a glance and to compare the regional activities of slow earthquakes or the locations of different catalogs. In addition, we can download catalogs in a unified format and refer to the information on each catalog on a single website. Such standardization will make it more convenient for users to utilize previous achievements and will promote research on slow earthquakes, which eventually leads to collaborations with researchers in various fields and further understanding of the mechanisms, environmental conditions, and underlying physics of slow earthquakes. Furthermore, we expect that the website will play a leading role in the international standardization of slow earthquake catalogs. We report the overview of the website and the progress of construction. Acknowledgment: This

  19. Autonomous Image Processing Algorithms Locate Region-of-Interests: The Mars Rover Application

    NASA Technical Reports Server (NTRS)

    Privitera, Claudio; Azzariti, Michela; Stark, Lawrence W.

    1998-01-01

    In this report, we demonstrate that bottom-up image-processing algorithms (IPAs) can perform a new visual task: selecting and locating Regions-Of-Interest (ROIs). This task has been defined on the basis of a theory of top-down human vision, the scanpath theory. Further, using the measures Sp and Ss, the similarity of location and of ordering, respectively, developed over the years in studying human perception and the active looking role of eye movements, we could quantify how efficiently and efficaciously IPAs can imitate human vision in locating ROIs. The means to quantitatively evaluate IPA performance has been an important part of our study. In fact, these measures were essential in choosing, from the initial wide variety of IPAs, the particular one that best serves a given type of picture and required task. It should be emphasized that the selection of efficient IPAs has depended upon their correlation with the ROIs actually chosen by humans for the same type of picture and the same required task.

  20. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    NASA Astrophysics Data System (ADS)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the location of microseismic events. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
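
    As a rough illustration of the optimization step, the sketch below minimizes an L2 misfit between observed and predicted P-wave arrival times over candidate source positions and origin times using a bare-bones particle swarm. The station layout, homogeneous velocity, and PSO parameters are placeholder assumptions for illustration, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monitoring geometry: station coordinates in km and a constant
# P velocity; a real workflow would use the calibrated velocity model.
stations = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [2, 2, 0], [1, 1, 0]], float)
VP = 3.0  # km/s, assumed

def predict(src):
    """P arrival times for a source (x, y, z, t0) in a homogeneous medium."""
    xyz, t0 = src[:3], src[3]
    return t0 + np.linalg.norm(stations - xyz, axis=1) / VP

def misfit(src, t_obs):
    return np.sum((predict(src) - t_obs) ** 2)

def pso_locate(t_obs, bounds, n_particles=40, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO over (x, y, z, t0) minimizing the arrival-time misfit."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, 4))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([misfit(p, t_obs) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([misfit(p, t_obs) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Synthetic check: recover a known source from noise-free arrivals.
true_src = np.array([1.2, 0.8, 1.5, 0.0])
bounds = (np.array([0, 0, 0, -1.0]), np.array([2, 2, 3, 1.0]))
print(pso_locate(predict(true_src), bounds))
```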

  1. Rupture processes of the 2013-2014 Minab earthquake sequence, Iran

    NASA Astrophysics Data System (ADS)

    Kintner, Jonas A.; Ammon, Charles J.; Cleveland, K. Michael; Herman, Matthew

    2018-06-01

    We constrain epicentroid locations, magnitudes and depths of moderate-magnitude earthquakes in the 2013-2014 Minab sequence using surface-wave cross-correlations, surface-wave spectra and teleseismic body-wave modelling. We estimate precise relative locations of 54 Mw ≥ 3.8 earthquakes using 48,409 teleseismic, intermediate-period Rayleigh and Love-wave cross-correlation measurements. To reduce significant regional biases in our relative locations, we shift the relative locations to align the Mw 6.2 main-shock centroid with a location derived from an independent InSAR fault model. Our relocations suggest that the events lie along a roughly east-west trend that is consistent with the faulting geometry in the GCMT catalogue. The results support previous studies that suggest the sequence consists of left-lateral strain release, but they better define the main-shock fault length and show that most of the Mw ≥ 5.0 aftershocks occurred on one or two similarly oriented structures. We also show that aftershock activity migrated westwards along strike, away from the main shock, suggesting that Coulomb stress transfer played a role in the fault failure. We estimate the magnitudes of the relocated events using surface-wave cross-correlation amplitudes and find good agreement with the GCMT moment magnitudes for the larger events and underestimation of small-event size by catalogue MS. In addition to clarifying details of the Minab sequence, the results demonstrate that, even in tectonically complex regions, relative relocation using teleseismic surface waves greatly improves the precision of relative earthquake epicentroid locations and can facilitate detailed tectonic analyses of remote earthquake sequences.

  2. TrigDB back-filling method in EEW for the regional earthquake for reducing false location of the deep focus earthquake event by considering neighborhood triggers and forced association.

    NASA Astrophysics Data System (ADS)

    Park, J. H.; Chi, H. C.; Lim, I. S.; Seong, Y. J.; Pak, J.

    2017-12-01

    During the first phase of the EEW (Earthquake Early Warning) service provided to the public by KMA (Korea Meteorological Administration) since 2015 in Korea, KIGAM (Korea Institute of Geoscience and Mineral Resources) adopted ElarmS2 of UC Berkeley BSL and modified the local magnitude relation, travel-time curves and association procedures, the so-called TrigDB back-filling method. The TrigDB back-filling method uses a database of station lists, sorted by epicentral distance, for pre-defined events located on a grid of 1,401 × 1,601 = 2,243,001 nodes around the Korean Peninsula at a spacing of 0.05 degrees. When the version of an event is updated, the TrigDB back-filling method is invoked. First, the grid node closest to the epicenter of the event is chosen from the database, and candidate stations, which are stations corresponding to the chosen node that are also adjacent to the already-associated stations, are selected. Second, the directions from the chosen node to the associated stations are averaged to represent the direction of wave propagation, which is used as a reference for computing apparent travel times. The apparent travel times for the associated stations are computed using a P-wave velocity of 5.5 km/s from the node to the projected points along the reference direction. The travel times for the triggered candidate stations are also computed and used to obtain the difference between the apparent travel times of the associated stations and the triggered candidates. Finally, if the difference in the apparent travel times is less than the difference in the arrival times, the method forces the triggered candidate station to be associated with the event and updates the event location. This method is useful for reducing false locations of events that can be generated by deep-focus (> 500 km) and regional-distance earthquakes occurring on the Pacific plate subduction boundaries. In a case-study comparison between the system with TrigDB back-filling applied and the others, we could get
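
    A schematic rendering of the forced-association test described above is sketched below; the geometry is flattened to local Cartesian kilometers, the "all candidates must pass" reading of the criterion is one possible interpretation of the abstract, and the inputs are placeholders rather than the operational KIGAM configuration.

```python
import numpy as np

VP = 5.5  # km/s, P velocity used for apparent travel times (from the abstract)

def forced_association(grid_xy, assoc_xy, assoc_t, cand_xy, cand_t):
    """Decide whether a triggered candidate station is forced into the event.

    grid_xy  : (2,) grid node closest to the current epicenter, km
    assoc_xy : (n, 2) positions of already-associated stations, km
    assoc_t  : (n,) their P arrival times, s
    cand_xy  : (2,) position of the triggered candidate station, km
    cand_t   : its trigger time, s
    """
    # Reference direction: average of unit vectors from the grid node to the
    # associated stations, taken as the direction of wave propagation.
    vecs = np.asarray(assoc_xy, float) - grid_xy
    units = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    ref = units.mean(axis=0)
    ref /= np.linalg.norm(ref)

    # Apparent travel time: station offset projected onto the reference
    # direction, divided by the assumed P velocity.
    t_app_assoc = (vecs @ ref) / VP
    t_app_cand = ((np.asarray(cand_xy, float) - grid_xy) @ ref) / VP

    # Force association if the apparent-travel-time differences are smaller
    # than the observed arrival-time differences (one reading of the criterion).
    dt_app = np.abs(t_app_cand - t_app_assoc)
    dt_obs = np.abs(cand_t - np.asarray(assoc_t, float))
    return bool(np.all(dt_app < dt_obs))
```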

  3. Triggering of repeating earthquakes in central California

    USGS Publications Warehouse

    Wu, Chunquan; Gomberg, Joan; Ben-Naim, Eli; Johnson, Paul

    2014-01-01

    Dynamic stresses carried by transient seismic waves have been found capable of triggering earthquakes instantly in various tectonic settings. Delayed triggering may be even more common, but the mechanisms are not well understood. Catalogs of repeating earthquakes, earthquakes that recur repeatedly at the same location, provide ideal data sets to test the effects of transient dynamic perturbations on the timing of earthquake occurrence. Here we employ a catalog of 165 families containing ~2500 total repeating earthquakes to test whether dynamic perturbations from local, regional, and teleseismic earthquakes change recurrence intervals. The distance to the earthquake generating the perturbing waves is a proxy for the relative potential contributions of static and dynamic deformations, because static deformations decay more rapidly with distance. Clear changes followed the nearby 2004 Mw6 Parkfield earthquake, so we study only repeaters prior to its origin time. We apply a Monte Carlo approach to compare the observed number of shortened recurrence intervals following dynamic perturbations with the distribution of this number estimated for randomized perturbation times. We examine the comparison for a series of dynamic stress peak amplitude and distance thresholds. The results suggest a weak correlation between dynamic perturbations in excess of ~20 kPa and shortened recurrence intervals, for both nearby and remote perturbations.
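
    The Monte Carlo comparison described above can be sketched as follows: count how many recurrence intervals shortened after the real perturbation times, then compare that count with the distribution obtained when the perturbation times are randomized. The interval bookkeeping below is deliberately simplified (an interval counts if it contains a perturbation and is shorter than its family median) and the inputs are synthetic, not the catalog used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def n_shortened(event_times_by_family, perturb_times):
    """Count recurrence intervals that both contain a perturbation time and
    are shorter than the family's median interval (a simplified criterion)."""
    perturb_times = np.asarray(perturb_times, float)
    count = 0
    for times in event_times_by_family:
        times = np.sort(np.asarray(times, float))
        intervals = np.diff(times)
        median = np.median(intervals)
        for start, dt in zip(times[:-1], intervals):
            hit = np.any((perturb_times > start) & (perturb_times < start + dt))
            if hit and dt < median:
                count += 1
    return count

def monte_carlo_pvalue(event_times_by_family, perturb_times, t0, t1, n_trials=1000):
    """Chance probability of observing at least as many shortened intervals,
    estimated by redrawing the perturbation times uniformly in [t0, t1]."""
    observed = n_shortened(event_times_by_family, perturb_times)
    null = np.array([
        n_shortened(event_times_by_family, rng.uniform(t0, t1, len(perturb_times)))
        for _ in range(n_trials)
    ])
    return observed, float(np.mean(null >= observed))
```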

  4. Mixed-Mode Slip Behavior of the Altotiberina Low-Angle Normal Fault System (Northern Apennines, Italy) through High-Resolution Earthquake Locations and Repeating Events

    NASA Astrophysics Data System (ADS)

    Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Monachesi, Giancarlo

    2017-12-01

    We generated a 4.5-year-long (2010-2014) high-resolution earthquake catalogue, composed of 37,000 events with ML < 3.9 and a completeness magnitude MC = 0.5, to report on the seismic activity of the Altotiberina (ATF) low-angle normal fault system and to shed light on the mechanical behavior and seismic potential of this fault, which is capable of generating a M7 event. Seismicity defines the geometry of the fault system, composed of the low-angle (15°-20°) ATF, extending for 50 km along strike and between 4 and 16 km in depth and showing a 1.5-km-thick fault zone made of multiple subparallel slipping planes, and a complex network of synthetic/antithetic higher-angle segments located in the ATF hanging wall (HW) that can be traced along strike for up to 35 km. Ninety percent of the recorded seismicity occurs along the high-angle HW faults during a series of minor, sometimes long-lasting (months) seismic sequences with multiple MW3+ mainshocks. The remaining earthquakes (ML < 2.4) are released instead along the low-angle ATF at a constant rate of 2.2 events per day. Within the ATF-related seismicity, we found 97 clusters of repeating earthquakes (RE), mostly consisting of doublets occurring within short interevent times (hours). RE are located within the geodetically recognized creeping portions of the ATF, around the main locked asperity. The rate of occurrence of RE seems quite synchronous with the ATF-HW seismic release, suggesting that creep may guide the strain partitioning in the ATF system. The seismic moment released by the ATF seismicity accounts for 30% of the geodetic one, implying aseismic deformation. The ATF seismicity pattern is thus consistent with a mixed-mode (seismic and aseismic) slip behavior.

  5. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the “real world”, and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use these to calculate errors in earthquake location and velocity inversion results when we perturb the models and try to invert to recover them. We can create as many stations as desired and a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. “Real” travel times are perturbed with noise and hypocenters are perturbed to replicate a starting location away from the “true” location, and the inversion is performed by each program. We compute travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes; this, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of computers needed for the large arrays in the inversion and a

  6. New study on the 1941 Gloria Fault earthquake and tsunami

    NASA Astrophysics Data System (ADS)

    Baptista, Maria Ana; Miranda, Jorge Miguel; Batlló, Josep; Lisboa, Filipe; Luis, Joaquim; Maciá, Ramon

    2016-08-01

    The M ~ 8.3-8.4 earthquake of 25 November 1941 was one of the largest submarine strike-slip earthquakes ever recorded in the Northeast (NE) Atlantic basin. This event occurred along the Eurasia-Nubia plate boundary between the Azores and the Strait of Gibraltar. After the earthquake, the tide stations in the NE Atlantic recorded a small tsunami with maximum amplitudes of 40 cm peak to trough in the Azores and Madeira islands. In this study, we present a re-evaluation of the earthquake epicentre location using seismological data not included in previous studies. We invert the tsunami travel times to obtain a preliminary tsunami source location using the backward ray tracing (BRT) technique. We invert the tsunami waveforms to infer the initial sea surface displacement using empirical Green's functions, without prior assumptions about the geometry of the source. The results of the BRT simulation locate the tsunami source quite close to the new epicentre. This fact suggests that the co-seismic deformation of the earthquake induced the tsunami. The waveform inversion of tsunami data favours the conclusion that the earthquake ruptured an approximately 160 km segment of the plate boundary, in the eastern section of the Gloria Fault between -20.249° and -18.630° E. The results presented here contribute to the evaluation of tsunami hazard in the Northeast Atlantic basin.

  7. The 29 July 2014 (Mw 6.4) Southern Veracruz, Mexico Earthquake: Scenary Previous to Its Occurrence.

    NASA Astrophysics Data System (ADS)

    Yamamoto, J.

    2014-12-01

    On 29 July 2014 (10:46 UTC) a magnitude 6.4 (Mw) earthquake occurred in the southern Veracruz, Mexico region. The epicenter was preliminarily located at 17.70° N and 95.63° W. It was a normal-fault event, with slip on a fault that trends NNW and a focus approximately 117 km below the surface of the Gulf of Mexico coastal plain. The earthquake was widely felt throughout central and southern Mexico. In Oaxaca City, 133 km to the south, a person died of a heart attack. No damage was reported. Most of the prominent moderate-sized earthquakes occurring in the southern Veracruz region since 1959 have been concentrated along two well-defined seismic belts. One belt runs offshore, nearly following the contour of the coast. Here the earthquakes are shallow and mostly show reverse-fault mechanisms. This belt of seismicity begins at the Los Tuxtlas volcanic field. Another seismic belt is located inland, 70 km to the west. Here most earthquakes have intermediate-depth (108-154 km) foci and normal-faulting mechanisms. The July 2014 earthquake is located near this second seismic belt. In the present paper we discuss, within the regional geotectonic framework, the location and some aspects of the rupture process of the July 2014 earthquake.

  8. Evaluating spatial and temporal relationships between an earthquake cluster near Entiat, central Washington, and the large December 1872 Entiat earthquake

    USGS Publications Warehouse

    Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian

    2017-01-01

    We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long‐lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long‐lived aftershock sequence can account for these observations.

  9. Earthquake sources near Uturuncu Volcano

    NASA Astrophysics Data System (ADS)

    Keyson, L.; West, M. E.

    2013-12-01

    Uturuncu, located in southern Bolivia near the border with Chile and Argentina, is a dacitic volcano that was last active 270 ka. It is part of the Altiplano-Puna Volcanic Complex, which spans 50,000 km2 and comprises a series of ignimbrite flare-ups since ~23 Ma. Two sets of evidence suggest that the region is underlain by a significant magma body. First, seismic velocities show a low-velocity layer consistent with a magmatic sill below depths of 15-20 km. This inference is corroborated by high electrical conductivity between 10 km and 30 km. This magma body, the so-called Altiplano-Puna Magma Body (APMB), is the likely source of volcanic activity in the region. InSAR studies show that during the 1990s the volcano experienced an average uplift of about 1 to 2 cm per year. The deformation is consistent with an expanding source at depth. Though the Uturuncu region exhibits high rates of crustal seismicity, any connection between the inflation and the seismicity is unclear. We investigate the root causes of these earthquakes using a temporary network of 33 seismic stations - part of the PLUTONS project. Our primary approach is based on hypocenter locations and magnitudes paired with correlation-based relative relocation techniques. We find a strong tendency toward earthquake swarms that cluster in space and time. These swarms often last a few days and consist of numerous earthquakes with similar source mechanisms. Most seismicity occurs in the top 10 kilometers of the crust and is characterized by well-defined phase arrivals and significant high-frequency content. The frequency-magnitude relationship of this seismicity demonstrates b-values consistent with tectonic sources. There is a strong clustering of earthquakes around the Uturuncu edifice. Earthquakes elsewhere in the region align in bands striking northwest-southeast, consistent with regional stresses.

  10. Differential energy radiation from two earthquakes in Japan with identical Mw: The Kyushu 1996 and Tottori 2000 earthquakes

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2009-01-01

    We examine two closely located earthquakes in Japan that had identical moment magnitudes Mw but significantly different energy magnitudes Me. We use teleseismic data from the Global Seismograph Network and strong-motion data from the National Research Institute for Earth Science and Disaster Prevention's K-Net to analyze the 19 October 1996 Kyushu earthquake (Mw 6.7, Me 6.6) and the 6 October 2000 Tottori earthquake (Mw 6.7, Me 7.4). To obtain regional estimates of radiated energy ES we apply a spectral technique to regional (<200 km) waveforms that are dominated by S and Lg waves. For the thrust-fault Kyushu earthquake, we estimate an average regional attenuation Q(f) = 230f^0.65. For the strike-slip Tottori earthquake, the average regional attenuation is Q(f) = 180f^0.6. These attenuation functions are similar to those derived from studies of both California and Japan earthquakes. The regional estimate of ES for the Kyushu earthquake, 3.8 × 10^14 J, is significantly smaller than that for the Tottori earthquake, ES = 1.3 × 10^15 J. These estimates correspond well with the teleseismic estimates of 3.9 × 10^14 J and 1.8 × 10^15 J, respectively. The apparent stress (τa = μES/M0, with μ equal to rigidity) for the Kyushu earthquake is 4 times smaller than the apparent stress for the Tottori earthquake. In terms of the fault maturity model, the significantly greater release of energy by the strike-slip Tottori earthquake can be related to strong deformation in an immature intraplate setting. The relatively lower energy release of the thrust-fault Kyushu earthquake can be related to rupture on mature faults in a subduction environment. The consistency between teleseismic and regional estimates of ES is particularly significant, as teleseismic data for computing ES are routinely available for all large earthquakes whereas near-field data often are not.
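
    For reference, the contrast being described can be reproduced approximately from the numbers quoted above. The sketch uses the standard Hanks-Kanamori moment relation, a commonly used energy-magnitude relation, and a nominal crustal rigidity; these constants are assumptions and may differ from those adopted in the paper, so the output illustrates the relative contrast between the two events rather than reproducing the published values exactly.

```python
import math

MU = 3.0e10  # Pa, nominal crustal rigidity (assumed)

def moment_from_mw(mw):
    """Scalar seismic moment in N·m from moment magnitude (Hanks & Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def apparent_stress(es, m0, mu=MU):
    """Apparent stress tau_a = mu * ES / M0, in Pa."""
    return mu * es / m0

def energy_magnitude(es):
    """A commonly used energy-magnitude relation, Me = (2/3) log10(ES) - 2.9,
    with ES in joules (the paper's exact constants may differ)."""
    return (2.0 / 3.0) * math.log10(es) - 2.9

m0 = moment_from_mw(6.7)  # both events have Mw 6.7
for name, es in [("Kyushu 1996", 3.8e14), ("Tottori 2000", 1.3e15)]:
    tau_mpa = apparent_stress(es, m0) / 1e6
    print(f"{name}: tau_a ~ {tau_mpa:.2f} MPa, Me ~ {energy_magnitude(es):.1f}")
```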

  11. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
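
    The kind of smoothed seismicity forecast being tested can be illustrated with a Gaussian kernel applied to past epicenters; the default smoothing distance below echoes the 10–20 km range quoted above, while the grid, projection, and catalog are placeholders.

```python
import numpy as np

def smoothed_rate_forecast(quake_xy, grid_xy, sigma_km=15.0):
    """Expected-event-count map from Gaussian-kernel smoothing of epicenters.

    quake_xy : (n, 2) epicenters of the learning-period earthquakes, km
               (local projection assumed)
    grid_xy  : (m, 2) centers of the forecast cells, km
    sigma_km : smoothing distance; 10-20 km was optimal per the abstract
    """
    # Squared distances between every grid cell and every past epicenter.
    d2 = ((grid_xy[:, None, :] - quake_xy[None, :, :]) ** 2).sum(axis=2)
    kernel = np.exp(-d2 / (2.0 * sigma_km ** 2))
    # Normalize each earthquake's kernel so it contributes one expected event.
    kernel /= np.maximum(kernel.sum(axis=0, keepdims=True), 1e-300)
    return kernel.sum(axis=1)  # expected number of events per forecast cell
```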

  12. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of

  13. The Earthquake Early Warning System In Southern Italy: Performance Tests And Next Developments

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Elia, L.; Martino, C.; Colombelli, S.; Emolo, A.; Festa, G.; Iannaccone, G.

    2011-12-01

    PRESTo (PRobabilistic and Evolutionary early warning SysTem) is the software platform for Earthquake Early Warning (EEW) in Southern Italy that integrates recent algorithms for real-time earthquake location, magnitude estimation and damage assessment into a highly configurable and easily portable package. The system is under active experimentation based on the Irpinia Seismic Network (ISNet). PRESTo processes the live streams of 3C acceleration data for P-wave arrival detection and, while an event is occurring, promptly performs event detection and provides location, magnitude estimations and peak ground shaking predictions at target sites. The earthquake location is obtained by an evolutionary, real-time probabilistic approach based on an equal differential time formulation. At each time step, it uses information from both triggered and not-yet-triggered stations. Magnitude estimation exploits an empirical relationship that correlates magnitude to the filtered peak displacement (Pd), measured over the first 2-4 s of the P signal. Peak ground-motion parameters at any distance can then be estimated by ground motion prediction equations. Alarm messages containing the updated estimates of these parameters can thus reach target sites before the destructive waves, enabling automatic safety procedures. Using the real-time data streaming from the ISNet network, PRESTo has produced a bulletin for about a hundred low-magnitude events that occurred during the last two years. Meanwhile, the performance of the EEW system was assessed off-line by playing back records of moderate and large events from Italy, Spain and Japan, and synthetic waveforms for large historical events in Italy. These tests have shown that, when a dense seismic network is deployed in the fault area, PRESTo produces reliable estimates of earthquake location and size within 5-6 s from the event origin time (To). Estimates are provided as probability density functions whose uncertainty typically decreases with time
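
    The Pd-based magnitude estimate referred to above typically takes the form of a linear relation between log Pd, magnitude, and log hypocentral distance; the sketch below inverts such a relation for magnitude. The coefficients and units are placeholders, since the calibrated PRESTo/ISNet values are not given in the abstract.

```python
import math

def magnitude_from_pd(pd, dist_km, a=1.0, b=1.4, c=5.9):
    """Invert a generic Pd scaling relation of the form
        log10(Pd) = a*M - b*log10(R) - c
    for magnitude M, given the filtered peak displacement Pd measured in the
    first seconds of the P signal and the hypocentral distance R in km.
    The coefficients a, b, c (and the units of Pd) are illustrative
    placeholders, not the calibrated early-warning relation.
    """
    return (math.log10(pd) + b * math.log10(dist_km) + c) / a
```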

  14. Feasibility of Twitter Based Earthquake Characterization From Analysis of 32 Million Tweets: There's Got to be a Pony in Here Somewhere!

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M. R.; Smoczyk, G. M.; Horvath, S. R.; Jessica, T. S.; Bausch, D. B.

    2014-12-01

    The U.S. Geological Survey (USGS) operates a real-time system that detects earthquakes using only data from Twitter—a service for sending and reading public text-based messages of up to 140 characters. The detector algorithm scans for significant increases in tweets containing the word "earthquake" in several languages and sends internal alerts with the detection time, representative tweet texts, and the location of the population center where most of the tweets originated. It has been running in real-time for over two years and finds, on average, two or three felt events per day, with a false detection rate of 9%. The main benefit of the tweet-based detections is speed, with most detections occurring between 20 and 120 seconds after the earthquake origin time. This is considerably faster than seismic detections in poorly instrumented regions of the world. The detections have reasonable coverage of populated areas globally. The number of Twitter-based detections is small compared to the number of earthquakes detected seismically, and only a rough location and qualitative assessment of shaking can be determined based on Tweet data alone. However, the Twitter-based detections are generally caused by widely felt events in populated urban areas that are of more immediate interest than those with no human impact. We will present a technical overview of the system and investigate the potential for rapid characterization of earthquake damage and effects using the 32 million "earthquake" tweets that the system has so far amassed. Initial results show potential for a correlation between characteristic responses and shaking level. For example, tweets containing the word "terremoto" were common following the MMI VII shaking produced by the April 1, 2014 M8.2 Iquique, Chile earthquake whereas a widely-tweeted deep-focus M5.2 north of Santiago, Chile on April 4, 2014 produced MMI VI shaking and almost exclusively "temblor" tweets. We are also investigating the use of other
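
    A toy version of the keyword-rate detector described above: count "earthquake"-keyword tweets in a short sliding window and flag a detection when that count jumps well above the recent background rate. The window lengths, factor, and minimum count are illustrative choices, not the USGS operational settings.

```python
from collections import deque

def tweet_rate_detector(tweet_times, short_win=60.0, long_win=3600.0,
                        factor=10.0, min_count=20):
    """Yield timestamps at which the short-window tweet count exceeds
    `factor` times the background expected from the long window.

    tweet_times : iterable of tweet timestamps in seconds, ascending order
    """
    short_q, long_q = deque(), deque()
    for t in tweet_times:
        short_q.append(t)
        long_q.append(t)
        # Drop tweets that have fallen out of each window.
        while short_q and short_q[0] < t - short_win:
            short_q.popleft()
        while long_q and long_q[0] < t - long_win:
            long_q.popleft()
        # Expected short-window count if tweets arrived at the background rate.
        background = len(long_q) * (short_win / long_win)
        if len(short_q) >= min_count and len(short_q) > factor * max(background, 1.0):
            yield t
```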

  15. The Seismicity of the Central Apennines Region Studied by Means of a Physics-Based Earthquake Simulator

    NASA Astrophysics Data System (ADS)

    Console, R.; Vannoli, P.; Carluccio, R.

    2016-12-01

    The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky, and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated on one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm provides typical features in time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a

  16. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    NASA Astrophysics Data System (ADS)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at CSN; all of these algorithms run automatically and are triggered by the W-phase point-source inversion. Fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each of these we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.

  17. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    PubMed

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique in regards to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated through the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
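
    A minimal sketch of the Taylor-series TDOA step mentioned above: starting from an initial guess, iteratively linearize the range-difference equations and solve the least-squares update. The 2-D geometry, reference-anchor convention, and convergence settings are assumptions for illustration; the Kalman and moving-weighted-average smoothing stages are omitted.

```python
import numpy as np

def tdoa_taylor_locate(anchors, range_diffs, x0, n_iter=20, tol=1e-6):
    """Iterative (Taylor-series) least-squares solution of 2-D TDOA equations.

    anchors     : (n, 2) anchor positions in meters; anchors[0] is the reference
    range_diffs : (n-1,) measured range differences r_i - r_0 in meters
                  (TDOA values multiplied by the propagation speed)
    x0          : (2,) initial position guess in meters
    """
    anchors = np.asarray(anchors, float)
    range_diffs = np.asarray(range_diffs, float)
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        r = np.linalg.norm(anchors - x, axis=1)
        predicted = r[1:] - r[0]
        residual = range_diffs - predicted
        # Jacobian of the predicted range differences w.r.t. position:
        # d||x - a_i||/dx = (x - a_i)/||x - a_i||, differenced against anchor 0.
        unit = (x - anchors) / r[:, None]
        jac = unit[1:] - unit[0]
        dx, *_ = np.linalg.lstsq(jac, residual, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```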

  18. Earthquake alarm; operating the seismograph station at the University of California, Berkeley.

    USGS Publications Warehouse

    Stump, B.

    1980-01-01

    At the University of California seismographic stations, the task of locating and determining magnitudes for both local and distant earthquakes is a continuous one. Teleseisms must be located rapidly so that events that occur in the Pacific can be identified and the Pacific Tsunami Warning System alerted. For great earthquakes anywhere, there is a responsibility to notify public agencies such as the California Office of Emergency Services, the Federal Disaster Assistance Administration, the Earthquake Engineering Research Institute, the California Seismic Safety Commission, and the American Red Cross. In the case of damaging local earthquakes, it is also necessary to alert the California Department of Water Resources, California Division of Mines and Geology, U.S. Army Corps of Engineers, Federal Bureau of Reclamation, and the Bay Area Rapid Transit. These days, any earthquakes that are felt in northern California cause immediate inquiries from the news media and an interested public. The series of earthquakes that jolted the Livermore area from January 24 to 26, 1980, is a good case in point.

  19. Earthquake Early Warning ShakeAlert System: Testing and certification platform

    USGS Publications Warehouse

    Cochran, Elizabeth S.; Kohler, Monica D.; Given, Douglas; Guiwits, Stephen; Andrews, Jennifer; Meier, Men-Andrin; Ahmad, Mohammad; Henson, Ivan; Hartog, Renate; Smith, Deborah

    2017-01-01

    Earthquake early warning systems provide warnings to end users of incoming moderate to strong ground shaking from earthquakes. An earthquake early warning system, ShakeAlert, is providing alerts to beta end users in the western United States, specifically California, Oregon, and Washington. An essential aspect of the earthquake early warning system is the development of a framework to test modifications to code to ensure functionality and assess performance. In 2016, a Testing and Certification Platform (TCP) was included in the development of the Production Prototype version of ShakeAlert. The purpose of the TCP is to evaluate the robustness of candidate code that is proposed for deployment on ShakeAlert Production Prototype servers. TCP consists of two main components: a real‐time in situ test that replicates the real‐time production system and an offline playback system to replay test suites. The real‐time tests of system performance assess code optimization and stability. The offline tests comprise a stress test of candidate code to assess if the code is production ready. The test suite includes over 120 events including local, regional, and teleseismic historic earthquakes, recentering and calibration events, and other anomalous and potentially problematic signals. Two assessments of alert performance are conducted. First, point‐source assessments are undertaken to compare magnitude, epicentral location, and origin time with the Advanced National Seismic System Comprehensive Catalog, as well as to evaluate alert latency. Second, we describe assessment of the quality of ground‐motion predictions at end‐user sites by comparing predicted shaking intensities to ShakeMaps for historic events and implement a threshold‐based approach that assesses how often end users initiate the appropriate action, based on their ground‐shaking threshold. TCP has been developed to be a convenient streamlined procedure for objectively testing algorithms, and it has

  20. Earthquake-related versus non-earthquake-related injuries in spinal injury patients: differentiation with multidetector computed tomography

    PubMed Central

    2010-01-01

    Introduction In recent years, several massive earthquakes have occurred across the globe. Multidetector computed tomography (MDCT) is reliable in detecting spinal injuries. The purpose of this study was to compare the features of spinal injuries resulting from the Sichuan earthquake with those of non-earthquake-related spinal trauma using MDCT. Methods Features of spinal injuries of 223 Sichuan earthquake-exposed patients and 223 non-earthquake-related spinal injury patients were retrospectively compared using MDCT. The data for non-earthquake-related spinal injury patients were collected from 1 May 2009 to 22 July 2009 to avoid the confounding effects of seasonal activity and clothing. We focused on anatomic sites, injury types and neurologic deficits related to spinal injuries. Major injuries were classified according to the grid 3-3-3 scheme of the Magerl (AO) classification system. Results A total of 185 patients (82.96%) in the earthquake-exposed cohort experienced crush injuries. In the earthquake and control groups, 65 and 92 patients, respectively, had neurologic deficits. The anatomic distribution of these two cohorts was significantly different (P < 0.001). Cervical spinal injuries were more common in the control group (risk ratio (RR) = 2.12, P < 0.001), whereas lumbar spinal injuries were more common in the earthquake-related spinal injuries group (277 of 501 injured vertebrae; 55.29%). The major types of injuries were significantly different between these cohorts (P = 0.002). Magerl AO type A lesions composed most of the lesions seen in both of these cohorts. Type B lesions were more frequently seen in earthquake-related spinal injuries (RR = 1.27), while we observed type C lesions more frequently in subjects with non-earthquake-related spinal injuries (RR = 1.98, P = 0.0029). Conclusions Spinal injuries sustained in the Sichuan earthquake were located mainly in the lumbar spine, with a peak prevalence of type A lesions and a high occurrence of

  1. Pre-earthquake Magnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J. A.; Freund, F. T.

    2015-12-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.

  2. Rupture geometry and slip distribution of the 2016 January 21st Ms6.4 Menyuan, China earthquake inferred from Sentinel-1A InSAR measurements

    NASA Astrophysics Data System (ADS)

    Zhou, Y.

    2016-12-01

    On 21 January 2016, an Ms 6.4 earthquake struck Menyuan County, Qinghai Province, China. The epicenter of the main shock and the locations of its aftershocks indicate that the Menyuan earthquake occurred near the left-lateral Lenglongling fault. However, the focal mechanism suggests that the earthquake took place on a thrust fault. In addition, field investigation indicates that the earthquake did not rupture the ground surface. Therefore, the rupture geometry, as well as the coseismic slip distribution, is unclear. We processed two pairs of InSAR images acquired by the ESA Sentinel-1A satellite with the ISCE software, including both ascending and descending orbits. After subsampling the coseismic InSAR images to about 800 pixels, the coseismic displacement data along the LOS direction are inverted for earthquake source parameters. We employ an improved mixed linear-nonlinear Bayesian inversion method to infer fault geometric parameters, slip distribution, and the Laplacian smoothing factor simultaneously. This method incorporates a hybrid differential evolution algorithm, which is an efficient global optimization algorithm. The inversion results show that the Menyuan earthquake ruptured a blind thrust fault with a strike of 124° and a dip angle of 41°. This blind fault had not been investigated before; it intersects the left-lateral Lenglongling fault, although their strikes are nearly parallel. The slip sense is almost pure thrusting, and there is no significant slip within 4 km depth. The maximum slip value is up to 0.3 m, and the estimated moment magnitude is Mw 5.93, in agreement with the seismic inversion result. The standard error of the residuals between the InSAR data and the model prediction is as small as 0.5 cm, verifying the correctness of the inversion results.

  3. Detecting Earthquakes--Part 2.

    ERIC Educational Resources Information Center

    Isenberg, C.; And Others

    1983-01-01

    Basic concepts associated with seismic wave propagation through the earth and the location of seismic events were explained in part 1 (appeared in January 1983 issue). This part focuses on the construction of a student seismometer for detecting earthquakes and underground nuclear explosions anywhere on the earth's surface. (Author/JN)

  4. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point-source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point-source model may underestimate the ground motion at a site if the station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong-motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance of less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The information on the fault rupture location is a set of points where the stations are located; however, for practical purposes, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize 2-dimensional fault geometries and apply it to strong-motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the
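
    The near-source probability given above translates directly into code; the sketch below evaluates the published discriminant. The input units (cm/s² for Za, cm/s for Hv) are not stated in the abstract and are an assumption here, as are the example amplitudes.

```python
import math

def near_source_probability(za, hv):
    """Probability that a station lies in the near-source region (fault
    rupture distance < 10 km), from the logistic discriminant quoted above.

    za : peak vertical acceleration (assumed to be in cm/s^2)
    hv : peak horizontal velocity (assumed to be in cm/s)
    """
    f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
    return 1.0 / (1.0 + math.exp(-f))

# Example (hypothetical amplitudes): a strong record is classified as
# near-source with high probability.
print(near_source_probability(za=500.0, hv=50.0))
```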

  5. Estimation of vulnerability functions based on a global earthquake damage database

    NASA Astrophysics Data System (ADS)

    Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.

    2009-04-01

    Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses in insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the current building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is performance of the building stock in past earthquakes, using damage databases, and comparison with consistent estimates of ground motion. This paper will present a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more-recently published and some unpublished data. The database assembles in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys which have been carried out since the 1960's. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels, location of each survey. It is mounted on a GIS mapping system and links to the USGS Shakemaps of each earthquake which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include: · Basic earthquake data and its sources · Details of the survey location and intensity and other ground motion observations or assignments at that location · Building and damage level classification, and tabulated damage survey results · Photos showing typical examples of damage. In future planned extensions of the database information on human

  6. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
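
    The likelihood tests outlined above are commonly written as a joint log-likelihood of the observed bin counts under independent Poisson rates; a minimal sketch under that assumption is given below (the companion implementation paper defines the exact RELM test statistics, which this does not reproduce).

```python
import numpy as np
from math import lgamma

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed bin counts under a forecast of
    expected earthquake rates per bin, assuming independent Poisson bins:
        log L = sum_i [ n_i * log(lambda_i) - lambda_i - log(n_i!) ]
    """
    lam = np.maximum(np.asarray(forecast_rates, float), 1e-300)
    n = np.asarray(observed_counts, float)
    log_fact = np.array([lgamma(k + 1.0) for k in n])
    return float(np.sum(n * np.log(lam) - lam - log_fact))

# Comparing two forecasts defined over the same bins: the model with the
# higher joint log-likelihood is preferred; pairwise likelihood-ratio tests
# (such as those described above) build on this quantity.
```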

  7. Anomalous variations of lithosphere magnetic field before several earthquakes

    NASA Astrophysics Data System (ADS)

    Ni, Z.; Chen, B.

    2015-12-01

    Based on geomagnetic vector data measured each year since 2011 at more than 500 sites with a mean spatial interval of ~70 km, we observed anomalous variations of the lithospheric magnetic field before and after more than 15 earthquakes with magnitude > 5. We find that the field in near proximity (about 50 km) to the epicenter of large earthquakes shows high spatial and temporal gradients before the earthquake. Due to the low frequency of repeat measurements, it is unclear when these variations occurred and how they evolve. We flag anomalous magnetic field regions using circles with a radius of 50 km, usually in June of each year, and then check whether an earthquake locates within our circles during the following year (June to the next June). So far we have caught 10 of 15 main shocks with magnitude > 5; most of them were located less than 10 km from our circles and some were inside them. Most results show that the variations of the lithospheric magnetic field at the epicenter usually differ from the surrounding background. When we map the horizontal (vector) variations of the lithospheric magnetic field together with the epicenters during the year after each June, about half of the cases show that the earthquakes locate at "islands in a flowing river", meaning that earthquakes may occur in "quiet" regions while the background behaves like a "flowing" liquid. Comparison with GPS results suggests that these variations of the lithospheric magnetic field may also correlate with displacement of the Earth's surface. However, since we have not compared the GPS results for each earthquake, it is not yet clear whether the anomalous magnetic variations correlate with anomalous surface displacement. Future work will include developing an automated method for identifying this type of anomalous field behavior and shortening the repeat measurement period to 6 months to try to determine when these variations occur.

  8. Monitoring of soil radon by SSNTD in Eastern India in search of possible earthquake precursor.

    PubMed

    Deb, Argha; Gazi, Mahasin; Ghosh, Jayita; Chowdhury, Saheli; Barman, Chiranjib

    2018-04-01

    The present paper deals with monitoring soil radon-222 concentration at two different locations, designated Site A and Site B, 200 m apart at Jadavpur University campus, Kolkata, India, with a view to find possible precursors for the earthquakes that occurred within a few hundred kilometers from the monitoring site. The solid state nuclear track detector CR-39 has been used for detection of radon gas coming out from soil. Radon-222 time series at both locations during the period August 2012-December 2013 have been analysed. Distinct anomalies in the soil radon time series have been observed for seven earthquakes of magnitude greater than 4.0 M that occurred during this time. Of these, radon anomalies for two earthquakes have been observed at both locations A and B. Absence of anomalies for some other earthquakes has been discussed, and the observations have been compared with some earthquake precursor models. Copyright © 2018. Published by Elsevier Ltd.

  9. Earthquakes of the Central United States, 1795-2002

    USGS Publications Warehouse

    Wheeler, Russell L.

    2003-01-01

    This report describes construction of a list of Central U.S. earthquakes to be shown on a large-format map that is targeted for a non-technical audience. The map shows the locations and sizes of historical earthquakes of magnitude 3.0 or larger over the most seismically active part of the central U.S., including the New Madrid seismic zone. The map shows more than one-half million square kilometers and parts or all of ten States. No existing earthquake catalog had provided current, uniform coverage down to magnitude 3.0, so one had to be made. Consultation with State geological surveys insured compatibility with earthquake lists maintained by them, thereby allowing the surveys and the map to present consistent information to the public.

  10. Earthquake Damping Device for Steel Frame

    NASA Astrophysics Data System (ADS)

    Zamri Ramli, Mohd; Delfy, Dezoura; Adnan, Azlan; Torman, Zaida

    2018-04-01

    Structures such as buildings, bridges and towers are prone to collapse when natural phenomena like earthquakes occur. Therefore, many design codes have been reviewed and new technologies introduced to resist earthquake energy, especially in buildings, to avoid collapse. The tuned mass damper is one of the earthquake-reduction devices installed on structures to minimise earthquake effects. This study aims to analyse the effectiveness of a tuned mass damper through experimental work and finite element modelling. Comparisons are made between these two models under harmonic excitation. Based on the results, installing a tuned mass damper reduces the dynamic response of the frame, but only at several input frequencies. At the highest input frequency applied, the tuned mass damper failed to reduce the responses. In conclusion, in order to use a proper damper design, detailed analysis must be carried out to ensure the design is adequate for the location of the structure and its specific ground accelerations.

  11. Earthquake Risk Reduction to Istanbul Natural Gas Distribution Network

    NASA Astrophysics Data System (ADS)

    Zulfikar, Can; Kariptas, Cagatay; Biyikoglu, Hikmet; Ozarpa, Cevat

    2017-04-01

    Istanbul Natural Gas Distribution Corporation (IGDAS) is one of the end users of the Istanbul Earthquake Early Warning (EEW) signal. IGDAS, the primary natural gas provider in Istanbul, operates an extensive system of 9,867 km of gas lines with 750 district regulators and 474,000 service boxes. The natural gas reaches the Istanbul city borders at 70 bar in a 30-inch-diameter steel pipeline. The gas pressure is reduced to 20 bar at RMS stations and distributed to district regulators inside the city. 110 of the 750 district regulators are instrumented with strong-motion accelerometers in order to cut gas flow during an earthquake event if ground-motion parameters exceed certain threshold levels. Also, state-of-the-art protection systems automatically cut natural gas flow when breaks in the gas pipelines are detected. IGDAS uses a sophisticated SCADA (supervisory control and data acquisition) system to monitor the state of health of its pipeline network. This system provides real-time information about quantities related to pipeline monitoring, including input-output pressure, drawing information, positions of station and RTU (remote terminal unit) gates, and slam-shut mechanism status at the 750 district regulator sites. The IGDAS real-time earthquake risk reduction algorithm follows 4 stages as below: 1) Real-time ground motion data are transmitted from 110 IGDAS and 110 KOERI (Kandilli Observatory and Earthquake Research Institute) acceleration stations to the IGDAS SCADA center and the KOERI data center. 2) During an earthquake event, EEW information is sent from the IGDAS SCADA center to the IGDAS stations. 3) Automatic shut-off is applied at IGDAS district regulators, and calculated parameters are sent from the stations to the IGDAS SCADA center and KOERI. 4) Integrated building and gas pipeline damage maps are prepared immediately after the earthquake event. Today's technology allows rapid estimation of the
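
    Stage 3 of the algorithm, the automatic shut-off decision at a district regulator, reduces to a threshold comparison on locally recorded ground-motion parameters combined with the warning and break-detection signals. The sketch below is a simplified illustration with made-up threshold values and field names, not the IGDAS operating criteria.

```python
from dataclasses import dataclass

@dataclass
class RegulatorState:
    station_id: str
    pga_gal: float          # locally recorded peak ground acceleration, gal
    eew_alert: bool         # early-warning message received via the SCADA center
    pipeline_break: bool    # break detected by the protection system

def shut_off_decision(state: RegulatorState, pga_threshold: float = 100.0) -> bool:
    """Simplified stage-3 logic: close the regulator if local shaking exceeds
    a threshold, an EEW alert has arrived, or a pipeline break is detected.
    The threshold and the combination rule are illustrative placeholders."""
    return state.pipeline_break or state.eew_alert or state.pga_gal >= pga_threshold
```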

  12. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  13. Possible cause for an improbable earthquake: The 1997 MW 4.9 southern Alabama earthquake and hydrocarbon recovery

    USGS Publications Warehouse

    Gomberg, J.; Wolf, L.

    1999-01-01

    Circumstantial and physical evidence indicates that the 1997 MW 4.9 earthquake in southern Alabama may have been related to hydrocarbon recovery. Epicenters of this earthquake and its aftershocks were located within a few kilometers of active oil and gas extraction wells and two pressurized injection wells. Main shock and aftershock focal depths (2-6 km) are within a few kilometers of the injection and withdrawal depths. Strain accumulation at geologic rates sufficient to cause rupture at these shallow focal depths is not likely. A paucity of prior seismicity is difficult to reconcile with the occurrence of an earthquake of MW 4.9 and a magnitude-frequency relationship usually assumed for natural earthquakes. The normal-fault main-shock mechanism is consistent with reactivation of preexisting faults in the regional tectonic stress field. If the earthquake were purely tectonic, however, the question arises as to why it occurred on only the small fraction of a large, regional fault system coinciding with active hydrocarbon recovery. No obvious temporal correlation is apparent between the earthquakes and recovery activities. Although thus far little can be said quantitatively about the physical processes that may have caused the 1997 sequence, a plausible explanation involves the poroelastic response of the crust to extraction of hydrocarbons.

  14. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by biomedical engineering applications in early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for locating small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed the cases in which the anomaly is small or large relative to the wavelength, and linked the structure of the left-singular vectors to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
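
    A toy numerical sketch of the MUSIC imaging idea described above follows: form a multi-static response matrix, take its singular value decomposition, project a steering vector onto the noise subspace, and map the reciprocal of that projection. The free-space steering vector and the circular antenna layout are assumptions made here for illustration, not the paper's S-parameter model.

    ```python
    """Toy sketch of MUSIC location imaging from a multi-static response (MSR)
    matrix.  The steering vector is a simple 2-D free-space assumption."""
    import numpy as np

    rng = np.random.default_rng(0)
    wavelength = 1.0
    k = 2 * np.pi / wavelength

    # Assumed antenna positions on a circle and one small anomaly inside it.
    n_ant = 12
    angles = np.linspace(0, 2 * np.pi, n_ant, endpoint=False)
    antennas = 3.0 * np.column_stack([np.cos(angles), np.sin(angles)])
    anomaly = np.array([0.7, -0.4])

    def steering(point):
        """Free-space steering vector from all antennas to a trial point."""
        r = np.linalg.norm(antennas - point, axis=1)
        return np.exp(1j * k * r) / r

    # Synthetic MSR matrix for a single point-like scatterer plus small noise.
    g_true = steering(anomaly)
    msr = np.outer(g_true, g_true)
    msr += 0.01 * (rng.standard_normal(msr.shape) + 1j * rng.standard_normal(msr.shape))

    # MUSIC: signal subspace = left singular vectors of the significant singular values.
    U, s, _ = np.linalg.svd(msr)
    n_sig = int(np.sum(s > 0.1 * s[0]))
    noise_proj = np.eye(n_ant) - U[:, :n_sig] @ U[:, :n_sig].conj().T

    def music_value(point):
        g = steering(point)
        g = g / np.linalg.norm(g)
        return 1.0 / np.linalg.norm(noise_proj @ g)

    # Evaluate the imaging functional on a coarse grid; the peak marks the anomaly.
    grid = np.linspace(-2, 2, 81)
    img = np.array([[music_value(np.array([x, y])) for x in grid] for y in grid])
    iy, ix = np.unravel_index(np.argmax(img), img.shape)
    print(f"true anomaly at {anomaly}, MUSIC peak at ({grid[ix]:.2f}, {grid[iy]:.2f})")
    ```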

  15. Measurement of neutron and charged particle fluxes toward earthquake prediction

    NASA Astrophysics Data System (ADS)

    Maksudov, Asatulla U.; Zufarov, Mars A.

    2017-12-01

    In this paper, we describe a possible method for predicting earthquakes, based on the simultaneous recording of the intensity of neutron and charged-particle fluxes with detectors commonly used in nuclear physics. These low-energy particles originate from radioactive nuclear processes in the Earth's crust. Variations in the particle flux intensity can be a precursor of an earthquake. A description is given of an electronic installation that records the fluxes of charged particles in the radial direction, which are a possible response to the accumulated tectonic stresses in the Earth's crust. The results obtained showed an increase in the intensity of the fluxes for 10 or more hours before the occurrence of an earthquake. The previous version of the installation was able to indicate the possibility of an earthquake (Maksudov et al. in Instrum Exp Tech 58:130-131, 2015), but did not give information about the direction of the epicenter location. The installation was therefore modified by adding eight directional detectors. With the upgraded setup, we have obtained both predictive signals and signals indicating the direction of the forthcoming earthquake's location, starting 2-3 days before its occurrence.

  16. Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling

    NASA Astrophysics Data System (ADS)

    Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea

    2017-12-01

    The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years, four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, taking into account the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties of the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green’s function deconvolution in order to constrain the source parameters as tightly as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes of the Vrancea subcrustal seismogenic source.
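
    The spectral-ratio step mentioned above can be illustrated with synthetic omega-square spectra: taking the ratio of a master event's spectrum to that of a co-located smaller event recorded at the same station cancels the common path and site terms, leaving plateaus controlled by the moment ratio and the two corner frequencies. All values in the sketch below are illustrative, not the study's measurements.

    ```python
    """Sketch of the spectral-ratio idea for co-located event pairs.  The
    Brune-type source model and all parameter values are assumptions."""
    import numpy as np

    def brune_spectrum(freq, moment, fc):
        """Omega-square (Brune) source spectrum."""
        return moment / (1.0 + (freq / fc) ** 2)

    freq = np.logspace(-1, 2, 400)                    # Hz
    path_site = np.exp(-np.pi * freq * 0.03)          # common attenuation term (assumed)

    # Master (larger) and empirical-Green's-function (smaller) events.
    spec_master = brune_spectrum(freq, moment=1e16, fc=1.5) * path_site
    spec_egf    = brune_spectrum(freq, moment=1e14, fc=8.0) * path_site

    ratio = spec_master / spec_egf                    # path/site terms cancel here

    # The low-frequency plateau gives the moment ratio; the high-frequency
    # plateau additionally constrains the two corner frequencies.
    print(f"low-f plateau  ~ {ratio[0]:.1f}  (moment ratio = 100)")
    print(f"high-f plateau ~ {ratio[-1]:.2f} (moment ratio * (fc_master/fc_egf)^2 ~ 3.5)")
    ```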

  17. Seismic tomography of the area of the 2010 Beni-Ilmane earthquake sequence, north-central Algeria.

    PubMed

    Abacha, Issam; Koulakov, Ivan; Semmane, Fethi; Yelles-Chaouche, Abd Karim

    2014-01-01

    The region of Beni-Ilmane (District of M'sila, north-central Algeria) was the site of an earthquake sequence that started on 14 May 2010. This sequence, which lasted several months, was triggered by conjugate E-W reverse and N-S dextral faulting. To image the crustal structure of these active faults, we used a set of 1406 well-located aftershock events and applied the local tomography software (LOTOS) algorithm, which includes absolute source location, optimization of the initial 1D velocity model, and iterative tomographic inversion for 3D seismic P- and S-wave velocities (and the Vp/Vs ratio) and source parameters. The patterns of P-wave low-velocity anomalies correspond to the alignments of faults determined from geological evidence, and the P-wave high-velocity anomalies may represent rigid blocks of the upper crust that are not deformed by regional stresses. The S-wave low-velocity anomalies coincide with the aftershock area, where relatively high values of the Vp/Vs ratio (1.78) are observed compared with values in the surrounding areas (1.62-1.66). These high values may indicate high fluid content in the aftershock area. These fluids could have been released from deeper levels by fault movements during earthquakes and migrated rapidly upwards. This hypothesis is supported by vertical sections across the study area, which show that the major Vp/Vs anomalies are located above the seismicity clusters.

  18. Time-dependent earthquake forecasting: Method and application to the Italian region

    NASA Astrophysics Data System (ADS)

    Chan, C.; Sorensen, M. B.; Grünthal, G.; Hakimhashemi, A.; Heidbach, O.; Stromeyer, D.; Bosse, C.

    2009-12-01

    We develop a new approach for time-dependent earthquake forecasting and apply it to the Italian region. In our approach, the seismicity density is represented by a bandwidth function serving as a smoothing kernel in the neighbourhood of each earthquake. To include fault-interaction-based forecasting, we calculate the Coulomb stress change imparted by each earthquake in the study area. From this, the change of seismicity rate as a function of time can be estimated through the concept of rate-and-state stress transfer. We apply our approach to the region of Italy and to earthquakes that occurred before 2003 to generate the seismicity density. To validate our approach, we compare the estimated seismicity density with the distribution of earthquakes with M≥3.8 after 2004. A positive correlation is found, and all of the examined earthquakes locate within the area of the highest 66th percentile of seismicity density in the study region. Furthermore, the seismicity density corresponding to the epicenter of the 2009 April 6, Mw = 6.3, L’Aquila earthquake lies within the area of the highest 5th percentile. For the time-dependent seismicity rate change, we estimate the rate-and-state stress transfer imparted by the M≥5.0 earthquakes that occurred in the past 50 years. The results suggest that the seismicity rate has increased at the locations of 65% of the examined earthquakes. Applying this approach to the L’Aquila sequence, considering seven M≥5.0 aftershocks as well as the main shock, yields significant forecasting of the aftershock distribution in both space and time.
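
    The kernel-smoothing step described above can be sketched as follows. The Gaussian kernel, the bandwidth and the synthetic catalogue are assumptions for illustration, not the study's actual bandwidth function or data.

    ```python
    """Minimal sketch of a kernel-smoothed seismicity density: sum a Gaussian
    smoothing kernel over past epicentres and read the density at a target point."""
    import numpy as np

    def seismicity_density(target, epicentres, bandwidth_km=10.0):
        """Gaussian-kernel density (unnormalised) at `target` from past epicentres.

        Coordinates are treated as local Cartesian kilometres for simplicity;
        a real application would use proper geographic distances."""
        d2 = np.sum((epicentres - target) ** 2, axis=1)
        return np.sum(np.exp(-d2 / (2.0 * bandwidth_km ** 2)))

    rng = np.random.default_rng(1)
    past_events = rng.normal(loc=[50.0, 50.0], scale=15.0, size=(500, 2))  # synthetic catalogue

    for pt in ([50.0, 50.0], [120.0, 120.0]):
        print(f"density at {pt}: {seismicity_density(np.array(pt), past_events):.2f}")
    ```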

  19. Earthquakes and depleted gas reservoirs: which comes first?

    NASA Astrophysics Data System (ADS)

    Mucciarelli, M.; Donda, F.; Valensise, G.

    2015-10-01

    While scientists are paying increasing attention to the seismicity potentially induced by hydrocarbon exploitation, so far, little is known about the reverse problem, i.e. the impact of active faulting and earthquakes on hydrocarbon reservoirs. The 20 and 29 May 2012 earthquakes in Emilia, northern Italy (Mw 6.1 and 6.0), raised concerns among the public for being possibly human-induced, but also shed light on the possible use of gas wells as a marker of the seismogenic potential of an active fold and thrust belt. We compared the location, depth and production history of 455 gas wells drilled along the Ferrara-Romagna arc, a large hydrocarbon reserve in the southeastern Po Plain (northern Italy), with the location of the inferred surface projection of the causative faults of the 2012 Emilia earthquakes and of two pre-instrumental damaging earthquakes. We found that these earthquake sources fall within a cluster of sterile wells, surrounded by productive wells at a few kilometres' distance. Since the geology of the productive and sterile areas is quite similar, we suggest that past earthquakes caused the loss of all natural gas from the potential reservoirs lying above their causative faults. To validate our hypothesis we performed two different statistical tests (binomial and Monte Carlo) on the relative distribution of productive and sterile wells, with respect to seismogenic faults. Our findings have important practical implications: (1) they may allow major seismogenic sources to be singled out within large active thrust systems; (2) they suggest that reservoirs hosted in smaller anticlines are more likely to be intact; and (3) they also suggest that in order to minimize the hazard of triggering significant earthquakes, all new gas storage facilities should use exploited reservoirs rather than sterile hydrocarbon traps or aquifers.
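
    The Monte Carlo test described above can be sketched on synthetic data as a permutation test: compare the observed fraction of sterile wells lying close to a fault with the fractions obtained when the sterile/productive labels are randomly reshuffled. The geometry, labels and cutoff distance below are stand-ins, not the study's well data.

    ```python
    """Sketch of a permutation-style Monte Carlo test: is the fraction of
    sterile wells near a seismogenic fault larger than expected by chance?"""
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic wells; the fault is taken as the line x = 0 for simplicity.
    n_wells = 455
    x = rng.uniform(-30.0, 30.0, n_wells)                   # km from fault
    sterile = np.abs(x) < rng.uniform(3.0, 40.0, n_wells)   # sterile more likely near fault

    def near_fault_fraction(is_sterile, dist, cutoff_km=5.0):
        """Fraction of wells within `cutoff_km` of the fault that are sterile."""
        near = np.abs(dist) < cutoff_km
        return np.mean(is_sterile[near]) if near.any() else 0.0

    observed = near_fault_fraction(sterile, x)

    # Null hypothesis: sterile/productive labels are unrelated to fault distance.
    n_sim = 10000
    null = np.array([near_fault_fraction(rng.permutation(sterile), x) for _ in range(n_sim)])
    p_value = np.mean(null >= observed)
    print(f"observed sterile fraction near fault: {observed:.2f}, permutation p-value: {p_value:.4f}")
    ```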

  20. a Buffer Analysis Based on Co-Location Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, S.; Wang, H.; Zhang, R.; Wang, Q.; Sha, H.; Liu, X.; Pan, Q.

    2018-05-01

    Buffer analysis is a common spatial-analysis tool that deals with problems of proximity in GIS. It examines the relationship between a central object and the other objects within a certain distance of it. Buffer analysis can present complicated problems more scientifically and visually, and provide valuable information to users. Over the past decades, much research has been done on buffer analysis. As the accuracy demanded of spatial analysis steadily increases, users expect its results to express the actual situation more exactly. Because of various factors, the extent to which a geographic element influences or contacts the surrounding objects is uncertain. Each object in nature has its own characteristics and rules of change; objects are both independent of and related to one another. However, almost all existing buffer-generation algorithms are based on a fixed buffer distance and do not consider the co-location relationships among instances. Consequently, resources are wasted retrieving useless information while useful information is ignored.
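
    For contrast with the co-location idea motivating this work, the sketch below shows the conventional fixed-distance buffer query on synthetic coordinates; it is not the paper's algorithm.

    ```python
    """Minimal sketch of a conventional fixed-distance buffer query: select all
    objects within a single fixed radius of a centre object."""
    import numpy as np

    def fixed_distance_buffer(centre, objects, radius):
        """Return indices of objects falling inside a circular buffer of `radius`."""
        d = np.linalg.norm(objects - centre, axis=1)
        return np.flatnonzero(d <= radius)

    rng = np.random.default_rng(7)
    points = rng.uniform(0, 100, size=(200, 2))     # surrounding geographic objects
    centre = np.array([50.0, 50.0])                 # the centre object

    inside = fixed_distance_buffer(centre, points, radius=10.0)
    print(f"{inside.size} of {len(points)} objects fall inside the 10-unit buffer")
    ```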

  1. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2012

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Haney, Matthew M.; Parker, Tom; Searcy, Cheryl; Prejean, Stephanie

    2013-01-01

    Between January 1 and December 31, 2012, the Alaska Volcano Observatory located 4,787 earthquakes, of which 4,211 occurred within 20 kilometers of the 33 volcanoes monitored by a seismograph network. There was significant seismic activity at Iliamna, Kanaga, and Little Sitkin volcanoes in 2012. Instrumentation highlights for this year include the implementation of the Advanced National Seismic System Quake Monitoring System hardware and software in February 2012 and the continuation of the American Recovery and Reinvestment Act work in the summer of 2012. The operational highlight was the removal of Mount Wrangell from the list of monitored volcanoes. This catalog includes hypocenters, magnitudes, and statistics of the earthquakes located in 2012 with the station parameters, velocity models, and other files used to locate these earthquakes.

  2. Long-Delayed Aftershocks in New Zealand and the 2016 M7.8 Kaikoura Earthquake

    NASA Astrophysics Data System (ADS)

    Shebalin, P.; Baranov, S.

    2017-10-01

    We study the aftershock sequences of six major earthquakes in New Zealand, including the 2016 M7.8 Kaikoura and 2016 M7.1 North Island earthquakes. For the Kaikoura earthquake, we assess the expected number of long-delayed large aftershocks of M5+ and M5.5+ in two periods, 0.5 and 3 years after the main shock, using 75 days of available data. We compare the results with those obtained for the other sequences using the same 75-day period. We estimate the errors by considering a set of magnitude thresholds and the corresponding periods of data completeness and consistency. To avoid overestimating the expected rates of large aftershocks, we presume a break of slope in the magnitude-frequency relation of the aftershock sequences and compare two models, with and without the break of slope. Comparing the estimates to the actual number of long-delayed large aftershocks, we observe, in general, a significant underestimation of their expected number. We suppose that the long-delayed aftershocks may reflect larger-scale processes, including the interaction of faults, that complement an isolated relaxation process. In the spirit of this hypothesis, we search for symptoms of the capacity of the aftershock zone to generate large events months after the major earthquake. We adapt the EAST algorithm, which studies the statistics of early aftershocks, to the case of secondary aftershocks within the aftershock sequences of major earthquakes. In retrospective application to the considered cases, the algorithm demonstrates an ability to detect long-delayed aftershocks in advance, in both the time and space domains. Application of the EAST algorithm to the 2016 M7.8 Kaikoura earthquake zone indicates that the most likely area for a delayed aftershock of M5.5+ or M6+ is at the northern end of the zone in Cook Strait.

  3. Automated Radar Image of Deformation for Amatrice, Italy Earthquake

    NASA Image and Video Library

    2016-08-31

    Amatrice earthquake in central Italy, which caused widespread building damage to several towns throughout the region. This earthquake was the strongest in that area since the 2009 earthquake that destroyed the city of L'Aquila. The Advanced Rapid Imaging and Analysis (ARIA) data system, a collaborative project between NASA's Jet Propulsion Laboratory, Pasadena, California, and the California Institute of Technology in Pasadena, automatically generated interferometric synthetic aperture radar images from the Copernicus Sentinel 1A satellite, operated by the European Space Agency (ESA) for the European Commission, to calculate a map of the deformation of Earth's surface caused by the quake. This false-color map shows the amount of permanent surface movement, as viewed by the satellite, during a 12-day interval between two Sentinel 1 images acquired on Aug. 15, 2016, and Aug. 27, 2016. The movement was caused almost entirely by the earthquake. In this map, the colors of the surface displacements are proportional to the surface motion. The red and pink tones show the areas where the land moved toward the satellite by up to 2 inches (5 centimeters). The area with various shades of blue moved away from the satellite, mostly downward, by as much as 8 inches (20 centimeters). Contours on the surface motion are at intervals of 2 inches (5 centimeters). The green star marks the epicenter where the earthquake started, as located by the U.S. Geological Survey National Earthquake Information Center. Black dots show town locations. Scientists use these maps to build detailed models of the fault slip at depth and the associated land movements to better understand the impact on future earthquake activity. The map shows that the fault or faults that moved in the earthquake are about 14 miles (22 kilometers) long, lie between Amatrice and Norcia, and slope to the west beneath the area that moved downward. http://photojournal.jpl.nasa.gov/catalog/PIA20896

  4. Detailed source process of the 2007 Tocopilla earthquake.

    NASA Astrophysics Data System (ADS)

    Peyrat, S.; Madariaga, R.; Campos, J.; Asch, G.; Favreau, P.; Bernard, P.; Vilotte, J.

    2008-05-01

    We investigated the detailed rupture process of the Tocopilla earthquake (Mw 7.7) of 14 November 2007 and of the main aftershocks that occurred in the southern part of the North Chile seismic gap, using strong-motion data. The earthquake happened in the middle of the permanent broadband and strong-motion network IPOC, newly installed by GFZ and IPGP, and of a digital strong-motion network operated by the University of Chile. The Tocopilla earthquake is the latest large thrust subduction earthquake to have occurred in this gap since the major 1877 Iquique earthquake, which produced a destructive tsunami. The Arequipa (2001) and Antofagasta (1995) earthquakes already ruptured the northern and southern parts of the gap, and the intraplate intermediate-depth Tarapaca earthquake (2005) may have changed the tectonic loading of this part of the Peru-Chile subduction zone. For large earthquakes, the depth of the seismic rupture is bounded by the depth of the seismogenic zone. What controls the horizontal extent of the rupture for large earthquakes is less clear. Factors that influence the extent of the rupture include fault geometry, variations of material properties and stress heterogeneities inherited from the previous rupture history. For subduction zones where structures are not well known, what may have stopped the rupture is not obvious. One crucial problem raised by the Tocopilla earthquake is to understand why it did not extend further north and, to the south, what role is played by the Mejillones Peninsula, which seems to act as a barrier. The focal mechanism was determined using teleseismic waveform inversion and a geodetic analysis (cf. Campos et al.; Bejarpi et al., in the same session). We studied the detailed source process using the available strong-motion data. This earthquake ruptured the interplate seismic zone over more than 150 km and generated several large aftershocks, mainly located south of the rupture area. The strong-motion data show clearly two S

  5. Normal Fault Type Earthquakes Off Fukushima Region - Comparison of the 1938 Events and Recent Earthquakes -

    NASA Astrophysics Data System (ADS)

    Murotani, S.; Satake, K.

    2017-12-01

    Off the Fukushima region, Mjma 7.4 (event A) and 6.9 (event B) events occurred on November 6, 1938, following the thrust-fault-type earthquakes of Mjma 7.5 and 7.3 on the previous day. These earthquakes were identified as normal-fault earthquakes by Abe (1977, Tectonophysics). An Mjma 7.0 earthquake occurred on July 12, 2014 near event B, and an Mjma 7.4 earthquake occurred on November 22, 2016 near event A. These recent events are the only M 7 class earthquakes to have occurred off Fukushima since 1938. Except for the two 1938 events, normal-fault earthquakes did not occur there until the many aftershocks of the 2011 Tohoku earthquake. We compared the observed tsunami and seismic waveforms of the 1938, 2014, and 2016 earthquakes to examine the normal-fault earthquakes occurring off the Fukushima region. It is difficult to compare the tsunami waveforms of the 1938, 2014 and 2016 events because there were only a few observations at the same station. The teleseismic body-wave inversion of the 2016 earthquake yielded a focal mechanism with strike 42°, dip 35°, and rake -94°. Other source parameters were as follows: source area 70 km x 40 km, average slip 0.2 m, maximum slip 1.2 m, seismic moment 2.2 x 10^19 Nm, and Mw 6.8. A large slip area is located near the hypocenter, and it is compatible with the tsunami source area estimated from tsunami travel times. The 2016 tsunami source area is smaller than that of the 1938 event, consistent with the difference in Mw: 7.7 for event A estimated by Abe (1977) and 6.8 for the 2016 event. Although the 2014 epicenter is very close to that of event B, the teleseismic waveforms of the 2014 event are similar to those of event A and the 2016 event. While Abe (1977) assumed that the mechanism of event B was the same as that of event A, the initial motions at some stations are opposite, indicating that the focal mechanisms of events A and B are different and that more detailed examination is needed. The normal-fault-type earthquake seems to occur following the
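
    As a quick consistency check of the source parameters quoted above, the sketch below applies the standard moment-magnitude relation to the reported seismic moment and, under an assumed rigidity, recovers an average slip close to the reported value. The rigidity is an assumption, not a value from the study.

    ```python
    """Consistency check of the quoted source parameters using the standard
    moment-magnitude relation Mw = (2/3)(log10(M0) - 9.1), with M0 in N*m."""
    import math

    M0 = 2.2e19                                   # seismic moment from the abstract, N*m
    Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
    print(f"Mw = {Mw:.2f}")                       # ~6.8, matching the reported value

    # Average slip check (assumed rigidity mu = 30 GPa): M0 = mu * A * D
    mu, area = 3.0e10, 70e3 * 40e3                # Pa, m^2 (70 km x 40 km from the abstract)
    print(f"average slip ~ {M0 / (mu * area):.2f} m")   # ~0.26 m vs. 0.2 m reported
    ```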

  6. AC-DCFS: a toolchain implementation to Automatically Compute Coulomb Failure Stress changes after relevant earthquakes.

    NASA Astrophysics Data System (ADS)

    Alvarez-Gómez, José A.; García-Mayordomo, Julián

    2017-04-01

    We present an automated, free-software-based toolchain to obtain Coulomb Failure Stress change maps on fault planes of interest following the occurrence of a relevant earthquake. The system uses as input the focal mechanism data of the event and an active-fault database for the region. From the focal mechanism, the orientations of the possible rupture planes, the location of the event and the size of the earthquake are obtained. From the size of the earthquake, the dimensions of the rupture plane are obtained by means of an algorithm based on empirical relations. Using the active-fault database of the area, the stress-receiving planes are obtained, and a verisimilitude index is assigned to the source plane chosen from the two nodal planes of the focal mechanism. The resulting product is a series of layers in a format compatible with any type of GIS (or a fully edited map in PDF format) showing the possible stress change maps on the different families of fault planes present in the epicentral zone. Products of this type are generally presented in technical reports developed in the weeks following the occurrence of the event, or in scientific publications; however, they have proven useful for emergency management in the hours and days after a major event, since these stress changes are responsible for aftershocks, in addition to their use in mid-term earthquake forecasting. Automating the calculation allows it to be incorporated into the products generated by alert and surveillance agencies shortly after an earthquake occurs. It is now being implemented at the Spanish Geological Survey as one of the products that the agency would provide after the occurrence of relevant seismic series in Spain.
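
    The step that converts earthquake size into rupture-plane dimensions is sketched below. The toolchain's specific empirical relations are not stated in the abstract; the widely quoted Wells and Coppersmith (1994) all-slip-type coefficients are assumed here purely for illustration and may differ from what AC-DCFS actually uses.

    ```python
    """Sketch of deriving rupture-plane dimensions from magnitude via empirical
    scaling relations (Wells & Coppersmith 1994 all-slip-type form assumed)."""
    def rupture_dimensions(magnitude):
        """Approximate subsurface rupture length and downdip width (km) from Mw."""
        length_km = 10 ** (-2.44 + 0.59 * magnitude)
        width_km = 10 ** (-1.01 + 0.32 * magnitude)
        return length_km, width_km

    for mw in (5.5, 6.0, 6.5):
        L, W = rupture_dimensions(mw)
        print(f"Mw {mw}: length ~ {L:.1f} km, width ~ {W:.1f} km")
    ```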

  7. Did you feel it? Community-made earthquake shaking maps

    USGS Publications Warehouse

    Wald, D.J.; Wald, L.A.; Dewey, J.W.; Quitoriano, Vince; Adams, Elisabeth

    2001-01-01

    Since the early 1990's, the magnitude and location of an earthquake have been available within minutes on the Internet. Now, as a result of work by the U.S. Geological Survey (USGS) and with the cooperation of various regional seismic networks, people who experience an earthquake can go online and share information about its effects to help create a map of shaking intensities and damage. Such 'Community Internet Intensity Maps' (CIIM's) contribute greatly in quickly assessing the scope of an earthquake emergency, even in areas lacking seismic instruments.

  8. Analysis of earthquake clustering and source spectra in the Salton Sea Geothermal Field

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Chen, X.

    2015-12-01

    The Salton Sea Geothermal Field is located within the tectonic step-over between the San Andreas Fault and the Imperial Fault. Since the 1980s, geothermal energy exploration has resulted in a step-like increase of microearthquake activity, which mirrors the expansion of the geothermal field. Distinguishing naturally occurring from induced seismicity, and their corresponding characteristics (e.g., energy release), is important for hazard assessment. Between 2008 and 2014, seismic data recorded by a local borehole array were made publicly available by CalEnergy through the SCEC data center; these high-quality local recordings of over 7000 microearthquakes provide a unique opportunity to sort out the characteristics of induced versus natural activity. We obtain high-resolution earthquake locations using improved S-wave picks, waveform cross-correlation and a new 3D velocity model. We then develop a method to identify spatially and temporally isolated earthquake clusters. These clusters are classified into aftershock-type, swarm-type, and mixed-type (aftershock-like, with low skew, low magnitude and shorter duration), based on the relative timing of the largest earthquakes and the moment release. The mixed-type clusters are mostly located at 3-4 km depth near the injection wells, while aftershock-type and swarm-type clusters also occur farther from the injection wells. By counting the number of aftershocks within one day following the mainshock in each cluster, we find that the mixed-type clusters have much higher aftershock productivity than the other types and than historic M4 earthquakes. We analyze the detailed spatial variation of the b-value and find that the mixed-type clusters are mostly located within high b-value patches, while large (M>3) earthquakes and other types of clusters are located within low b-value patches. We are currently processing P- and S-wave spectra to analyze the spatial-temporal correlation of earthquake stress parameters and seismicity characteristics. Preliminary results suggest that the
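
    A common ingredient of the b-value mapping mentioned above is a maximum-likelihood estimate of b per spatial patch. The sketch below uses the Aki (1965) estimator on a synthetic catalogue; the study's own estimator, completeness treatment and patch definition may differ.

    ```python
    """Sketch of a maximum-likelihood b-value estimate on a synthetic catalogue."""
    import numpy as np

    def b_value(magnitudes, completeness_mag):
        """Aki (1965) maximum-likelihood b-value for events above the completeness magnitude.
        (For catalogues binned in magnitude, a half-bin-width correction is usually added.)"""
        m = np.asarray(magnitudes)
        m = m[m >= completeness_mag]
        return np.log10(np.e) / (m.mean() - completeness_mag)

    rng = np.random.default_rng(3)
    # Synthetic Gutenberg-Richter catalogue with true b = 1.0 above Mc = 1.0
    mags = 1.0 + rng.exponential(scale=np.log10(np.e), size=5000)
    print(f"estimated b-value: {b_value(mags, completeness_mag=1.0):.2f}")
    ```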

  9. Accessing northern California earthquake data via Internet

    NASA Astrophysics Data System (ADS)

    Romanowicz, Barbara; Neuhauser, Douglas; Bogaert, Barbara; Oppenheimer, David

    The Northern California Earthquake Data Center (NCEDC) provides easy access to central and northern California digital earthquake data. It is located at the University of California, Berkeley, and is operated jointly with the U.S. Geological Survey (USGS) in Menlo Park, Calif., and funded by the University of California and the National Earthquake Hazard Reduction Program. It has been accessible to users in the scientific community through the Internet since mid-1992. The data center provides an on-line archive for parametric and waveform data from two regional networks: the Northern California Seismic Network (NCSN) operated by the USGS and the Berkeley Digital Seismic Network (BDSN) operated by the Seismographic Station at the University of California, Berkeley.

  10. Source parameters of microearthquakes on an interplate asperity off Kamaishi, NE Japan over two earthquake cycles

    USGS Publications Warehouse

    Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira

    2012-01-01

    We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~ 4.9 repeating earthquakes. The M~ 4.9 earthquake sequence is composed of nine events that occurred since 1957 which have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~ 4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~ 4.9 main shock is co-located with those of the 1995 and 2001 M~ 4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edge of the slip areas and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edge. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~ 4.9 main shock. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~ 4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. These results show that the stress drop is probably determined by the fault properties and does not change
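
    The stress drops quoted above are obtained from corner frequencies and seismic moments. A worked sketch of that conversion using the standard circular-crack relations is given below; the shear-wave speed, the Madariaga-model constant and the example corner frequencies are assumptions for illustration, not values from the study.

    ```python
    """Worked sketch of the corner-frequency-to-stress-drop conversion:
    source radius r = k * beta / fc, then delta_sigma = (7/16) * M0 / r^3."""
    import math

    def stress_drop(M0, fc, beta=3500.0, k=0.32):
        """Stress drop (Pa) from seismic moment M0 (N*m) and corner frequency fc (Hz).

        k = 0.32 corresponds to the Madariaga (1976) model for P-wave corner
        frequencies; other constants are also in common use."""
        r = k * beta / fc
        return 7.0 / 16.0 * M0 / r ** 3

    # Example roughly at the scale of the M~4.9 Kamaishi repeaters:
    M0 = 10 ** (1.5 * 4.9 + 9.1)          # N*m, from the moment-magnitude relation
    for fc in (1.0, 1.5, 2.0):            # Hz, illustrative corner frequencies
        print(f"fc = {fc:.1f} Hz -> stress drop ~ {stress_drop(M0, fc) / 1e6:.0f} MPa")
    ```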

  11. Review of variations in Mw < 7 earthquake motions on position and tec (Mw = 6.5 aegean sea earthquake sample)

    NASA Astrophysics Data System (ADS)

    Yildirim, O.; Inyurt, S.; Mekik, C.

    2015-10-01

    Turkey is located in the middle-latitude zone, where tectonic activity is intense. Recently, an earthquake of magnitude 6.5 Mw occurred offshore in the Aegean Sea on 24 May 2014 at 12:25 UTC, and it lasted approximately 40 s. The earthquake was also felt in Greece, Romania and Bulgaria, in addition to Turkey. In recent years, studies of seismically induced ionospheric anomalies have been carried out using TEC (total electron content) derived from GNSS (Global Navigation Satellite System) signals, and their findings have been reported. In this study, the TEC and positional variations associated with the Aegean Sea earthquake have been examined separately, and the correlation of the ionospheric variation with the positional variation has then been investigated. For this purpose, a total of fifteen stations have been used, comprising four CORS-TR stations in the seismic zone (AYVL, CANA, IPSA, YENC) together with IGS and EUREF stations. The ionospheric and positional variations of the AYVL, CANA, IPSA and YENC stations have been examined with the Bernese v5.0 software. Examination of the PPP-TEC values produced by the analysis shows that, at the four stations located in Turkey, three days before the earthquake at 08:00 and 10:00 UTC, the TEC values were approximately 4 TECU above the upper-limit TEC value. At the same stations, one day before the earthquake at 06:00, 08:00 and 10:00 UTC, the TEC values were approximately 5 TECU below the lower-limit TEC value. The GIM-TEC values published by the CODE center have also been examined. At all stations, three days before the earthquake, the TEC values at 08:00 and 10:00 UTC were approximately 2 TECU above the upper limit, and one day before the earthquake, at 06:00, 08:00 and 10:00 UTC, they were approximately 4 TECU below the lower-limit TEC value. Again, by using the same

  12. Review of variations in Mw < 7 earthquake motions on position and TEC (Mw = 6.5 Aegean Sea earthquake sample)

    NASA Astrophysics Data System (ADS)

    Yildirim, Omer; Inyurt, Samed; Mekik, Cetin

    2016-02-01

    Turkey is a country located in the middle latitude zone, where tectonic activity is intensive. Recently, an earthquake of magnitude 6.5 Mw occurred offshore in the Aegean Sea on 24 May 2014 at 09:25 UTC, which lasted about 40 s. The earthquake was also felt in Greece, Romania, and Bulgaria in addition to Turkey. In recent years, ionospheric anomaly detection studies have been carried out because of seismicity with total electron content (TEC) computed from the global navigation satellite system's (GNSS) signal delays and several interesting findings have been published. In this study, both TEC and positional variations have been examined separately following a moderate size earthquake in the Aegean Sea. The correlation of the aforementioned ionospheric variation with the positional variation has also been investigated. For this purpose, a total of 15 stations was used, including four continuously operating reference stations in Turkey (CORS-TR) and stations in the seismic zone (AYVL, CANA, IPSA, and YENC), as well as international GNSS service (IGS) and European reference frame permanent network (EPN) stations. The ionospheric and positional variations of the AYVL, CANA, IPSA, and YENC stations were examined using Bernese v5.0 software. When the precise point positioning TEC (PPP-TEC) values were examined, it was observed that the TEC values were approximately 4 TECU (total electron content unit) above the upper-limit TEC value at four stations located in Turkey, 3 days before the earthquake at 08:00 and 10:00 UTC. At the same stations, on the day before the earthquake at 06:00, 08:00, and 10:00 UTC, the TEC values were approximately 5 TECU below the lower-limit TEC value. The global ionosphere model TEC (GIM-TEC) values published by the Centre for Orbit Determination in Europe (CODE) were also examined. Three days before the earthquake, at all stations, it was observed that the TEC values in the time period between 08:00 and 10:00 UTC were approximately 2 TECU
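
    The "upper limit" and "lower limit" TEC tests described in the two records above are commonly implemented as sliding-window bounds. The sketch below illustrates one such scheme; the window length and the bound multiplier are assumptions, not the values used in these studies.

    ```python
    """Sketch (assumed method) of an upper/lower-bound TEC anomaly test: compare
    each day's TEC at a fixed UTC hour against bounds built from a sliding
    window of preceding days."""
    import numpy as np

    def tec_anomalies(tec_series, window=15, k=1.34):
        """Flag days whose TEC falls outside mean +/- k*std of the previous `window` days."""
        flags = []
        for i in range(window, len(tec_series)):
            past = tec_series[i - window:i]
            upper = past.mean() + k * past.std()
            lower = past.mean() - k * past.std()
            if tec_series[i] > upper:
                flags.append((i, "above upper limit", tec_series[i] - upper))
            elif tec_series[i] < lower:
                flags.append((i, "below lower limit", lower - tec_series[i]))
        return flags

    rng = np.random.default_rng(5)
    tec = 20.0 + rng.normal(0.0, 1.0, 40)       # synthetic TEC (TECU) at a fixed UTC hour
    tec[30] += 4.0                              # an injected positive anomaly
    for day, kind, amount in tec_anomalies(tec):
        print(f"day {day}: {kind} by {amount:.1f} TECU")
    ```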

  13. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, in particular inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are presented. Long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of the propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms have been developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates tsunami and/or earthquake sources and makes it possible to solve both the direct and the inverse problem. It thereby becomes possible to draw on advanced mathematical results to improve the models and to increase the resolution of the inverse problems. With TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
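
    The SVD analysis mentioned above can be illustrated with a generic ill-posed linear problem: the decay of the singular values measures the degree of ill-posedness, and truncating the small singular values gives a stable quasi-solution. The sketch below uses an assumed smoothing operator, not the actual tsunami-source operator.

    ```python
    """Sketch of SVD analysis and a truncated-SVD quasi-solution for an
    ill-posed linear problem A x = d, with a generic smoothing operator A."""
    import numpy as np

    rng = np.random.default_rng(2)
    n = 60

    # A strongly smoothing (hence ill-conditioned) forward operator.
    x_grid = np.linspace(0, 1, n)
    A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / (2 * 0.05 ** 2))

    x_true = np.sin(2 * np.pi * x_grid)                  # "true" source
    d = A @ x_true + 0.01 * rng.standard_normal(n)       # noisy data

    U, s, Vt = np.linalg.svd(A)
    print(f"condition number ~ {s[0] / s[-1]:.1e}")      # degree of ill-posedness

    def tsvd_solution(rank):
        """Quasi-solution keeping only the `rank` largest singular values."""
        return Vt[:rank].T @ ((U[:, :rank].T @ d) / s[:rank])

    for r in (5, 15, n):
        err = np.linalg.norm(tsvd_solution(r) - x_true) / np.linalg.norm(x_true)
        print(f"rank {r:2d}: relative error {err:.2f}")
    ```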

  14. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  15. Seismogenic Fault Geometry of 2010 Mw 7.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Kuo, Y.; Ku, C.; Taylor, F. W.; Huang, B.; Chen, Y.; Chao, W.; Huang, H.; Kuo, Y.; Wu, Y.; Suppe, J.

    2010-12-01

    The Solomon Islands are located in the southwestern Pacific, where the Indo-Australian Plate is subducting northeastward beneath the Pacific Plate. Due to the subduction of rugged seafloor topography, including seamounts, the seismic activity and tectonic behavior may be complicated. Seismicity in this region was anomalously low until 2007, when a megathrust rupture (Mw 8.1) occurred. More recently, on 3 January 2010, a Mw 7.1 earthquake occurred beneath the extreme outer forearc next to the trench. It was preceded by a foreshock (Mw 6.6, 50 minutes earlier) and followed within a week by two large aftershocks (Mw 6.8 and 6.0). It is interesting to note that these four focal mechanisms are very similar and that the events appear to have occurred along the interplate thrust zone between the Indo-Australian Plate and the Solomon Islands forearc. This earthquake nucleated approximately 50 km southeast of the M8.1 earthquake of April 2007, which is located on the other side of Rendova Island. Because a tsunami followed the 2010 earthquake, it is likely that submarine surface deformation accompanied the event. From the results of D-InSAR on ALOS and ERS data, plus a limited number of ground-displacement points from GPS and strong-motion seismometers, the continuous ground-displacement field is constructed and normalized. Our preliminary results show that the ground movement on Rendova Island can reach tens of centimeters, implying a shallow earthquake source, consistent with the occurrence of the triggered tsunami. In addition, the earthquake sequence retrieved from our local seismometer observation network allows us to further define the fault geometry at depth. The spatial distribution of epicenters also indicates that the seamount located in the middle divides two seismogenic asperities, which generated the 2007 and 2010 earthquakes, respectively.

  16. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2004

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Estes, Steve; Prejean, Stephanie; Sanchez, John J.; Sanches, Rebecca; McNutt, Stephen R.; Paskievitch, John

    2005-01-01

    The Alaska Volcano Observatory (AVO), a cooperative program of the U.S. Geological Survey, the Geophysical Institute of the University of Alaska Fairbanks, and the Alaska Division of Geological and Geophysical Surveys, has maintained seismic monitoring networks at historically active volcanoes in Alaska since 1988. The primary objectives of the seismic program are the real-time seismic monitoring of active, potentially hazardous, Alaskan volcanoes and the investigation of seismic processes associated with active volcanism. This catalog presents the calculated earthquake hypocenter and phase arrival data, and changes in the seismic monitoring program, for the period January 1 through December 31, 2004. The volcanoes monitored include Mount Wrangell, Mount Spurr, Redoubt Volcano, Iliamna Volcano, Augustine Volcano, the Katmai volcanic cluster (Snowy Mountain, Mount Griggs, Mount Katmai, Novarupta, Trident Volcano, Mount Mageik, Mount Martin), Mount Peulik, Aniakchak Crater, Mount Veniaminof, Pavlof Volcano, Mount Dutton, Isanotski Peaks, Shishaldin Volcano, Fisher Caldera, Westdahl Peak, Akutan Peak, Makushin Volcano, Okmok Caldera, Great Sitkin Volcano, Kanaga Volcano, Tanaga Volcano, and Mount Gareloi. Over the past year, formal monitoring of Okmok, Tanaga and Gareloi was announced following an extended period of monitoring to determine the background seismicity at each volcanic center. The seismicity at Mount Peulik was still being studied at the end of 2004 and had yet to be added to the list of monitored volcanoes in the AVO weekly update. AVO located 6928 earthquakes in 2004. Monitoring highlights in 2004 include: (1) an earthquake swarm at Westdahl Peak in January; (2) an increase in seismicity at Mount Spurr starting in February and continuing through the end of the year into 2005; (3) low-level tremor and low-frequency events related to intermittent ash and steam emissions at Mount Veniaminof between April and October; and (4) low-level tremor at Shishaldin Volcano between April and

  17. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for the analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of the prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from the prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the Mw >= 5.5 events of 1977-2004 and the target-event magnitude range 8.0 <= M < 8.5 are taken as the baseline for this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of the non-triviality of the M8 prediction algorithm.
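
    As a purely numerical illustration of the quantities defined above (not of the M8 algorithm itself), the sketch below evaluates n, the measure-weighted alarm rate τ, and H = 1 - (n + τ) for made-up counts and weights.

    ```python
    """Tiny numerical sketch of the error-diagram quantities: n (fraction of
    failures-to-predict), tau (alarm rate averaged with the measure lambda)
    and the prediction capability H = 1 - (n + tau).  All values are made up."""
    import numpy as np

    # Hypothetical outcome of a prediction experiment over 4 subareas dg.
    lam = np.array([0.4, 0.3, 0.2, 0.1])                   # normalised measure lambda(dg)
    alarm_fraction = np.array([0.30, 0.20, 0.05, 0.05])    # fraction of time each dg was in alarm
    missed, total_targets = 1, 5                           # failures-to-predict vs. all target events

    n = missed / total_targets
    tau = float(np.sum(lam * alarm_fraction))              # measure-weighted alarm rate
    H = 1.0 - (n + tau)
    print(f"n = {n:.2f}, tau = {tau:.3f}, H = {H:.3f}")
    ```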

  18. The Cause of the Cauca, Colombia, Cluster of Intermediate-Depth Earthquakes From Earthquake Relocation and Focal Mechanisms

    NASA Astrophysics Data System (ADS)

    Warren, L. M.; Chang, Y.; Prieto, G. A.

    2016-12-01

    In subducting slabs, a high seismicity rate in a concentrated volume (an earthquake cluster) is often associated with geometric complexities such as slab detachment, tearing, or contortions. The intermediate-depth Cauca, Colombia, cluster (3.5°N-5.5°N), in contrast, appears to be located in a slab without such complexities. However, previous constraints on the slab geometry are based on global data. We use regional data to investigate the cause of the Cauca cluster by estimating its geometry from earthquake relocations and stress regime from focal mechanism calculations and stress inversions. The Cauca segment of the Nazca Plate is characterized by relatively sparse seismicity away from the cluster and a narrow volcanic arc. To the northeast of the Cauca cluster, six active volcanoes are concentrated within an 80-km along-trench distance and are isolated 180 km from the rest of the northern Andes volcanic arc. The Colombian National Seismic Network, from Jan 2010 to Mar 2014, reports 433 earthquakes in the cluster at depths of 50-200 km with local magnitudes ranging from 2.0-4.7. Earthquake relocations show a continuous 20-km-thick seismic zone dipping at 33°-43°, with the angle increasing to the south. In addition, earthquakes locate in two columns that extend normal to the slab and into the mantle wedge. The focal mechanisms show various types, including down-dip extension, strike slip, and trench-parallel compression, but are consistent with a predominantly down-dip extensional stress field. The maximum and intermediate stress axes are interchangeable because of their similar magnitudes. The down-dip extensional stress regime may expel dehydrated fluid from the slab into the mantle wedge. As the fluid moves through the mantle wedge, it may generate hydrofractures and the observed mantle-wedge earthquakes. The fluid in the mantle wedge may be transported along the trench, from the steeper southern section to the more shallowly-dipping northern section, and

  19. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2005

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Estes, Steve; McNutt, Stephen R.

    2006-01-01

    The Alaska Volcano Observatory (AVO), a cooperative program of the U.S. Geological Survey, the Geophysical Institute of the University of Alaska Fairbanks, and the Alaska Division of Geological and Geophysical Surveys, has maintained seismic monitoring networks at historically active volcanoes in Alaska since 1988 (Figure 1). The primary objectives of the seismic program are the real-time seismic monitoring of active, potentially hazardous, Alaskan volcanoes and the investigation of seismic processes associated with active volcanism. This catalog presents calculated earthquake hypocenters and seismic phase arrival data, and details changes in the seismic monitoring program, for the period January 1 through December 31, 2005. The AVO seismograph network was used to monitor the seismic activity at thirty-two volcanoes within Alaska in 2005 (Figure 1). The network was augmented by two new subnetworks to monitor the Semisopochnoi Island volcanoes and Little Sitkin Volcano. Seismicity at these volcanoes was still being studied at the end of 2005 and has not yet been added to the list of permanently monitored volcanoes in the AVO weekly update. Following an extended period of monitoring to determine the background seismicity at Mount Peulik, Ukinrek Maars, and Korovin Volcano, formal monitoring of these volcanoes began in 2005. AVO located 9,012 earthquakes in 2005. Monitoring highlights in 2005 include: (1) seismicity at Mount Spurr remaining above background, starting in February 2004, through the end of the year and into 2006; (2) an increase in seismicity at Augustine Volcano starting in May 2005 and continuing through the end of the year into 2006; (3) volcanic tremor and seismicity related to low-level strombolian activity at Mount Veniaminof in January to March and September; and (4) a seismic swarm at Tanaga Volcano in October and November. This catalog includes: (1) descriptions and locations of seismic instrumentation deployed in the field in 2005; (2) a

  20. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2003

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Moran, Seth C.; Sanchez, John J.; McNutt, Stephen R.; Estes, Steve; Paskievitch, John

    2004-01-01

    The Alaska Volcano Observatory (AVO), a cooperative program of the U.S. Geological Survey, the Geophysical Institute of the University of Alaska Fairbanks, and the Alaska Division of Geological and Geophysical Surveys, has maintained seismic monitoring networks at historically active volcanoes in Alaska since 1988. The primary objectives of this program are the near-real-time seismic monitoring of active, potentially hazardous, Alaskan volcanoes and the investigation of seismic processes associated with active volcanism. This catalog presents the calculated earthquake hypocenter and phase arrival data, and changes in the seismic monitoring program, for the period January 1 through December 31, 2003. The AVO seismograph network was used to monitor the seismic activity at twenty-seven volcanoes within Alaska in 2003. These include Mount Wrangell, Mount Spurr, Redoubt Volcano, Iliamna Volcano, Augustine Volcano, the Katmai volcanic cluster (Snowy Mountain, Mount Griggs, Mount Katmai, Novarupta, Trident Volcano, Mount Mageik, Mount Martin), Aniakchak Crater, Mount Veniaminof, Pavlof Volcano, Mount Dutton, Isanotski Peaks, Shishaldin Volcano, Fisher Caldera, Westdahl Peak, Akutan Peak, Makushin Volcano, Okmok Caldera, Great Sitkin Volcano, Kanaga Volcano, Tanaga Volcano, and Mount Gareloi. Monitoring highlights in 2003 include: continuing elevated seismicity at Mount Veniaminof in January-April (volcanic unrest began in August 2002), volcanogenic seismic swarms at Shishaldin Volcano throughout the year, and low-level tremor at Okmok Caldera throughout the year. Instrumentation and data acquisition highlights in 2003 were the installation of subnetworks on Tanaga and Gareloi Islands, the installation of broadband stations on Akutan Volcano and Okmok Caldera, and the establishment of telemetry for the Okmok Caldera subnetwork. AVO located 3911 earthquakes in 2003. This catalog includes: (1) a description of instruments deployed in the field and their locations; (2) a

  1. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.
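
    The Monte Carlo enlargement step described above can be sketched as follows; the scenario intensities and the vulnerability curve below are synthetic stand-ins, not the study's supercomputer-generated samples or its vulnerability functions.

    ```python
    """Sketch of Monte Carlo estimation of exceedance probabilities: enlarge a
    small set of scenario intensities into a large sample and estimate the
    probability of exceeding a given intensity (PEI) and, through an assumed
    vulnerability function, a given loss level (PEDEC)."""
    import numpy as np

    rng = np.random.default_rng(11)

    # Pretend these came from a handful of 3D wave-propagation scenario runs (PGA in g).
    scenario_pga = np.array([0.12, 0.18, 0.22, 0.30, 0.41])

    # Monte Carlo enlargement: resample scenarios and perturb them lognormally.
    n_mc = 100000
    samples = rng.choice(scenario_pga, n_mc) * rng.lognormal(mean=0.0, sigma=0.3, size=n_mc)

    def vulnerability(pga):
        """Assumed loss ratio as a smooth function of PGA (purely illustrative)."""
        return 1.0 / (1.0 + np.exp(-(pga - 0.35) / 0.08))

    losses = vulnerability(samples)

    pei = np.mean(samples > 0.4)        # P(intensity exceeds 0.4 g)
    pedec = np.mean(losses > 0.5)       # P(loss ratio exceeds 50 %)
    print(f"P[PGA > 0.4 g] ~ {pei:.3f},  P[loss ratio > 0.5] ~ {pedec:.3f}")
    ```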

  2. Some characteristics of the complex El Mayor-Cucapah, MW7.2, April 4, 2010, Baja California, Mexico, earthquake, from well-located aftershock data from local and regional networks.

    NASA Astrophysics Data System (ADS)

    Frez, J.; Nava Pichardo, F. A.; Acosta, J.; Munguia, L.; Carlos, J.; García, R.

    2015-12-01

    Aftershocks from the El Mayor-Cucapah (EMC), MW7.2, April 4, 2010, Baja California, Mexico, earthquake, were recorded over two months by a 31 station local array (Reftek RT130 seismographs loaned from IRIS-PASSCAL), complemented by regional data from SCSN, and CICESE. The resulting data base includes 518 aftershocks with ML ≥ 3.0, plus 181 smaller events. Reliable hypocenters were determined using HYPODD and a velocity structure determined from refraction data for a mesa located to the west of the Mexicali-Imperial Valley. Aftershock hypocenters show that the El Mayor-Cucapah earthquake was a multiple event comprising two or three different ruptures of which the last one constituted the main event. The main event rupture, which extends in a roughly N45°W direction, is complex with well-defined segments having different characteristics. The main event central segment, located close to the first event epicenter is roughly vertical, the northwest segment dips ~68°NE, while the two southeast segments dip ~60°SW and ~52°SW, respectively, which agrees with results of previous studies based on teleseismic long periods and on GPS-INSAR. All main rupture aftershock hypocenters have depths above 10-11km and, except for the central segment, they delineate the edges of zones with largest coseismic displacement. The two southern segments show seismicity concentrated below 5km and 3.5km, respectively; the paucity of shallow seismicity may be caused by the thick layer of non-consolidated sediments in this region. The ruptures delineated by aftershocks in the southern regions correspond to the Indiviso fault, unidentified until the occurrence of the EMC earthquake. The first event was relocated together with the aftershocks; the epicenter lies slightly westwards of published locations, but it definitely does not lie on, or close to, the main rupture. The focal mechanism of the first event, based on first arrival polarities, is predominantly strike-slip; the focal plane

  3. GPS coseismic and postseismic surface displacements of the El Mayor-Cucapah earthquake

    NASA Astrophysics Data System (ADS)

    Gonzalez, A.; Gonzalez-Garcia, J. J.; Sandwell, D. T.; Fialko, Y.; Agnew, D. C.; Lipovsky, B.; Fletcher, J. M.; Nava Pichardo, F. A.

    2010-12-01

    GPS surveys were performed after the El Mayor-Cucapah earthquake (Mw 7.2) in northern Baja California by scientists from CICESE, UCSD, and UCR. Six of the sites were occupied within a day of the earthquake and for several weeks afterwards to capture the postseismic deformation. We calculated the coseismic displacements for 22 sites with previously determined secular velocities in the ITRF2005 reference frame and found a maximum horizontal displacement of 1.160±0.016 m near the epicentral area at the La Puerta location, and a vertical offset of 0.636±0.036 m near Ejido Durango. Most of the GPS sites are located east of the main rupture in the Mexicali Valley; five are located to the west at Sierra Juarez and to the south near San Felipe. We present the pre-earthquake velocity field, along with coseismic displacements and early postseismic features related to the El Mayor-Cucapah earthquake.

  4. The California Post-Earthquake Information Clearinghouse: A Plan to Learn From the Next Large California Earthquake

    NASA Astrophysics Data System (ADS)

    Loyd, R.; Walter, S.; Fenton, J.; Tubbesing, S.; Greene, M.

    2008-12-01

    In the rush to remove debris after a damaging earthquake, perishable data related to a wide range of impacts on the physical, built and social environments can be lost. The California Post-Earthquake Information Clearinghouse is intended to prevent this data loss by supporting the earth scientists, engineers, and social and policy researchers who will conduct fieldwork in the affected areas in the hours and days following the earthquake to study these effects. First called for by Governor Ronald Reagan following the destructive M6.5 San Fernando earthquake in 1971, the concept of the Clearinghouse has since been incorporated into the response plans of the National Earthquake Hazard Reduction Program (USGS Circular 1242). This presentation is intended to acquaint scientists with the purpose, functions, and services of the Clearinghouse. Typically, the Clearinghouse is set up in the vicinity of the earthquake within 24 hours of the mainshock and is maintained for several days to several weeks. It provides a location where field researchers can assemble to share and discuss their observations, plan and coordinate subsequent field work, and communicate significant findings directly to the emergency responders and to the public through press conferences. As the immediate response effort winds down, the Clearinghouse will ensure that collected data are archived and made available through "lessons learned" reports and publications that follow significant earthquakes. Participants in the quarterly meetings of the Clearinghouse include representatives from state and federal agencies, universities, NGOs and other private groups. Overall management of the Clearinghouse is delegated to the agencies represented by the authors above.

  5. Investigation of Backprojection Uncertainties With M6 Earthquakes

    NASA Astrophysics Data System (ADS)

    Fan, Wenyuan; Shearer, Peter M.

    2017-10-01

    We investigate possible biasing effects of inaccurate timing corrections on teleseismic P wave backprojection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-M7 earthquakes over a 10-year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross correlation of their initial P wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare backprojection images for each earthquake using its own timing corrections with those obtained using the time corrections from other earthquakes. This provides a measure of how well subevents can be resolved with backprojection of a large rupture as a function of distance from the hypocenter. Our results show that backprojection is generally very robust and that the median subevent location error is about 25 km across the entire study region (~700 km). The backprojection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3-D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine backprojection images using aftershock calibration, at least in this region.
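
    As a rough illustration of the hypocenter timing corrections mentioned above, the sketch below (assumed function and variable names, synthetic data) estimates the relative time shift between two P-wave windows from the peak of their cross-correlation; it is a generic alignment routine, not the processing actually used in the study.

      import numpy as np

      def cc_delay(trace_a, trace_b, dt):
          """Delay (s) of trace_b relative to trace_a, from the peak of their
          cross-correlation; positive if trace_b is a delayed copy of trace_a."""
          a = (trace_a - trace_a.mean()) / trace_a.std()
          b = (trace_b - trace_b.mean()) / trace_b.std()
          cc = np.correlate(a, b, mode="full")
          return ((len(b) - 1) - int(cc.argmax())) * dt

      # Hypothetical check: a copy delayed by 50 samples at 100 Hz gives ~ +0.5 s.
      rng = np.random.default_rng(0)
      sig = rng.normal(size=2000)
      print(cc_delay(sig, np.roll(sig, 50), dt=0.01))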

  6. Determination of Magnitude and Location of Earthquakes With Only Five Seconds of a Three Component Broadband Sensor Signal Located Near Bogota, Colombia Using Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Ochoa Gutierrez, L. H.; Vargas Jiménez, C. A.; Niño Vasquez, L. F., Sr.

    2017-12-01

    Early warning generation for earthquakes that occur near the city of Bogotá, Colombia, is extremely important. Using the information from a broadband, three-component station operated by the Servicio Geológico Colombiano (SGC), called El Rosal, which is located very near the city, we developed a model based on support vector machine (SVM) techniques with a standardized polynomial kernel, using seismic signal features as descriptors or input data, complemented by the hypocentral parameters calculated for each of the reported events. The model was trained and evaluated by cross validation and was used to predict, with only five seconds of signal, the magnitude and location of a seismic event. With the proposed model we estimated local magnitude with an accuracy of 0.19 magnitude units, epicentral distance with an accuracy of about 11 km, depth with a precision of approximately 40 km, and the azimuth of arrival with a precision of 45°. This research makes a significant contribution to early warning generation for the country, in particular for the city of Bogotá. These models will be implemented in the future in the "Red Sismológica de la Sabana de Bogotá y sus Alrededores (RSSB)," which belongs to the Universidad Nacional de Colombia.
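
    A minimal sketch of the general approach, an SVM regression with a polynomial kernel trained on early-signal features, is given below using scikit-learn; the feature set, data, and parameters are placeholders and not those of the study.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      # Hypothetical data: one row per event, columns are waveform descriptors
      # taken from the first 5 s of signal (e.g. peak amplitude, dominant
      # frequency, cumulative energy); y holds catalogue local magnitudes.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 6))
      y = 2.0 + 0.5 * X[:, 0] + rng.normal(scale=0.2, size=200)

      model = make_pipeline(StandardScaler(), SVR(kernel="poly", degree=3, C=10.0))
      scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
      print("cross-validated mean absolute error (magnitude units):", -scores.mean())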

  7. Privacy-Preserving Location-Based Query Using Location Indexes and Parallel Searching in Distributed Networks

    PubMed Central

    Liu, Lei; Zhao, Jing

    2014-01-01

    An efficient location-based query algorithm for protecting user privacy in distributed networks is presented. The algorithm uses the users' location indexes and multiple parallel threads to quickly search for and select candidate anonymous sets that contain more users and whose location information is more uniformly distributed, thereby accelerating the temporal-spatial anonymization operations; it also allows users to configure custom privacy-preserving location query requests. Simulation results show that the proposed algorithm can offer location query services to more users simultaneously, improve the performance of the anonymous server, and satisfy the users' anonymous location requests. PMID:24790579

  8. Privacy-preserving location-based query using location indexes and parallel searching in distributed networks.

    PubMed

    Zhong, Cheng; Liu, Lei; Zhao, Jing

    2014-01-01

    An efficient location-based query algorithm for protecting user privacy in distributed networks is presented. The algorithm uses the users' location indexes and multiple parallel threads to quickly search for and select candidate anonymous sets that contain more users and whose location information is more uniformly distributed, thereby accelerating the temporal-spatial anonymization operations; it also allows users to configure custom privacy-preserving location query requests. Simulation results show that the proposed algorithm can offer location query services to more users simultaneously, improve the performance of the anonymous server, and satisfy the users' anonymous location requests.

  9. Case studies of Induced Earthquakes in Ohio for 2016 and 2017

    NASA Astrophysics Data System (ADS)

    Friberg, P. A.; Brudzinski, M.; Kozlowska, M.; Loughner, E.; Langenkamp, T.; Dricker, I.

    2017-12-01

    Over the last four years, unconventional oil and gas production activity in the Utica shale play in Ohio has induced over 20 earthquake sequences (Friberg et al., 2014; Skoumal et al., 2016; Friberg et al., 2016; Kozlowska et al., in submission), including a few new ones in 2017. The majority of the induced events have been attributed to optimally oriented faults located in crystalline basement rocks, which are closer to the Utica formation than to the Marcellus shale, a shallower formation more typically targeted in Pennsylvania and West Virginia. A number of earthquake sequences in 2016 and 2017 are examined using multi-station cross-correlation template-matching techniques. We examine the Gutenberg-Richter b-values and, where possible, the b-value evolution of the earthquake sequences to help determine the seismogenesis of the events. Refined earthquake locations are determined with HypoDD using data from stations operated by the USGS, IRIS, ODNR, Miami University, and PASEIS.
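
    The single-station core of such template matching is a sliding normalized cross-correlation; a minimal sketch is shown below (in practice, detections from many stations are stacked and network-wide thresholds are applied, which the sketch omits).

      import numpy as np

      def match_template(cont, tmpl, threshold=0.7):
          """Sliding normalized cross-correlation of a waveform template
          against a continuous trace; returns detection sample indices and
          the full correlation trace."""
          n = len(tmpl)
          t = (tmpl - tmpl.mean()) / tmpl.std()
          cc = np.empty(len(cont) - n + 1)
          for i in range(len(cc)):
              w = cont[i:i + n]
              s = w.std()
              cc[i] = np.dot(t, (w - w.mean()) / s) / n if s > 0 else 0.0
          return np.flatnonzero(cc >= threshold), cc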

  10. "Repeating Events" as Estimator of Location Precision: The China National Seismograph Network

    NASA Astrophysics Data System (ADS)

    Jiang, Changsheng; Wu, Zhongliang; Li, Yutong; Ma, Tengfei

    2014-03-01

    "Repeating earthquakes" identified by waveform cross-correlation, with inter-event separation of no more than 1 km, can be used for assessment of location precision. Assuming that the network-measured apparent inter-epicenter distance X of the "repeating doublets" indicates the location precision, we estimated the regionalized location quality of the China National Seismograph Network by comparing the "repeating events" in and around China by S chaff and R ichards (Science 303: 1176-1178, 2004; J Geophys Res 116: B03309, 2011) and the monthly catalogue of the China Earthquake Networks Center. The comparison shows that the average X value of the China National Seismograph Network is approximately 10 km. The mis-location is larger for the Tibetan Plateau, west and north of Xinjiang, and east of Inner Mongolia, as indicated by larger X values. Mis-location is correlated with the completeness magnitude of the earthquake catalogue. Using the data from the Beijing Capital Circle Region, the dependence of the mis-location on the distribution of seismic stations can be further confirmed.

  11. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    USGS Publications Warehouse

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  12. Attention bias in earthquake-exposed survivors: an event-related potential study.

    PubMed

    Zhang, Yan; Kong, Fanchang; Han, Li; Najam Ul Hasan, Abbasi; Chen, Hong

    2014-12-01

    The Chinese Wenchuan earthquake, which happened on 12 May 2008, may leave deep invisible scars in individuals. China has a large number of children and adolescents, who tend to be most vulnerable because they are at an early stage of human development, and possible post-traumatic psychological distress may have life-long consequences. Trauma survivors without post-traumatic stress disorder (PTSD) have received little attention in previous studies, especially in event-related potential (ERP) studies. We compared the attention bias to threat stimuli between the earthquake-exposed group and the control group in a masked version of the dot-probe task. Trials in which the target probe appeared at the same spatial location as the earthquake-related words were congruent trials, while trials in which it appeared at the location of the neutral words were incongruent trials. Thirteen earthquake-exposed middle school students without PTSD and 13 matched controls were included in this investigation. The earthquake-exposed group showed significantly faster RTs to congruent trials than to incongruent trials. The earthquake-exposed group produced significantly shorter C1 and P1 latencies and larger C1, P1 and P2 amplitudes than the control group. In particular, enhanced P1 amplitude to threat stimuli was observed in the earthquake-exposed group. These findings are in agreement with the prediction that earthquake-exposed survivors have an attention bias to threat stimuli. The traumatic event had a much greater effect on earthquake-exposed survivors, even though they showed no PTSD symptoms, than on individuals in the control group. These results provide neurobiological evidence for effective intervention and prevention of post-traumatic mental problems. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Is earthquake rate in south Iceland modified by seasonal loading?

    NASA Astrophysics Data System (ADS)

    Jonsson, S.; Aoki, Y.; Drouin, V.

    2017-12-01

    Several temporally varying processes have the potential to modify the rate of earthquakes in the south Iceland seismic zone, one of the two most active seismic zones in Iceland. These include solid earth tides, seasonal meteorological effects and the influence of passing weather systems, and variations in snow and glacier loads. In this study we investigate the influence these processes may have on crustal stresses and stressing rates in the seismic zone and assess whether they appear to be influencing the earthquake rate. While historical earthquakes in south Iceland have preferentially occurred in early summer, this tendency is less clear for small earthquakes. The local earthquake catalogue (going back to 1991, magnitude of completeness < 1.0) does have more earthquakes in summer than in winter. However, this pattern is strongly influenced by the aftershock sequences of the largest M6+ earthquakes, which occurred in June 2000 and May 2008. Standard Reasenberg earthquake declustering and more involved model-independent stochastic declustering algorithms are not capable of fully eliminating the aftershocks from the catalogue. We therefore inspected the catalogue for the time period before 2000, and it shows only a limited seasonal tendency in earthquake occurrence. Our preliminary results show no clear correlation between earthquake rates and short-term stressing variations induced by solid earth tides or passing storms. Seasonal meteorological effects also appear to be too small to influence the earthquake activity. Snow and glacier load variations induce significant vertical motions in the area, with peak loading occurring in spring (April-May) and maximum unloading in fall (Sept.-Oct.). The early-summer occurrence of historical earthquakes therefore correlates with early unloading rather than with the peak unloading or unloading rate, which appears to indicate a limited influence of this seasonal process on the earthquake activity.

  14. Source Parameters and Rupture Directivities of Earthquakes Within the Mendocino Triple Junction

    NASA Astrophysics Data System (ADS)

    Allen, A. A.; Chen, X.

    2017-12-01

    The Mendocino Triple Junction (MTJ), a region in the Cascadia subduction zone, produces a sizable number of earthquakes each year. Direct observations of the rupture properties are difficult to achieve due to the small magnitudes of most of these earthquakes and the lack of offshore observations. The Cascadia Initiative (CI) project provides opportunities to look at these earthquakes in detail. Here we look at the transform plate-boundary fault located in the MTJ and measure source parameters of Mw≥4 earthquakes from both time-domain deconvolution and spectral analysis using the empirical Green's function (EGF) method. The second-moment method is used to infer rupture length, width, and rupture velocity from apparent source durations measured at different stations. Brune's source model is used to infer corner frequency and spectral complexity for the stacked spectral ratio. EGFs are selected based on their location relative to the mainshock, as well as their magnitude difference compared to the mainshock. For the transform fault, we first look at the largest earthquake recorded during the Year 4 CI array, an Mw 5.72 event that occurred in January 2015, and select two EGFs, an Mw 1.75 and an Mw 1.73 event, both located within 5 km of the mainshock. This earthquake is characterized by at least two sub-events, with a total duration of about 0.3 s and a rupture length of about 2.78 km. The earthquake ruptured westward along the transform fault, and both source durations and corner frequencies show strong azimuthal variations, with anti-correlation between duration and corner frequency. The stacked spectral ratio from multiple stations with the Mw 1.73 EGF event shows deviation from a pure Brune source model, following the definition of Uchide and Imanishi [2016], likely due to near-field recordings of rupture complexity. We will further analyze this earthquake using more EGF events to test the reliability and stability of the results, and further analyze three other Mw≥4 earthquakes
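
    For the spectral-ratio part of such an analysis, the mainshock corner frequency can be estimated by fitting an omega-square (Brune) spectral-ratio model to the stacked ratio; a simplified sketch with synthetic data and assumed parameter values is given below, not the study's actual processing.

      import numpy as np
      from scipy.optimize import curve_fit

      def brune_spectral_ratio(f, moment_ratio, fc_main, fc_egf):
          """Omega-square (Brune) model of the mainshock/EGF spectral ratio."""
          return moment_ratio * (1 + (f / fc_egf) ** 2) / (1 + (f / fc_main) ** 2)

      # Hypothetical stacked spectral ratio (frequencies in Hz), mildly noisy.
      freqs = np.logspace(-1, 1.3, 60)
      observed = (brune_spectral_ratio(freqs, 5e5, 0.8, 12.0)
                  * np.random.default_rng(2).lognormal(sigma=0.05, size=freqs.size))

      popt, _ = curve_fit(brune_spectral_ratio, freqs, observed,
                          p0=(1e5, 1.0, 10.0), maxfev=10000)
      print("corner frequencies (mainshock, EGF):", popt[1], popt[2])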

  15. Earthquake early warning for Romania - most recent improvements

    NASA Astrophysics Data System (ADS)

    Marmureanu, Alexandru; Elia, Luca; Martino, Claudio; Colombelli, Simona; Zollo, Aldo; Cioflan, Carmen; Toader, Victorin; Marmureanu, Gheorghe; Marius Craiu, George; Ionescu, Constantin

    2014-05-01

    The EWS for Vrancea earthquakes uses the time interval (28-32 s) between the moment when an earthquake is detected by the local seismic network installed in the epicentral area (Vrancea) and the arrival time of the seismic waves in the protected area (Bucharest) to send earthquake warnings to users. In recent years, the National Institute for Earth Physics (NIEP) upgraded its seismic network in order to better cover the seismic zones of Romania. Currently NIEP operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, dominated by the Vrancea intermediate-depth (60-200 km) earthquakes. The NIEP real-time network consists of 102 stations and two seismic arrays equipped with different high-quality digitizers (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-26, Basalt), broadband and short-period seismometers (CMG3ESP, CMG40T, KS2000, KS54000, CMG3T, STS2, SH-1, S13, Ranger, gs21, Mark l22) and acceleration sensors (Episensor). Recent improvements in the seismic network and real-time communication technologies allow the implementation of a nation-wide EEWS for Vrancea and other seismic sources in Romania. We present a regional approach to earthquake early warning for Romanian earthquakes. The regional approach is based on the PRESTo (Probabilistic and Evolutionary early warning SysTem) software platform: PRESTo processes three-channel acceleration data streams in real time; once the P-wave arrivals have been detected, it provides earthquake location and magnitude estimates, and peak ground motion predictions at target sites. PRESTo has been running in real time at the National Institute for Earth Physics, Bucharest, for several months, in parallel with a secondary EEWS. The alert notification is issued only when both systems validate each other. Here we present the results obtained using offline earthquakes originating from the Vrancea area together with several real
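
    For illustration only, a generic STA/LTA trigger on a single acceleration channel is sketched below; PRESTo uses its own picker, so this stands in for the P-wave detection step rather than reproducing it.

      import numpy as np

      def sta_lta_trigger(trace, dt, sta_win=0.5, lta_win=10.0, threshold=4.0):
          """Classic STA/LTA ratio on one acceleration channel; returns the
          first sample index where the ratio exceeds the trigger threshold,
          or None if no trigger occurs."""
          x2 = np.asarray(trace, dtype=float) ** 2
          ns, nl = int(sta_win / dt), int(lta_win / dt)
          csum = np.concatenate(([0.0], np.cumsum(x2)))
          sta = (csum[ns:] - csum[:-ns]) / ns        # short-term average
          lta = (csum[nl:] - csum[:-nl]) / nl        # long-term average
          ratio = sta[nl - ns:] / np.maximum(lta, 1e-12)
          hits = np.flatnonzero(ratio >= threshold)
          return int(hits[0]) + nl if hits.size else None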

  16. A probabilistic framework for single-station location of seismicity on Earth and Mars

    NASA Astrophysics Data System (ADS)

    Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.

    2017-01-01

    Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of
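
    The combination step described above amounts to multiplying the individual PDFs on a common grid and renormalizing; a minimal one-dimensional sketch for epicentral distance is given below (synthetic Gaussian PDFs and hypothetical names, not the framework's implementation).

      import numpy as np

      def combine_distance_pdfs(distance_grid, pdfs):
          """Combine independent per-algorithm PDFs for epicentral distance by
          taking their product and renormalizing on a common, uniform grid."""
          combined = np.ones_like(distance_grid, dtype=float)
          for p in pdfs:
              combined *= p
          dx = distance_grid[1] - distance_grid[0]
          total = combined.sum() * dx
          return combined / total if total > 0 else combined

      # Hypothetical example: a body-wave based and a surface-wave based estimate.
      grid = np.linspace(0.0, 180.0, 1801)                 # distance, degrees
      pdf_body = np.exp(-0.5 * ((grid - 62.0) / 6.0) ** 2)
      pdf_surf = np.exp(-0.5 * ((grid - 58.0) / 4.0) ** 2)
      posterior = combine_distance_pdfs(grid, [pdf_body, pdf_surf])
      print("combined maximum a posteriori distance (deg):", grid[posterior.argmax()])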

  17. Location error uncertainties - an advanced using of probabilistic inverse theory

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2016-04-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave-velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a really challenging task, even if a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task for the case when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of the Shannon entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
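
    The Shannon entropy of a gridded a posteriori location PDF can be computed directly from the normalized grid values; a minimal sketch is shown below (discrete form, natural logarithm; an assumed normalization, not necessarily the author's exact definition).

      import numpy as np

      def shannon_entropy(posterior):
          """Shannon entropy (in nats) of a gridded a posteriori location PDF;
          larger values indicate a broader, less certain solution."""
          p = np.asarray(posterior, dtype=float).ravel()
          p = p / p.sum()
          p = p[p > 0]
          return float(-(p * np.log(p)).sum())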

  18. Earthquakes triggered by fluid extraction

    USGS Publications Warehouse

    Segall, P.

    1989-01-01

    Seismicity is correlated in space and time with production from some oil and gas fields where pore pressures have declined by several tens of megapascals. Reverse faulting has occurred both above and below petroleum reservoirs, and normal faulting has occurred on the flanks of at least one reservoir. The theory of poroelasticity requires that fluid extraction locally alter the state of stress. Calculations with simple geometries predict stress perturbations that are consistent with observed earthquake locations and focal mechanisms. Measurements of surface displacement and strain, pore pressure, stress, and poroelastic rock properties in such areas could be used to test theoretical predictions and improve our understanding of earthquake mechanics. -Author

  19. Locating low-frequency earthquakes using amplitude signals from seismograph stations: Examples from events at Montserrat, West Indies and from synthetic data

    NASA Astrophysics Data System (ADS)

    Jolly, A.; Jousset, P.; Neuberg, J.

    2003-04-01

    We determine locations for low-frequency earthquakes occurring prior to a collapse on June 25th, 1997 using signal amplitudes from a 7-station local seismograph network at the Soufriere Hills volcano on Montserrat, West Indies. Locations are determined by averaging the signal amplitude over the event waveform and inverting these data using an assumed amplitude decay model comprising geometrical spreading and attenuation. The resulting locations are centered beneath the active dome from 500 to 2000 m below sea level, assuming body-wave geometrical spreading and a quality factor of Q=22. Locations for the same events shift systematically shallower by about 500 m when surface-wave geometrical spreading is assumed. The locations are consistent with results obtained using arrival-time methods. The validity of the method is tested against synthetic low-frequency events constructed from a 2-D finite-difference model including visco-elastic properties. Two example events are tested: one from a point source triggered in a low-velocity conduit extending between 100 and 1100 m below the surface, and the second triggered in a conduit located 1500-2500 m below the surface. The resulting seismograms have emergent onsets and extended codas and include the effect of conduit resonance. Employing geometrical spreading and attenuation from the finite-difference modelling, we obtain locations within the respective model conduits, validating our approach. The location depths are sensitive to the assumed geometrical spreading and Q model. We can distinguish between two sources separated by about 1000 m only if we know the decay parameters.
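
    A simplified version of the amplitude-based location idea is a grid search under a body-wave decay model A(r) = A0 r^-1 exp(-pi f r / (Q v)); the sketch below uses assumed frequency, Q, and velocity values and is not the inversion code used in the study.

      import numpy as np

      def locate_by_amplitude(sta_xyz, amps, grid, freq=2.0, q=22.0, vel=2000.0):
          """Grid search for the source position that best explains observed
          mean waveform amplitudes under 1/r body-wave spreading plus
          attenuation exp(-pi*f*r/(Q*v)); A0 is solved per node in log space."""
          log_a = np.log(np.asarray(amps, dtype=float))
          sta_xyz = np.asarray(sta_xyz, dtype=float)
          best, best_misfit = None, np.inf
          for xyz in grid:
              r = np.linalg.norm(sta_xyz - xyz, axis=1)
              pred = -np.log(r) - np.pi * freq * r / (q * vel)   # log(A/A0)
              a0 = np.mean(log_a - pred)                         # best-fit log A0
              misfit = np.sum((log_a - (pred + a0)) ** 2)
              if misfit < best_misfit:
                  best, best_misfit = xyz, misfit
          return best, best_misfit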

  20. Local earthquake tomography of Scotland

    NASA Astrophysics Data System (ADS)

    Luckett, Richard; Baptie, Brian

    2015-03-01

    Scotland is a relatively aseismic region for local earthquake tomography, but 40 yr of earthquakes recorded by a good and growing network make it possible. A careful selection is made from the earthquakes located by the British Geological Survey (BGS) over the last four decades to provide a data set maximising arrival-time accuracy and ray-path coverage of Scotland. A large number of 1-D velocity models with different layer geometries are considered and differentiated by employing quarry blasts as ground-truth events. Then, SIMULPS14 is used to produce a robust 3-D tomographic P-wave velocity model for Scotland. In areas of high resolution the model shows good agreement with previously published interpretations of seismic refraction and reflection experiments. However, the model shows relatively little lateral variation in seismic velocity except at shallow depths, where sedimentary basins such as the Midland Valley are apparent. At greater depths, higher velocities in the northwest parts of the model suggest that the thickness of the crust increases towards the south and east. This observation is also in agreement with previous studies. Quarry blasts used as ground-truth events and relocated with the preferred 3-D model are shown to be located markedly more accurately than when located with the existing BGS 1-D velocity model.

  1. Source models of M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake by near-field tsunami modeling

    NASA Astrophysics Data System (ADS)

    Kubota, T.; Hino, R.; Inazu, D.; Saito, T.; Iinuma, T.; Suzuki, S.; Ito, Y.; Ohta, Y.; Suzuki, K.

    2012-12-01

    We estimated source models of the small-amplitude tsunamis associated with M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake using near-field records of the tsunamis recorded by ocean bottom pressure gauges (OBPs). The largest (Mw=7.3) foreshock of the Tohoku-Oki earthquake occurred on 9 March, two days before the mainshock. The tsunami associated with the foreshock was clearly recorded by seven OBPs, as was the coseismic vertical deformation of the seafloor. Assuming a planar fault along the plate boundary as the source, the OBP records were inverted for the slip distribution. As a result, most of the coseismic slip was found to be concentrated in an area about 40 x 40 km in size located to the northwest of the epicenter, suggesting downdip rupture propagation. The seismic moment from our tsunami waveform inversion is 1.4 x 10^20 Nm, equivalent to Mw 7.3. On 10 July 2011, an earthquake of Mw 7.0 occurred near the hypocenter of the mainshock. Its relatively deep focus and strike-slip focal mechanism indicate that this earthquake was an intraslab earthquake. The earthquake was associated with a small-amplitude tsunami. Using the OBP records, we estimated a model of the initial sea-surface height distribution. Our tsunami inversion showed that a paired pattern of uplift and subsidence lobes was required to explain the observed tsunami waveforms. The spatial pattern of the seafloor deformation is consistent with the oblique strike-slip solution obtained by the seismic data analyses. The location and strike of the hinge line separating the uplift and subsidence zones correspond well to the linear distribution of aftershocks determined using local OBS data (Obana et al., 2012).

  2. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  3. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  4. Using earthquake clusters to identify fracture zones at Puna geothermal field, Hawaii

    NASA Astrophysics Data System (ADS)

    Lucas, A.; Shalev, E.; Malin, P.; Kenedi, C. L.

    2010-12-01

    The actively producing Puna geothermal system (PGS) is located on the Kilauea East Rift Zone (ERZ), which extends out from the active Kilauea volcano on Hawaii. In the Puna area the rift trend is identified as NE-SW from surface expressions of normal faulting with a corresponding strike; at PGS the surface expression offsets in a left step, but no rift-perpendicular faulting is observed. An eight-station borehole seismic network has been installed in the area of the geothermal system. Since June 2006, a total of 6162 earthquakes have been located close to or inside the geothermal system. The spread of earthquake locations follows the rift trend, but down rift to the NE of PGS almost no earthquakes are observed. Most earthquakes located within the PGS range between 2 and 3 km depth. Up rift to the SW of PGS the number of events decreases and the depth range increases to 3-4 km. All initial locations used Hypoinverse71 and showed no trends other than the dominant rift-parallel one. Double-difference relocation of all earthquakes, using both catalog and cross-correlation differential times, identified one large cluster but could not conclusively identify trends within the cluster. A large number of earthquake waveforms showed identifiable shear-wave splitting. For five of the six stations where shear-wave splitting was observed, the dominant polarization direction was rift-parallel. Two of the five stations also showed a smaller rift-perpendicular signal. The sixth station (located close to the area of the rift offset) displayed a N-S polarization, approximately halfway between rift-parallel and rift-perpendicular. The shear-wave splitting time delays indicate that fracture density is higher at the PGS compared to the surrounding ERZ. Correlation-coefficient clustering with independent P- and S-wave windows was used to identify clusters based on similar earthquake waveforms. In total, 40 localized clusters containing ten or more events were identified. The largest cluster was located in the

  5. Earthquake behavior along the Levant fault from paleoseismology (Invited)

    NASA Astrophysics Data System (ADS)

    Klinger, Y.; Le Beon, M.; Wechsler, N.; Rockwell, T. K.

    2013-12-01

    The Levant fault is a major, 1200-km-long continental structure that bounds the Arabian plate to the west. The finite offset of this left-lateral strike-slip fault is estimated to be 105 km for the section located south of the restraining bend corresponding roughly to Lebanon. Along this southern section the slip-rate has been estimated over a large range of time scales, from a few years to a few hundred thousand years. Over these different time scales, studies agree on a slip-rate of 5 ± 2 mm/yr. The southern section of the Levant fault is particularly attractive for studying earthquake behavior through time for several reasons: (1) the fault geometry is simple and well constrained; (2) the fault system is isolated and does not interact with obvious neighboring fault systems; (3) the Middle East, where the Levant fault is located, is the region of the world with the longest and most complete historical record of past earthquakes. About 30 km north of the city of Aqaba, we opened a trench in the southern part of the Yotvata playa, along the Wadi Araba fault segment. The stratigraphy presents silty sand playa units alternating with coarser sand sediments from alluvial fans flowing westwards from the Jordan plateau. Two fault zones can be recognized in the trench, and a minimum of 8 earthquakes can be identified, based on upward terminations of ground ruptures. Dense 14C dating through the entire exposure allows the four most recent events to be matched with historical earthquakes in AD 1458, AD 1212, AD 1068 and AD 748. The size of the ground ruptures suggests a bimodal distribution of earthquakes, with some earthquakes rupturing the entire Wadi Araba segment and others ending in the extensional jog forming the playa. The timing of earthquakes shows that no earthquake has occurred at this site for about 600 years, suggesting earthquake clustering along this section of the fault and the potential for a large earthquake in the near future. 3D paleoseismological trenches at the Beteiha

  6. The EM Earthquake Precursor

    NASA Astrophysics Data System (ADS)

    Jones, K. B., II; Saxton, P. T.

    2013-12-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides in what exactly is forecastable and in the direction of EM investigation. After the 1989 Loma Prieta Earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate to an identifiable precursor. Secondly, there is neither a networked array for finding any epicentral locations, nor have there been any attempts to find even one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February, 2013, detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two

  7. Seismic databases and earthquake catalogue of the Caucasus

    NASA Astrophysics Data System (ADS)

    Godoladze, Tea; Javakhishvili, Zurab; Tvaradze, Nino; Tumanova, Nino; Jorjiashvili, Nato; Gok, Rengen

    2016-04-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms~7.0, Io=9); the Lechkhumi-Svaneti earthquake of 1350 (Ms~7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms~6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms~6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms~6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the 1991 Ms=7.0 Racha earthquake, the largest event ever recorded in the region; the 1992 M=6.5 Barisakho earthquake; and the 1988 Ms=6.9 Spitak, Armenia earthquake (100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of various national networks (Georgia ~25 stations, Azerbaijan ~35 stations, Armenia ~14 stations). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. A catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences, Ilia State University). The catalog consists of more than 80,000 events. Together with our colleagues from Armenia, Azerbaijan and Turkey, a database of Caucasus seismic events was compiled. We tried to improve the locations of the events and to calculate moment magnitudes for events larger than magnitude 4 in order to obtain a unified magnitude catalogue of the region. The results will serve as input for seismic hazard assessment of the region.

  8. Application of geostatistical simulation to compile seismotectonic provinces based on earthquake databases (case study: Iran)

    NASA Astrophysics Data System (ADS)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-04-01

    This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran chosen as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning bands simulation (TBSIM), was applied to generate synthetic data to improve incomplete earthquake catalogues. Then, the synthetic data were added to the traditional information to study the seismicity homogeneity and to classify the areas according to tectonic and seismic properties in order to update the seismotectonic provinces. In this paper, (i) different magnitude types in the studied catalogues have been homogenized to moment magnitude (Mw), and earthquake declustering was then carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression range from 0.4 to 1.4 decimal degrees, with the maximum range identified in the azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied to make simulated events, which yielded 68,800 synthetic events according to the spatial regression found in several directions; (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes described as very high, high, moderate, and low

  9. A new strategy for earthquake focal mechanisms using waveform-correlation-derived relative polarities and cluster analysis: Application to the 2014 Long Valley Caldera earthquake swarm

    USGS Publications Warehouse

    Shelly, David R.; Hardebeck, Jeanne L.; Ellsworth, William L.; Hill, David P.

    2016-01-01

    In microseismicity analyses, reliable focal mechanisms can typically be obtained for only a small subset of located events. We address this limitation here, presenting a framework for determining robust focal mechanisms for entire populations of very small events. To achieve this, we resolve relative P and S wave polarities between pairs of waveforms by using their signed correlation coefficients—a by-product of previously performed precise earthquake relocation. We then use cluster analysis to group events with similar patterns of polarities across the network. Finally, we apply a standard mechanism inversion to the grouped data, using either catalog or correlation-derived P wave polarity data sets. This approach has great potential for enhancing analyses of spatially concentrated microseismicity such as earthquake swarms, mainshock-aftershock sequences, and industrial reservoir stimulation or injection-induced seismic sequences. To demonstrate its utility, we apply this technique to the 2014 Long Valley Caldera earthquake swarm. In our analysis, 85% of the events (7212 out of 8494 located by Shelly et al. [2016]) fall within five well-constrained mechanism clusters, more than 12 times the number with network-determined mechanisms. Of the earthquakes we characterize, 3023 (42%) have magnitudes smaller than 0.0. We find that mechanism variations are strongly associated with corresponding hypocentral structure, yet mechanism heterogeneity also occurs where it cannot be resolved by hypocentral patterns, often confined to small-magnitude events. Small (5–20°) rotations between mechanism orientations and earthquake location trends persist when we apply 3-D velocity models and might reflect a geometry of en echelon, interlinked shear, and dilational faulting.
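
    The grouping step can be illustrated with a generic hierarchical clustering of network-wide polarity patterns, using the fraction of commonly observed stations with opposite sign as the inter-event distance; this is a schematic stand-in for the cluster analysis described above, not the authors' implementation.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage

      def cluster_polarity_patterns(polarities, max_dist=0.4):
          """Group events with similar network-wide polarity patterns.
          `polarities` is an (n_events, n_stations) array of +1/-1 (0 = not
          observed); events in one cluster can share a composite inversion."""
          n = polarities.shape[0]
          d = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1, n):
                  both = (polarities[i] != 0) & (polarities[j] != 0)
                  if both.any():
                      # distance = fraction of shared stations with opposite sign
                      d[i, j] = d[j, i] = np.mean(polarities[i][both] != polarities[j][both])
                  else:
                      d[i, j] = d[j, i] = 1.0
          condensed = d[np.triu_indices(n, k=1)]
          tree = linkage(condensed, method="average")
          return fcluster(tree, t=max_dist, criterion="distance")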

  10. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  11. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    USGS Publications Warehouse

    Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.

    2014-01-01

    in the Los Angeles (LA) basin. These thresholds are used to construct a simple and robust EEW algorithm: to declare a warning, the algorithm only needs to locate the earthquake and to verify that the corresponding magnitude threshold is exceeded. The models predict that a relatively moderate M6.5–7 earthquake along the Palos Verdes, Newport-Inglewood/Rose Canyon, Elsinore or San Jacinto faults with a rupture propagating towards LA could cause ‘very strong’ to ‘severe’ shaking in the LA basin; however, warning times for these events could exceed 30 s.

  12. Rescaled earthquake recurrence time statistics: application to microrepeaters

    NASA Astrophysics Data System (ADS)

    Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru

    2009-01-01

    Slip on major faults primarily occurs during `characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the short sequences of characteristic earthquakes that are available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions; in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes which recur at the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results of the analysis of recurrence times for several microrepeater sequences from Parkfield, CA, as well as NE Japan. We find that, once the respective sequence can be considered to be of sufficient stationarity, the statistics can be well fitted by either a Weibull or a log-normal distribution. We clearly demonstrate this fact by our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
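
    A simplified variant of the rescaled-combination idea, dividing each sequence by its own mean before pooling and then fitting Weibull and log-normal distributions, is sketched below with synthetic recurrence times; the paper's actual rescaling uses means and standard deviations, so this is only an approximation of the technique.

      import numpy as np
      from scipy import stats

      def pool_rescaled(sequences):
          """Pool recurrence-time sequences after rescaling each by its own
          mean (a simplified stand-in for the paper's mean/std rescaling)."""
          return np.concatenate([np.asarray(s, dtype=float) / np.mean(s)
                                 for s in sequences])

      # Hypothetical microrepeater recurrence times (years) from three sequences.
      rng = np.random.default_rng(3)
      seqs = [rng.weibull(2.0, size=n) * c for n, c in [(25, 1.2), (40, 2.5), (30, 0.8)]]
      pooled = pool_rescaled(seqs)

      shape, loc, scale = stats.weibull_min.fit(pooled, floc=0)
      mu, sigma = stats.norm.fit(np.log(pooled))          # log-normal fit
      print("Weibull shape:", round(shape, 2), " log-normal sigma:", round(sigma, 2))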

  13. Monitoring the Earthquake source process in North America

    USGS Publications Warehouse

    Herrmann, Robert B.; Benz, H.; Ammon, C.J.

    2011-01-01

    With the implementation of the USGS National Earthquake Information Center Prompt Assessment of Global Earthquakes for Response system (PAGER), rapid determination of earthquake moment magnitude is essential, especially for earthquakes that are felt within the contiguous United States. We report an implementation of moment tensor processing for application to broad, seismically active areas of North America. This effort focuses on the selection of regional crustal velocity models, codification of data quality tests, and the development of procedures for rapid computation of the seismic moment tensor. We systematically apply these techniques to earthquakes with reported magnitude greater than 3.5 in continental North America that are not associated with a tectonic plate boundary. Using the 0.02-0.10 Hz passband, we can usually determine, with few exceptions, moment tensor solutions for earthquakes with Mw as small as 3.7. The threshold is significantly influenced by the density of stations, the location of the earthquake relative to the seismic stations and, of course, the signal-to-noise ratio. With the existing permanent broadband stations in North America operated for rapid earthquake response, the seismic moment tensor of most earthquakes that are Mw 4 or larger can be routinely computed. As expected, the nonuniform spatial pattern of these solutions reflects the seismicity pattern. However, the orientation of the direction of maximum compressive stress and the predominant style of faulting is spatially coherent across large regions of the continent.

  14. A moment-tensor catalog for intermediate magnitude earthquakes in Mexico

    NASA Astrophysics Data System (ADS)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Martínez-Peláez, Liliana; Franco, Sara; Iglesias Mendoza, Arturo

    2016-04-01

    Located among five tectonic plates, Mexico is one of the world's most seismically active regions. Earthquake focal mechanisms provide important information on the active tectonics. A widespread technique for estimating the earthquake magnitude and focal mechanism is the inversion for the moment tensor, obtained by minimizing a misfit function that measures the difference between synthetic and observed seismograms. An important element in the estimation of the moment tensor is an appropriate velocity model, which allows for the calculation of accurate Green's functions, so that the differences between observed and synthetic seismograms are due to the source of the earthquake rather than to the velocity model. However, calculating accurate synthetic seismograms becomes progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes (M>5.0) excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these events, using 1D velocity models to compute Green's functions works well, and they are well characterized by the seismic moment tensors reported in global catalogs (e.g., USGS fast moment tensor solutions and GCMT). The opposite occurs for small and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle. Accurately modeling the Green's functions for the smaller events in a large heterogeneous area requires 3D or regionalized 1D models. To obtain rapid estimates of earthquake magnitude, the National Seismological Survey in Mexico (Servicio Sismológico Nacional, SSN) automatically calculates seismic moment tensors for events in Mexican territory (Franco et al., 2002; Nolasco-Carteño, 2006). However, for intermediate-magnitude and small earthquakes the signal-to-noise ratio can be low at many of the seismic stations, and without careful selection and filtering of the data, obtaining a stable focal mechanism
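
    The misfit function at the heart of such moment-tensor inversions is often a normalized least-squares difference between observed and synthetic seismograms; a minimal sketch of that measure (not the SSN's actual implementation) is given below.

      import numpy as np

      def waveform_misfit(observed, synthetic):
          """Normalized least-squares misfit between observed and synthetic
          seismograms, summed over stations/components (paired arrays)."""
          num = sum(np.sum((o - s) ** 2) for o, s in zip(observed, synthetic))
          den = sum(np.sum(o ** 2) for o in observed)
          return num / den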

  15. Seismicity in the source areas of the 1896 and 1933 Sanriku earthquakes and implications for large near-trench earthquake faults

    NASA Astrophysics Data System (ADS)

    Obana, Koichiro; Nakamura, Yasuyuki; Fujie, Gou; Kodaira, Shuichi; Kaiho, Yuka; Yamamoto, Yojiro; Miura, Seiichi

    2018-03-01

    In the northern part of the Japan Trench, the 1933 Showa-Sanriku earthquake (Mw 8.4), an outer-trench, normal-faulting earthquake, occurred 37 yr after the 1896 Meiji-Sanriku tsunami earthquake (Mw 8.0), a shallow, near-trench, plate-interface rupture. Tsunamis generated by both earthquakes caused severe damage along the Sanriku coast. Precise locations of earthquakes in the source areas of the 1896 and 1933 earthquakes have not previously been obtained because they occurred at considerable distances from the coast in deep water beyond the maximum operational depth of conventional ocean bottom seismographs (OBSs). In 2015, we incorporated OBSs designed for operation in deep water (ultradeep OBSs) in an OBS array during two months of seismic observations in the source areas of the 1896 and 1933 Sanriku earthquakes to investigate the relationship of seismicity there to outer-rise normal-faulting earthquakes and near-trench tsunami earthquakes. Our analysis showed that seismicity during our observation period occurred along three roughly linear trench-parallel trends in the outer-trench region. Seismic activity along these trends likely corresponds to aftershocks of the 1933 Showa-Sanriku earthquake and the Mw 7.4 normal-faulting earthquake that occurred 40 min after the 2011 Tohoku-Oki earthquake. Furthermore, changes of the clarity of reflections from the oceanic Moho on seismic reflection profiles and low-velocity anomalies within the oceanic mantle were observed near the linear trends of the seismicity. The focal mechanisms we determined indicate that an extensional stress regime extends to about 40 km depth, below which the stress regime is compressional. These observations suggest that rupture during the 1933 Showa-Sanriku earthquake did not extend to the base of the oceanic lithosphere and that compound rupture of multiple or segmented faults is a more plausible explanation for that earthquake. The source area of the 1896 Meiji-Sanriku tsunami earthquake is

  16. Constraints on recent earthquake source parameters, fault geometry and aftershock characteristics in Oklahoma

    NASA Astrophysics Data System (ADS)

    McNamara, D. E.; Benz, H.; Herrmann, R. B.; Bergman, E. A.; McMahon, N. D.; Aster, R. C.

    2014-12-01

    In late 2009, the seismicity of Oklahoma increased dramatically. The largest of these earthquakes was a series of three damaging events (Mw 4.8, 5.6, 4.8) that occurred over a span of four days in November 2011 near the town of Prague in central Oklahoma. Studies suggest that these earthquakes were induced by reactivation of the Wilzetta fault due to the disposal of waste water from hydraulic fracturing ("fracking") and other oil and gas activities. The Wilzetta fault is a northeast-trending vertical strike-slip fault that is a well-known structural trap for oil and gas. Since the November 2011 Prague sequence, thousands of small to moderate (M2-M4) earthquakes have occurred throughout central Oklahoma. The most active regions are located near the towns of Stillwater and Medford in north-central Oklahoma, and Guthrie, Langston and Jones near Oklahoma City. The USGS, in collaboration with the Oklahoma Geological Survey and the University of Oklahoma, has responded by deploying numerous temporary seismic stations in the region in order to record the vigorous aftershock sequences. In this study we use data from the temporary seismic stations to relocate all Oklahoma earthquakes in the USGS National Earthquake Information Center catalog using a multiple-event approach known as hypocentroidal decomposition, which locates earthquakes with decreased uncertainty relative to one another. Modeling from this study allows us to constrain the detailed geometry of the reactivated faults, as well as source parameters (focal mechanisms, stress drop, rupture length) for the larger earthquakes. Preliminary results from the November 2011 Prague sequence suggest that the subsurface rupture lengths of the largest earthquakes are anomalously long, with very low stress drop. We also observe very high Q (~1000 at 1 Hz), which explains the large felt areas, and we find a relatively low b-value and a rapid decay of aftershocks.
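
    The b-values referred to above are commonly estimated with the Aki/Utsu maximum-likelihood formula; a minimal sketch, assuming a known completeness magnitude and bin width, is given below (an illustration of the standard estimator, not this study's code).

      import numpy as np

      def b_value_mle(magnitudes, mc, dm=0.1):
          """Maximum-likelihood b-value (Aki/Utsu) for events at or above the
          completeness magnitude mc, with a correction for bin width dm."""
          m = np.asarray(magnitudes, dtype=float)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))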

  17. Earthquake insurance pricing: a risk-based approach.

    PubMed

    Lin, Jeng-Hsiang

    2018-04-01

    Flat earthquake premiums are 'uniformly' set for a variety of buildings in many countries, neglecting the fact that the risk of earthquake damage to buildings depends on a wide range of factors. How these factors should influence insurance premiums merits further study. Proposed herein is a risk-based approach to estimating the earthquake insurance rates of buildings. Applications of the approach to buildings located in Taipei City, Taiwan, were examined, and the earthquake insurance rates for the buildings investigated were calculated and tabulated. For insurance rating, the buildings were classified into 15 model building types according to their construction materials and building height. Seismic design levels were also considered to account for the effects of seismic zone and construction year. This paper may be of interest to insurers, actuaries, and private and public sectors of insurance. © 2018 The Author(s). Disasters © Overseas Development Institute, 2018.
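
    A minimal sketch of how a risk-based rate of this kind can be assembled is shown below: an expected annual loss ratio is computed from a discretized hazard curve and a per-building-type vulnerability (mean damage ratio) function, then loaded for expenses. The hazard rates, damage ratios, and loading factor are assumptions for illustration, not values from the paper.

    ```python
    # Annual rate of exceedance for each intensity level (events/yr, assumed)
    annual_rate_of_exceedance = {"VI": 0.10, "VII": 0.03, "VIII": 0.008, "IX": 0.002}
    # Mean damage ratio (fraction of replacement value lost) per intensity, assumed
    mean_damage_ratio = {"VI": 0.01, "VII": 0.05, "VIII": 0.20, "IX": 0.50}

    levels = ["VI", "VII", "VIII", "IX"]

    # Convert exceedance rates into occurrence rates per intensity bin
    occurrence = {}
    for i, lvl in enumerate(levels):
        upper = annual_rate_of_exceedance[levels[i + 1]] if i + 1 < len(levels) else 0.0
        occurrence[lvl] = annual_rate_of_exceedance[lvl] - upper

    # Expected annual loss ratio = pure premium rate (per unit insured value)
    eal = sum(occurrence[lvl] * mean_damage_ratio[lvl] for lvl in levels)
    loading = 0.3  # expense/profit/risk margin (assumed)
    premium_rate = eal * (1 + loading)

    print(f"pure rate = {eal:.4%} of insured value, gross rate = {premium_rate:.4%}")
    ```

    The same calculation would be repeated for each of the 15 model building types and seismic design levels, with the hazard curve adjusted by seismic zone.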

  18. MyShake: Smartphone-based detection and analysis of Oklahoma earthquakes

    NASA Astrophysics Data System (ADS)

    Kong, Q.; Allen, R. M.; Schreier, L.

    2016-12-01

    MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing (myshake.berkeley.edu). It uses the accelerometer data from phones to detect earthquake-like motion, and then uploads triggers and waveform data to a server for aggregation of the results. Since the public release in Feb 2016, more than 200,000 android-phone owners have installed the app, and the global network has recorded more than 300 earthquakes. In Oklahoma, there are about 200 active users each day providing enough data for the network to detect earthquakes and for us to perform analysis of the events. MyShake has recorded waveform data for M2.6 to M5.8 earthquakes in the state. For the September 3, 2016, M5.8 earthquake 14 phones detected the event and we can use the waveforms to determine event characteristics. MyShake data provides a location 3.95 km from the ANSS location and a magnitude of 5.7. We can also use MyShake data to estimate a stress drop of 7.4 MPa. MyShake is still a rapidly expanding network that has the ability to grow by thousands of stations/phones in a matter of hours as public interest increases. These initial results suggest that the data will be useful for a variety of scientific studies of induced seismicity phenomena in Oklahoma as well as having the potential to provide earthquake early warning in the future.
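
    A stress-drop estimate of the kind quoted above is commonly obtained from a circular source model; a hedged sketch assuming the Brune (1970) model is given below. The corner frequency, shear-wave speed, and moment-magnitude relation used here are illustrative assumptions, not the values or method used by the MyShake team.

    ```python
    def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0, k=0.37):
        """Stress drop (Pa) from seismic moment (N*m) and corner frequency (Hz)."""
        radius = k * beta_ms / fc_hz           # Brune source radius (m)
        return 7.0 * m0_nm / (16.0 * radius ** 3)

    mw = 5.8                                    # moment magnitude of the event
    m0 = 10 ** (1.5 * mw + 9.1)                 # moment (N*m), Hanks & Kanamori (1979)
    fc = 0.35                                   # assumed corner frequency (Hz)

    print(f"M0 = {m0:.2e} N*m, stress drop = {brune_stress_drop(m0, fc) / 1e6:.1f} MPa")
    ```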

  19. Geophysical setting of the February 21, 2008 Mw 6 Wells earthquake, Nevada, and implications for earthquake hazards

    USGS Publications Warehouse

    Ponce, David A.; Watt, Janet T.; Bouligand, C.

    2011-01-01

    We utilize gravity and magnetic methods to investigate the regional geophysical setting of the Wells earthquake. In particular, we delineate major crustal structures that may have played a role in the location of the earthquake and discuss the geometry of a nearby sedimentary basin that may have contributed to observed ground shaking. The February 21, 2008 Mw 6.0 Wells earthquake, centered about 10 km northeast of Wells, Nevada, caused considerable damage to local buildings, especially in the historic old town area. The earthquake occurred on a previously unmapped normal fault, and preliminary relocated events indicate a fault plane dipping about 55 degrees to the southeast. The epicenter lies near the intersection of major Basin and Range normal faults along the Ruby Mountains and Snake Mountains, and strike-slip faults in the southern Snake Mountains. Regionally, the Wells earthquake epicenter is aligned with a crustal-scale boundary along the edge of a basement gravity high that correlates with the Ruby Mountains fault zone. The Wells earthquake also occurred near a geophysically defined strike-slip fault that offsets buried plutonic rocks by about 30 km. In addition, a new depth-to-basement map, derived from the inversion of gravity data, indicates that the Wells earthquake and most of its associated aftershock sequence lie below a small oval- to rhomboid-shaped basin that reaches a depth of about 2 km. Although the basin is of limited areal extent, it could have contributed to increased ground shaking in the vicinity of the city of Wells, Nevada, through basin amplification of seismic waves.
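
    For intuition on how a gravity low maps to basin thickness, a first-order estimate can be made with the infinite (Bouguer) slab approximation; this is a much cruder calculation than the inversion used in the study, and the anomaly value and density contrast below are assumptions chosen only for illustration.

    ```python
    import math

    G = 6.674e-11             # gravitational constant (m^3 kg^-1 s^-2)
    anomaly_mgal = -20.0      # residual gravity low over the basin (mGal, assumed)
    density_contrast = -400.0 # sediment minus basement density (kg/m^3, assumed)

    delta_g = anomaly_mgal * 1e-5                       # mGal -> m/s^2
    thickness = delta_g / (2 * math.pi * G * density_contrast)  # slab thickness (m)

    print(f"approximate basin thickness: {thickness / 1000:.1f} km")
    ```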

  20. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searches over a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accuracy gained from a large database is penalized by a significant processing delay. We propose using a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of nearest-neighbor searches for predictions. We evaluated the performance of the KD tree within the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground-motion parameters, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Organizing the database with a KD tree reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall warning-delivery time for EEW.
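
    The speedup described above comes from replacing an exhaustive scan with a tree-based nearest-neighbor query. The sketch below shows the general pattern with scipy's cKDTree; the feature dimensions, database size, and stored PGA values are made up, and this is not the Gutenberg Algorithm implementation itself.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    n_records, n_features = 100_000, 12                   # e.g. 12 filter-bank amplitudes
    database = rng.standard_normal((n_records, n_features))
    pga_db = rng.lognormal(mean=-2.0, sigma=1.0, size=n_records)  # stored PGA values (g)

    tree = cKDTree(database)                              # built once, offline

    def predict_pga(features, k=30):
        """Predict PGA as the mean over the k nearest database records."""
        _, idx = tree.query(features, k=k)                # ~O(log N) per query vs O(N) brute force
        return pga_db[idx].mean()

    incoming = rng.standard_normal(n_features)            # features from a new trigger
    print(f"predicted PGA ~ {predict_pga(incoming):.3f} g")
    ```

    Building the tree is a one-time offline cost; each real-time query then avoids touching the full database, which is where the reported reduction in search time comes from.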