Science.gov

Sample records for California earthquake probabilities

  1. The Working Groups on California Earthquake Probabilities

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2005-12-01

    The most official time-dependent earthquake forecasts for any region in the United States have been those of the Working Groups on California Earthquake Probabilities (1988, 1989, 1995, and 2002). This presentation will give an overview of each of these efforts, including basic model assumptions, innovations, implications, and some controversies that ensued. Challenges and potential issues in a newly formed Working Group on California Earthquake Probabilities will also be discussed. One goal of this new effort is to provide a statewide model that can be used by the California Earthquake Authority in setting earthquake insurance rates. This potential use means a forecast duration of one to five years is desirable (as opposed to the 30-year duration of previous working groups), which, combined with the fact that smaller earthquakes can cause significant losses, means that aftershocks and earthquake triggering in general must be considered carefully. Other potential innovations in the new effort include: 1) the inclusion of GPS constraints by using kinematically consistent deformation models; 2) abandoning the strict segmentation paradigm and allowing fault-to-fault ruptures; 3) a more systematic approach to using paleoseismic data; and 4) having the model exist as an adaptable and extensible entity where modifications can be made in real time as events unfold.

  2. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

    The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 earthquakes in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the earthquake hazard and endorse the use of all credible earthquake probability models for the region, including the empirical model, with appropriate weighting, as was done in WGCEP (2002).

  3. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M ≥ 6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M = 6.65 (A = 537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
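
    Because both preferred relations are log-linear in rupture area, they are easy to evaluate. The Python sketch below uses coefficient values commonly quoted for the Ellsworth B relation (M = 4.2 + log10 A) and for the bilinear Hanks and Bakun relation (break near A = 537 km²); the constants, the example fault dimensions, and the simple equal-weight average are illustrative assumptions only, since the actual forecast carries the two relations as separate, equally weighted logic-tree branches.

        import math

        def ellsworth_b(area_km2):
            """Ellsworth B magnitude-area relation (assumed form: M = 4.2 + log10 A, A in km^2)."""
            return 4.2 + math.log10(area_km2)

        def hanks_bakun(area_km2):
            """Bilinear Hanks and Bakun relation with an assumed break at A = 537 km^2."""
            if area_km2 <= 537.0:
                return 3.98 + math.log10(area_km2)
            return 3.07 + (4.0 / 3.0) * math.log10(area_km2)

        def equal_weight_magnitude(length_km, width_km):
            """Equal-weight summary magnitude for a fault of length L and down-dip width W."""
            area = length_km * width_km  # A = L * W
            return 0.5 * ellsworth_b(area) + 0.5 * hanks_bakun(area)

        # Hypothetical example: a 100-km-long fault with a 12-km seismogenic width.
        print(round(equal_weight_magnitude(100.0, 12.0), 2))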

  4. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J., II; Bryant, W.A.

    2008-01-01

    This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated among the USGS, CGS, and SCEC. SCEC provided the database development and editing tools, in consultation with the USGS office in Golden, Colorado. This database has been implemented in Oracle and supports electronic, on-the-fly access. A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database and any locked-down official releases (e.g., those used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database (http://earthquake.usgs.gov/regional/qfaults/). CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.

  5. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    Summary: To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M = 6.65 (A = 537 km²).

  6. California earthquake history

    USGS Publications Warehouse

    Toppozada, T.; Branum, D.

    2004-01-01

    This paper presents an overview of the advancement in our knowledge of California's earthquake history since ~1800, and especially during the last 30 years. We first review the basic statewide research on earthquake occurrences that was published from 1928 through 2002, to show how the current catalogs and their levels of completeness have evolved with time. Then we review some of the significant new results in specific regions of California, and some of what remains to be done. Since 1850, 167 potentially damaging earthquakes of M ≥ 6 or larger have been identified in California and its border regions, indicating an average rate of 1.1 such events per year. Table I lists the earthquakes of M ≥ 6.5, and the earthquakes of M 6 to 6.5 that were also destructive, since 1812 in California and its border regions, indicating an average rate of one such event every ~5 years. Many of these occurred before 1932 when epicenters and magnitudes started to be determined routinely using seismographs in California. The number of these early earthquakes is probably incomplete in sparsely populated remote parts of California before ~1870. For example, 6 of the 7 pre-1873 events in table I are of M ≥ 7, suggesting that other earthquakes of M 6.5 to 6.9 occurred but were not properly identified, or were not destructive. The epicenters and magnitudes (M) of the pre-instrumental earthquakes were determined from isoseismal maps that were based on the Modified Mercalli Intensity of shaking (MMI) at the communities that reported feeling the earthquakes. The epicenters were estimated to be in the regions of most intense shaking, and values of M were estimated from the extent of the areas shaken at various MMI levels. MMI VII or greater shaking is the threshold of damage to weak buildings. Certain areas in the regions of Los Angeles, San Francisco, and Eureka were each shaken repeatedly at MMI VII or greater at least six times since ~1812, as depicted by Toppozada and Branum (2002, fig. 19).

  7. Long‐term time‐dependent probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Ned; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J., II; Zeng, Yuehua

    2015-01-01

    The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too short to produce M ≥ 6.7 events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.

  8. Earthquake education in California

    USGS Publications Warehouse

    MacCabe, M. P.

    1980-01-01

    In a survey of community response to the earthquake threat in southern California, Ralph Turner and his colleagues in the Department of Sociology at the University of California, Los Angeles, found that the public very definitely wants to be educated about the kinds of problems and hazards they can expect during and after a damaging earthquake; and they also want to know how they can prepare themselves to minimize their vulnerability. Decisionmakers, too, are recognizing this new wave of public concern. 

  9. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

    2007 Working Group on California Earthquake Probabilities

    2008-01-01

    California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).

  10. Paleoseismic event dating and the conditional probability of large earthquakes on the southern San Andreas fault, California

    USGS Publications Warehouse

    Biasi, G.P.; Weldon, R.J., II; Fumal, T.E.; Seitz, G.G.

    2002-01-01

    We introduce a quantitative approach to paleoearthquake dating and apply it to paleoseismic data from the Wrightwood and Pallett Creek sites on the southern San Andreas fault. We illustrate how stratigraphic ordering, sedimentological, and historical data can be used quantitatively in the process of estimating earthquake ages. Calibrated radiocarbon age distributions are used directly from layer dating through recurrence intervals and recurrence probability estimation. The method does not eliminate subjective judgements in event dating, but it does provide a means of systematically and objectively approaching the dating process. Date distributions for the most recent 14 events at Wrightwood are based on sample and contextual evidence in Fumal et al. (2002) and site context and slip history in Weldon et al. (2002). Pallett Creek event and dating descriptions are from published sources. For the five most recent events at Wrightwood, our results are consistent with previously published estimates, with generally comparable or narrower uncertainties. For Pallett Creek, our earthquake date estimates generally overlap with previous results but typically have broader uncertainties. Some event date estimates are very sensitive to details of data interpretation. The historical earthquake in 1857 ruptured the ground at both sites but is not constrained by radiocarbon data. Radiocarbon ages, peat accumulation rates, and historical constraints at Pallett Creek for event X yield a date estimate in the earliest 1800s and preclude a date in the late 1600s. This event is almost certainly the historical 1812 earthquake, as previously concluded by Sieh et al. (1989). This earthquake also produced ground deformation at Wrightwood. All events at Pallett Creek, except for event T, about A.D. 1360, and possibly event I, about A.D. 960, have corresponding events at Wrightwood with some overlap in age ranges. Event T falls during a period of low sedimentation at Wrightwood when conditions were not favorable for recording earthquake evidence. Previously proposed correlations of Pallett Creek X with Wrightwood W3 in the 1690s and Pallett Creek event V with W5 around 1480 (Fumal et al., 1993) appear unlikely after our dating reevaluation. Apparent internal inconsistencies among event, layer, and dating relationships around events R and V identify them as candidates for further investigation at the site. Conditional probabilities of earthquake recurrence were estimated using Poisson, lognormal, and empirical models. The presence of 12 or 13 events at Wrightwood during the same interval that 10 events are reported at Pallett Creek is reflected in mean recurrence intervals of 105 and 135 years, respectively. Average Poisson model 30-year conditional probabilities are about 20% at Pallett Creek and 25% at Wrightwood. The lognormal model conditional probabilities are somewhat higher, about 25% for Pallett Creek and 34% for Wrightwood. Lognormal variance σln estimates of 0.76 and 0.70, respectively, imply only weak time predictability. Conditional probabilities of 29% and 46%, respectively, were estimated for an empirical distribution derived from the data alone. Conditional probability uncertainties are dominated by the brevity of the event series; dating uncertainty contributes only secondarily. Wrightwood and Pallett Creek event chronologies both suggest variations in recurrence interval with time, hinting that some form of recurrence rate modulation may be at work, but formal testing shows that neither series is more ordered than might be produced by a Poisson process.
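
    The Poisson numbers quoted above follow directly from the mean recurrence intervals; the short Python check below reproduces them. It is only a back-of-the-envelope sketch and omits the paper's treatment of dating uncertainty and the lognormal and empirical models.

        import math

        def poisson_conditional(mean_recurrence_yr, window_yr=30.0):
            """Probability of at least one event in the window, assuming a Poisson process."""
            return 1.0 - math.exp(-window_yr / mean_recurrence_yr)

        # Mean recurrence intervals quoted in the abstract.
        for site, mu in [("Pallett Creek", 135.0), ("Wrightwood", 105.0)]:
            print(site, round(poisson_conditional(mu), 2))  # about 0.20 and 0.25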

  11. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large devastating events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
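
    The central step described above, turning the count of small events since the last large event into a probability, can be sketched with a Weibull law. The functional form and parameter values below are illustrative assumptions, not the fits used in the study.

        import math

        def weibull_probability(n_small, beta, k):
            """Assumed Weibull form: probability that the next large event has arrived by the
            time n_small small events have been counted since the last large one. beta is a
            characteristic count and k a shape parameter; both would be fitted to a regional
            catalog in practice and are purely illustrative here."""
            return 1.0 - math.exp(-((n_small / beta) ** k))

        # Hypothetical region in which roughly 500 small events separate large ones.
        for n in (100, 500, 1000):
            print(n, round(weibull_probability(n, beta=500.0, k=1.4), 2))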

  12. Earthquake research needed in California

    NASA Astrophysics Data System (ADS)

    Bell, Peter M.

    According to an analysis by a group of seismologists and tectonophysicists from the Lamont-Doherty Geological Observatory of Columbia University and the Seismological Laboratory of the California Institute of Technology, there is an imperative need for extensive studies of the San Andreas fault system throughout its extent within the state of California. Although there is considerable controversy surrounding the question of which segment of the San Andreas system may produce the next earthquake, C.B. Raleigh, K. Sieh, L.R. Sykes, and D.L. Anderson report that it is conceivable that the entire fault in southern California could rupture at once (Science, Sept. 17, 1982). The fears that a major earthquake may occur at any time in southern California are based on numerous statistical factors underlying the idea that "the longer it's been since the last big one, the sooner the next one will be." The timing of earthquakes along active seismic zones, particularly those that coincide with plate boundaries, seems to be directly related to the amount of displacement generated since the last such earthquake at the same location. For example, the recurrence rate of about 150 years for great earthquakes along the San Andreas fault in southern California and the displacement rate of about 3 cm per year for the segment that last ruptured, from Cholame Valley to Cajon Pass, in 1857 suggest that the next such large event may occur in the near future. On this basis Raleigh et al. (Science, sup.) conclude that "both observations mark the San Andreas fault north and east of Los Angeles as a mature seismic gap and the prime candidate for producing southern California's next great earthquake." The expected consequences are described as appalling and as having a potential for severe losses of life and property from such a great earthquake. The worst case, cited in a report issued by the National Security Council through the Federal Emergency Management Agency in 1980 for an earthquake of magnitude M = 7.5 in southern California near Long Beach, could cause the loss of 20,000 lives and $69 billion in property damage.

  13. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.

  14. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we estimate by simulations. In this scheme, each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results will be archived and posted on the RELM web site. Major problems under discussion include how to treat aftershocks, which clearly violate the variable-rate Poissonian hypotheses that we employ, and how to deal with the temporal variations in catalog completeness that follow large earthquakes.
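
    In the simplest gridded-rate setting, a likelihood score of this kind is the joint Poisson log likelihood of the observed bin counts under a forecast's expected rates, and two forecasts are compared through the difference of their scores. The Python sketch below illustrates that calculation with invented bins and rates; estimating the alpha and beta error rates additionally requires simulating catalogs from each forecast, which is not shown.

        import math

        def poisson_log_likelihood(expected, observed):
            """Joint log likelihood of independent Poisson counts, one per space-magnitude bin."""
            return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
                       for lam, n in zip(expected, observed))

        # Two hypothetical forecasts over the same four bins, and the observed counts.
        forecast_a = [0.5, 1.2, 0.1, 0.3]
        forecast_b = [0.9, 0.8, 0.2, 0.2]
        observed = [1, 2, 0, 0]

        score = poisson_log_likelihood(forecast_a, observed) - poisson_log_likelihood(forecast_b, observed)
        print(score)  # positive favors forecast A; its significance is judged by simulation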

  15. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period 1 January 2006 to 31 December 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355

  16. Earthquake probabilities: theoretical assessments and reality

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2013-12-01

    It is common knowledge that earthquakes are complex phenomena whose classification and sizing remain serious problems in contemporary seismology. In general, their frequency-magnitude distribution exhibits power-law scaling. This scaling differs significantly when different time and/or space domains are considered. At the scale of a particular earthquake rupture zone, the frequency of similar-size events is usually estimated to be about once in several hundred years. Evidently, contemporary seismology does not possess enough reported instrumental data for any reliable quantification of earthquake probability at a given location of an expected event. Regrettably, most state-of-the-art theoretical approaches to assessing the probability of seismic events are based on either trivial (e.g., Poisson, periodic) or delicately designed (e.g., STEP, ETAS) models of earthquake sequences. Some of these models are evidently erroneous, some can be rejected by the existing statistics, and some are hardly testable in our lifetime. Nevertheless, such probabilistic products, including seismic hazard assessments and earthquake forecasts, when used in practice can lead to scientifically groundless advice being communicated to decision makers, and to inappropriate decisions. As a result, the population of seismic regions continues to face unexpected risks and losses. The international Global Earthquake Model (GEM) project is on the wrong track if it continues to base seismic risk estimates on the standard, mainly probabilistic, methodology for assessing seismic hazard. It is generally accepted that earthquakes are infrequent, low-probability events; however, they keep occurring in earthquake-prone areas with 100% certainty. Given the expectation of a seismic event once per hundred years, the daily probability of occurrence on a certain date may range from 0 to 100% depending on the choice of probability space (which is unknown and therefore made subjectively). How many days are needed to distinguish 0 from the average probability of 0.000027? Is it theoretically admissible to apply an average when seismic events, including mega-earthquakes, are evidently clustered in time and space and display behaviors that are far from independent? Is it possible to ignore the possibly fractal and certainly far-from-uniform spatial distribution of seismicity when mapping seismic probability density away from the empirical earthquake locus embedded in the boundaries of lithosphere blocks? These are simple questions for those who advocate the existing probabilistic products for seismic hazard assessment and forecasting. Fortunately, the situation is not hopeless, thanks to deterministic pattern-recognition approaches applied to the available geological evidence, specifically when the aim is to predict what is predictable rather than the exact size, site, date, and probability of a target event. Understanding, through modeling, the complex non-linear dynamics of hierarchically organized block-and-fault systems has already led to methodologies of neo-deterministic seismic hazard analysis and to intermediate-term, middle- to narrow-range earthquake prediction algorithms tested in real-time applications over recent decades.

  17. Combining earthquake forecasts using differential probability gains

    NASA Astrophysics Data System (ADS)

    Shebalin, Peter N.; Narteau, Clément; Zechar, Jeremy Douglas; Holschneider, Matthias

    2014-12-01

    We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance - in terms of Molchan diagrams and likelihood - than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
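
    The update rule described above multiplies the starting rate by a local differential probability gain. In the published method the gain is estimated from the input forecast's relative predictive skill (for example, from Molchan-diagram trajectories); the Python sketch below substitutes a simple capped rate-ratio proxy for that gain, purely to illustrate the mechanics, and all numbers are invented.

        import numpy as np

        def combine_forecasts(starting_rate, input_rate, max_gain=10.0):
            """Next-generation rate: starting rate times a local differential probability gain.
            The gain is proxied here by a capped ratio of input to starting rates; the published
            method estimates it from the forecasts' relative predictive skill instead."""
            gain = np.clip(input_rate / np.maximum(starting_rate, 1e-12), 1.0 / max_gain, max_gain)
            return starting_rate * gain

        # Invented yearly rates on a tiny 2 x 2 grid of space-magnitude cells.
        starting = np.array([[0.02, 0.05], [0.10, 0.01]])  # e.g., an EEPAS-like starting forecast
        incoming = np.array([[0.04, 0.05], [0.02, 0.03]])  # e.g., an EAST-like input forecast
        print(combine_forecasts(starting, incoming))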

  18. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  19. California earthquakes: why only shallow focus?

    PubMed

    Brace, W F; Byerlee, J D

    1970-06-26

    Frictional sliding on sawcuts and faults in laboratory samples of granite and gabbro is markedly temperature-dependent. At pressures from 1 to 5 kilobars, stick-slip gave way to stable sliding as temperature was increased from 200 to 500 degrees Celsius. Increased temperature with depth could thus cause the abrupt disappearance of earthquakes noted at shallow depths in California. PMID:17759338

  20. California earthquakes: Why only shallow focus?

    USGS Publications Warehouse

    Brace, W.F.; Byerlee, J.D.

    1970-01-01

    Frictional sliding on sawcuts and faults in laboratory samples of granite and gabbro is markedly temperature-dependent. At pressures from 1 to 5 kilobars, stick-slip gave way to stable sliding as temperature was increased from 200 to 500 degrees Celsius. Increased temperature with depth could thus cause the abrupt disappearance of earthquakes noted at shallow depths in California.

  1. Psychological distress following urban earthquakes in California.

    PubMed

    Bourque, Linda B; Siegel, Judith M; Shoaf, Kimberley I

    2002-01-01

    During and following a disaster caused by a natural event, human populations are thought to be at greater risk of psychological morbidity and mortality directly attributable to increased, disaster-induced stress. Drawing both on the research of others and on that conducted at the Center for Public Health and Disaster Relief of the University of California-Los Angeles (UCLA) following California earthquakes, this paper examines the extent to which research evidence supports these assumptions. Following a brief history of disaster research in the United States, the response of persons at the time of an earthquake was examined, with particular attention to psychological morbidity; the number of deaths that can be attributed to cardiovascular events and suicides; and the extent to which, and by whom, health services are used following an earthquake. The implications of research findings for practitioners in the field are discussed. PMID:12500731

  2. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
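
    The BPT distribution named above is the inverse Gaussian distribution with shape parameter λ = μ/α², which has a closed-form CDF, so time-dependent conditional probabilities are easy to compute. The Python sketch below shows the calculation; the parameter values are illustrative and are not the Parkfield inputs used in the report.

        import math

        def normal_cdf(x):
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        def bpt_cdf(t, mu, alpha):
            """CDF of the Brownian passage time (inverse Gaussian) distribution with
            mean mu and aperiodicity alpha (shape parameter lam = mu / alpha**2)."""
            lam = mu / alpha**2
            a = math.sqrt(lam / t)
            return normal_cdf(a * (t / mu - 1.0)) + math.exp(2.0 * lam / mu) * normal_cdf(-a * (t / mu + 1.0))

        def conditional_probability(elapsed, window, mu, alpha):
            """P(event within `window` years | no event in the `elapsed` years since the last one)."""
            f_t = bpt_cdf(elapsed, mu, alpha)
            return (bpt_cdf(elapsed + window, mu, alpha) - f_t) / (1.0 - f_t)

        # Illustrative values only: mean recurrence 25 yr, alpha = 0.5, 20 yr elapsed, 1-yr window.
        print(round(conditional_probability(elapsed=20.0, window=1.0, mu=25.0, alpha=0.5), 3))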

  3. Time-dependent renewal-model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
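
    One compact way to see this result is to average the renewal conditional probability over the unknown elapsed time, weighting each possible elapsed time by the standard renewal-theory age density (1 - F(t))/μ. The Python sketch below does this numerically for a BPT recurrence model with illustrative parameters and compares the result with the Poisson value; it is a sketch of the idea, not the paper's derivation.

        import numpy as np

        def bpt_pdf(t, mu, alpha):
            """Brownian passage time (inverse Gaussian) density with mean mu and aperiodicity alpha."""
            return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

        mu, alpha, window = 100.0, 0.5, 30.0  # mean recurrence (yr), aperiodicity, forecast length (yr)
        dt = 0.01
        t = np.arange(dt, 20.0 * mu, dt)
        cdf = np.cumsum(bpt_pdf(t, mu, alpha)) * dt  # crude numerical CDF

        # Average the conditional probability (F(t+w) - F(t)) / (1 - F(t)) over the unknown
        # elapsed time t, weighted by the age density (1 - F(t)) / mu; the survivor terms cancel.
        prob_unknown_date = np.sum(np.interp(t + window, t, cdf, right=1.0) - cdf) * dt / mu
        prob_poisson = 1.0 - np.exp(-window / mu)
        print(round(prob_unknown_date, 3), round(prob_poisson, 3))  # the renewal value exceeds Poisson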

  4. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values usually appear for clustered events, such as events with foreshocks and sequences of events occurring within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values appear around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake and the probability obtained from the activation model increases as large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  5. Infrasonic observations of the Northridge, California, earthquake

    SciTech Connect

    Mutschlecner, J.P.; Whitaker, R.W.

    1994-09-01

    Infrasonic waves from the Northridge, California, earthquake of 17 January 1994 were observed at the St. George, Utah, infrasound array of the Los Alamos National Laboratory. The distance to the epicenter was 543 kilometers. The signal shows a complex character with many peaks and a long duration. An interpretation is given in terms of several modes of signal propagation and generation including a seismic-acoustic secondary source mechanism. A number of signals from aftershocks are also observed.

  6. Assigning probability gain for precursors of four large Chinese earthquakes

    SciTech Connect

    Cao, T.; Aki, K.

    1983-03-10

    We extend the concept of probability gain associated with a precursor (Aki, 1981) to a set of precursors that may be mutually dependent. Making use of a new formula, we derive a criterion for selecting precursors from a given data set in order to calculate the probability gain. The probabilities per unit time immediately before four large Chinese earthquakes are calculated. They are approximately 0.09, 0.09, 0.07 and 0.08 per day for the 1975 Haicheng (M = 7.3), 1976 Tangshan (M = 7.8), 1976 Longling (M = 7.6), and 1976 Songpan (M = 7.2) earthquakes, respectively. These results are encouraging because they suggest that the investigated precursory phenomena may have included the complete information for earthquake prediction, at least for the above earthquakes. With this method, the step-by-step approach to prediction used in China may be quantified in terms of the probability of earthquake occurrence. The ln P versus t curve (where P is the probability of earthquake occurrence at time t) shows that ln P does not increase linearly with t but rises more rapidly as the time of the earthquake approaches.
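
    For mutually independent precursors, the probability gains simply multiply (P is approximately the background probability times the product of the individual gains); the cited work generalizes this to precursors that may be dependent. The Python sketch below shows only the independence baseline, with invented numbers.

        import math

        def combined_daily_probability(background_per_day, gains):
            """Independence baseline: background probability per day times the product of the
            individual precursor gains; the cited work handles mutually dependent precursors."""
            return min(background_per_day * math.prod(gains), 1.0)

        # Invented numbers: background probability of 1e-5 per day and three precursors
        # with probability gains of 10, 30, and 25.
        print(combined_daily_probability(1e-5, [10.0, 30.0, 25.0]))  # 0.075 per day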

  7. Earthquake Simulations and Historical Patterns of Events: Forecasting the Next Great Earthquake in California

    NASA Astrophysics Data System (ADS)

    Sachs, M. K.; Rundle, J. B.; Heien, E. M.; Turcotte, D. L.; Yikilmaz, M.; Kellogg, L. H.

    2013-12-01

    The fault system in California combined with some of the United States' most densely populated regions is a recipe for devastation. It has been estimated that a repeat of the 1906 M = 7.8 San Francisco earthquake could cause as much as $84 billion in damage. Earthquake forecasting can help alleviate the effects of these events by targeting disaster relief and preparedness in regions that will need it the most. However, accurate earthquake forecasting has proven difficult. We present a forecasting technique that uses simulated earthquake catalogs generated by Virtual California and patterns of historical events. As background, we also describe internal details of the Virtual California earthquake simulator.

  8. Earthquake probabilities in the San Francisco Bay Region: 2000 to 2030 - a summary of findings

    USGS Publications Warehouse

    Working Group on California Earthquake Probabilities

    1999-01-01

    The San Francisco Bay region sits astride a dangerous “earthquake machine,” the tectonic boundary between the Pacific and North American Plates. The region has experienced major and destructive earthquakes in 1838, 1868, 1906, and 1989, and future large earthquakes are a certainty. The ability to prepare for large earthquakes is critical to saving lives and reducing damage to property and infrastructure. An increased understanding of the timing, size, location, and effects of these likely earthquakes is a necessary component in any effective program of preparedness. This study reports on the probabilities of occurrence of major earthquakes in the San Francisco Bay region (SFBR) for the three decades 2000 to 2030. The SFBR extends from Healdsburg on the northwest to Salinas on the southeast and encloses the entire metropolitan area, including its most rapidly expanding urban and suburban areas. In this study a “major” earthquake is defined as one with M≥6.7 (where M is moment magnitude). As experience from the Northridge, California (M6.7, 1994) and Kobe, Japan (M6.9, 1995) earthquakes has shown us, earthquakes of this size can have a disastrous impact on the social and economic fabric of densely urbanized areas. To reevaluate the probability of large earthquakes striking the SFBR, the U.S. Geological Survey solicited data, interpretations, and analyses from dozens of scientists representing a wide cross section of the Earth-science community (Appendix A). The primary approach of this new Working Group (WG99) was to develop a comprehensive, regional model for the long-term occurrence of earthquakes, founded on geologic and geophysical observations and constrained by plate tectonics. The model considers a broad range of observations and their possible interpretations. Using this model, we estimate the rates of occurrence of earthquakes and 30-year earthquake probabilities. Our study considers a range of magnitudes for earthquakes on the major faults in the region—an innovation over previous studies of the SFBR that considered only a small number of potential earthquakes of fixed magnitude.

  9. Bayesian probabilities of earthquake occurrences in Longmenshan fault system (China)

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Zhang, Keyin; Gan, Qigang; Zhou, Wen; Xiong, Liang; Zhang, Shihua; Liu, Chao

    2015-01-01

    China has a long history of earthquake records, and the Longmenshan fault system (LFS) is a famous earthquake zone. We believed that the LFS could be divided into three seismogenic zones (north, central, and south zones) based on the geological structures and the earthquake catalog. We applied the Bayesian probability method using the extreme-value distribution of earthquake occurrences to estimate the seismic hazard in the LFS. The seismic moment, slip rate, earthquake recurrence rate, and magnitude were considered as the basic parameters for computing the Bayesian prior estimates of the seismicity. These estimates were then updated in terms of Bayes' theorem and historical estimates of seismicity in the LFS. Generally speaking, the north zone appears relatively quiet compared with the central and south zones. The central zone is the most dangerous; however, the periodicity of earthquake occurrences for Ms = 8.0 is quite long (1,250 to 5,000 years). The selection of the upper-bound probable magnitude influences the result, and the upper-bound magnitude of the south zone may be 7.5. We obtained the empirical relationship of magnitude conversion for Ms and ML, the value of the magnitude of completeness Mc (3.5), and the Gutenberg-Richter b value before applying the Bayesian extreme-value distribution of earthquake occurrences method.
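
    As a simplified illustration of the Bayesian updating step, and not the extreme-value formulation used in the study, one can treat the occurrence of large events in a zone as a Poisson process whose rate carries a Gamma prior built from the geologic estimates, then update that prior with the historical count. All numbers in the Python sketch below are invented.

        def gamma_poisson_update(prior_rate, prior_weight_yr, observed_events, catalog_years):
            """Conjugate Gamma-Poisson update of an occurrence rate (events per year).
            prior_rate and prior_weight_yr encode a geologically based prior (mean rate and the
            number of 'equivalent years' it is worth); returns the posterior mean rate.
            This is a generic Bayesian sketch, not the extreme-value method of the study."""
            shape = prior_rate * prior_weight_yr + observed_events
            rate = prior_weight_yr + catalog_years
            return shape / rate

        # Invented example: prior recurrence of ~500 yr (0.002 events/yr) treated as worth
        # 1000 yr of data, updated with 2 events observed in a 700-yr historical catalog.
        posterior_rate = gamma_poisson_update(0.002, 1000.0, 2, 700.0)
        print(round(1.0 / posterior_rate))  # posterior mean recurrence interval, in years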

  10. Operational earthquake forecasting in California: A prototype system combining UCERF3 and CyberShake

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Jordan, T. H.; Field, E. H.

    2014-12-01

    Operational earthquake forecasting (OEF) is the dissemination of authoritative information about time-dependent earthquake probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To attain this goal, OEF must provide a complete description of the seismic hazard—ground motion exceedance probabilities as well as short-term rupture probabilities—in concert with the long-term forecasts of probabilistic seismic hazard analysis. We have combined the Third Uniform California Earthquake Rupture Forecast (UCERF3) of the Working Group on California Earthquake Probabilities (Field et al., 2014) with the CyberShake ground-motion model of the Southern California Earthquake Center (Graves et al., 2011; Callaghan et al., this meeting) into a prototype OEF system for generating time-dependent hazard maps. UCERF3 represents future earthquake activity in terms of fault-rupture probabilities, incorporating both Reid-type renewal models and Omori-type clustering models. The current CyberShake model comprises approximately 415,000 earthquake rupture variations to represent the conditional probability of future shaking at 285 geographic sites in the Los Angeles region (~236 million horizontal-component seismograms). This combination provides significant probability gains relative to OEF models based on empirical ground-motion prediction equations (GMPEs), primarily because the physics-based CyberShake simulations account for the rupture directivity, basin effects, and directivity-basin coupling that are not represented by the GMPEs.

  11. Selection of minimum earthquake intensity in calculating pipe failure probabilities

    SciTech Connect

    Lo, T.Y.

    1985-01-01

    In a piping reliability analysis, it is sometimes necessary to specify a minimum ground motion intensity, usually the peak acceleration, below which the ground motions are not considered earthquakes and, hence, are neglected. The calculated probability of failure of a piping system depends on the minimum earthquake intensity chosen for the analysis. A study was conducted to determine the effects of the minimum earthquake intensity on the probability of pipe failure. The results indicated that the probability of failure of the piping system is not very sensitive to variations in the selected minimum peak ground acceleration. However, the choice does have significant effects on the individual scenarios that contribute to the overall system failure.

  12. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
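
    For reference, the Gutenberg-Richter clustering value quoted above (0.0009) can be reproduced with the generic California aftershock/foreshock rate model of Reasenberg and Jones (1989); the parameter values below are the commonly quoted generic ones and should be treated as assumptions.

        import math

        def clustering_probability(m_trigger, m_target, days, a=-1.67, b=0.91, p=1.08, c=0.05):
            """Probability that an event of magnitude m_trigger is followed within `days` by an
            event of magnitude >= m_target, using a Reasenberg-Jones style rate
            lambda(t, M) = 10**(a + b*(m_trigger - M)) * (t + c)**(-p) with generic California
            parameters (treated here as assumptions)."""
            amplitude = 10.0 ** (a + b * (m_trigger - m_target))
            time_integral = (c ** (1.0 - p) - (days + c) ** (1.0 - p)) / (p - 1.0)
            return 1.0 - math.exp(-amplitude * time_integral)

        # An ML 4.8 trigger and an M >= 7 target over a 3-day window, as in the abstract.
        print(round(clustering_probability(4.8, 7.0, 3.0), 4))  # ~0.0009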

  13. Detection of hydrothermal precursors to large northern california earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-01

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes. PMID:17738277

  14. Are all major California cities seriously threatened by earthquakes?

    SciTech Connect

    Suen, C.J.

    1995-09-01

    This report discusses the seismic hazards associated with living in various urban areas of California, particularly the Fresno area. According to this assessment and other studies, the Fresno metropolitan area is relatively safe from the threat of a large destructive earthquake, due to its location away from major earthquake-prone fault zones. Unlike other major metropolitan areas in California such as San Francisco and Los Angeles, the Fresno area has no known active faults that are capable of causing destructive tremors. Several maps are included which indicate the location of earthquake epicenters and magnitudes in California from 1769 to the present.

  15. Earthquakes and faults in southern California (1970-2010)

    USGS Publications Warehouse

    Sleeter, Benjamin M.; Calzia, James P.; Walter, Stephen R.

    2012-01-01

    The map depicts both active and inactive faults and earthquakes magnitude 1.5 to 7.3 in southern California (1970–2010). The bathymetry was generated from digital files from the California Department of Fish And Game, Marine Region, Coastal Bathymetry Project. Elevation data are from the U.S. Geological Survey National Elevation Database. Landsat satellite image is from fourteen Landsat 5 Thematic Mapper scenes collected between 2009 and 2010. Fault data are reproduced with permission from 2006 California Geological Survey and U.S. Geological Survey data. The earthquake data are from the U.S. Geological Survey National Earthquake Information Center.

  16. A post-Tohoku earthquake review of earthquake probabilities in the Southern Kanto District, Japan

    NASA Astrophysics Data System (ADS)

    Somerville, Paul G.

    2014-12-01

    The 2011 Mw 9.0 Tohoku earthquake generated an aftershock sequence that affected a large part of northern Honshu, and has given rise to widely divergent forecasts of changes in earthquake occurrence probabilities in northern Honshu. The objective of this review is to assess these forecasts as they relate to potential changes in the occurrence probabilities of damaging earthquakes in the Kanto Region. It is generally agreed that the 2011 Mw 9.0 Tohoku earthquake increased the stress on faults in the southern Kanto district. Toda and Stein (Geophys Res Lett 686, 40: doi:10.1002, 2013) further conclude that the probability of earthquakes in the Kanto Corridor has increased by a factor of 2.5 for the time period 11 March 2013 to 10 March 2018. Estimates of earthquake probabilities in a wider region of the Southern Kanto District by Nanjo et al. (Geophys J Int, doi:10.1093, 2013) indicate that any increase in the probability of earthquakes is insignificant in this larger region. Uchida et al. (Earth Planet Sci Lett 374: 81-91, 2013) conclude that the Philippine Sea plate extends well north of the northern margin of Tokyo Bay, inconsistent with the Kanto Fragment hypothesis of Toda et al. (Nat Geosci, 1:1-6, 2008), which attributes deep earthquakes in this region, which they term the Kanto Corridor, to a broken fragment of the Pacific plate. The results of Uchida and Matsuzawa (J Geophys Res 115:B07309, 2013) support the conclusion that fault creep in southern Kanto may be slowly relaxing the stress increase caused by the Tohoku earthquake without causing more large earthquakes. Stress transfer calculations indicate a large stress transfer to the Off Boso Segment as a result of the 2011 Tohoku earthquake. However, Ozawa et al. (J Geophys Res 117:B07404, 2012) used onshore GPS measurements to infer large post-Tohoku creep on the plate interface in the Off-Boso region, and Uchida and Matsuzawa (ibid.) measured similar large creep off the Boso Peninsula. Thus some of the large stress transfer may be undergoing aseismic release, consistent with pre-Tohoku geodetic data, so a large earthquake on the Off Boso segment may have a low probability.

  17. Earthquake preparedness levels amongst youth and adults in Oakland, California

    NASA Astrophysics Data System (ADS)

    Burris, M.; Arroyo-Ruiz, D.; Crockett, C.; Dixon, G.; Jones, M.; Lei, P.; Phillips, B.; Romero, D.; Scott, M.; Spears, D.; Tate, L.; Whitlock, J.; Diaz, J.; Chagolla, R.

    2011-12-01

    The San Francisco Bay Area has not experienced a large earthquake since 1989. However, paleoseismic research shows that the Hayward fault is overdue for a large earthquake. To analyze the level of earthquake preparedness in the Oakland area (close to the Hayward fault), we surveyed over 150 people to assess their understanding of earthquakes. Our research evaluates whether increased earthquake knowledge impacts people's preparedness and concern toward earthquake events. Data were collected using smart-phone technology and survey software at four sites across Oakland: North Oakland, Downtown, East Oakland, and a summer school program in East Oakland that draws youth from throughout the city. Preliminary studies show that over 60% of interviewees have sufficient earthquake knowledge, but that over half of all interviewees are not prepared for a seismic event. Our study shows that in Oakland, California, earthquake preparedness levels vary, which could mean we need to develop more ways to disseminate information on earthquake preparedness.

  18. The magnitude distribution of declustered earthquakes in Southern California.

    PubMed

    Knopoff, L

    2000-10-24

    The binned distribution densities of magnitudes in both the complete and the declustered catalogs of earthquakes in the Southern California region have two significantly different branches with crossover magnitude near M = 4.8. In the case of declustered earthquakes, the b-values on the two branches differ significantly from each other by a factor of about two. The absence of self-similarity across a broad range of magnitudes in the distribution of declustered earthquakes is an argument against the application of an assumption of scale-independence to models of main-shock earthquake occurrence, and in turn to the use of such models to justify the assertion that earthquakes are unpredictable. The presumption of scale-independence for complete local earthquake catalogs is attributable, not to a universal process of self-organization leading to future large earthquakes, but to the universality of the process that produces aftershocks, which dominate complete catalogs. PMID:11035770
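
    To make the b-value comparison concrete, the sketch below applies the Aki (1965) maximum-likelihood estimator above two magnitude cutoffs, the kind of check used to look for a change in slope near a crossover magnitude (M = 4.8 in the abstract). The catalog is synthetic with a single b = 1.0, so the two estimates should agree within sampling error, whereas a genuine two-branch distribution would show a significant difference; the completeness magnitude and cutoffs are assumptions for demonstration only, and nothing here reproduces the paper's catalogs.

```python
# Hedged sketch: maximum-likelihood b-values above two magnitude cutoffs for a
# synthetic Gutenberg-Richter catalog (single b = 1.0). Real two-branch
# behavior would make the two estimates differ significantly.
import math
import random

def b_value_mle(mags, m_min):
    """Aki (1965) maximum-likelihood b-value for continuous magnitudes >= m_min."""
    selected = [m for m in mags if m >= m_min]
    mean_m = sum(selected) / len(selected)
    return math.log10(math.e) / (mean_m - m_min)

if __name__ == "__main__":
    random.seed(0)
    beta = 1.0 * math.log(10.0)          # b = 1.0 in natural-log form
    mags = [2.0 + random.expovariate(beta) for _ in range(50000)]
    for cutoff in (2.0, 4.8):
        print(f"b-value above M{cutoff}: {b_value_mle(mags, cutoff):.2f}")
```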

  19. Loma Prieta earthquake, October 17, 1989, Santa Cruz County, California

    SciTech Connect

    McNutt, S.

    1990-01-01

    On Tuesday, October 17, 1989 at 5:04 p.m. Pacific Daylight Time, a magnitude 7.1 earthquake occurred on the San Andreas fault 10 miles northeast of Santa Cruz. This earthquake was the largest earthquake to occur in the San Francisco Bay area since 1906, and the largest anywhere in California since 1952. The earthquake was responsible for 67 deaths and about 7 billion dollars worth of damage, making it the biggest dollar loss natural disaster in United States history. This article describes the seismological features of the earthquake, and briefly outlines a number of other geologic observations made during study of the earthquake, its aftershocks, and its effects. Much of the information in this article was provided by the U.S. Geological Survey (USGS).

  20. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 of the b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc values for different β-values. For magnitudes larger than 8.5, the turbidite data are consistent with all three TGR models. For smaller magnitudes, the TGR models predict a higher rate than the paleoseismic data show. The discrepancy can be attributed to uncertainties in the paleoseismic magnitudes, the potential incompleteness of the paleoseismic record for smaller events, or temporal variations of the seismicity. Nevertheless, our results show that for this zone, earthquakes of m 8.8 ± 0.2 are expected over a 500-year period, m 9.0 ± 0.2 over a 1000-year period, and m 9.3 ± 0.2 over a 10,000-year period.
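
    As a worked illustration of the probable-maximum-magnitude idea, the sketch below computes m_p(T) under a tapered Gutenberg-Richter distribution, here defined (as an assumption, since the abstract does not spell out the paper's exact definition) as the magnitude whose Poisson probability of being exceeded at least once in T years is 50%. The rate above the threshold magnitude, the beta-value, and the corner magnitude are placeholder values, not the Cascadia estimates of the paper.

```python
# Hedged sketch of m_p(T) under a tapered Gutenberg-Richter (TGR) distribution.
# All parameter values are placeholders; the 50%-exceedance definition of m_p
# is an assumption made for this illustration.
import math

def moment(m):
    """Seismic moment in N*m for moment magnitude m."""
    return 10.0 ** (1.5 * m + 9.05)

def annual_rate_above(m, rate_mt, m_t, beta, m_corner):
    """TGR annual rate of events with magnitude >= m, given the rate above m_t."""
    M, Mt, Mc = moment(m), moment(m_t), moment(m_corner)
    return rate_mt * (Mt / M) ** beta * math.exp((Mt - M) / Mc)

def probable_max_magnitude(T_years, rate_mt, m_t, beta, m_corner):
    """Bisect for m_p such that P(at least one M >= m_p in T_years) = 0.5."""
    target = math.log(2.0) / T_years          # annual rate giving 50% in T years
    lo, hi = m_t, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if annual_rate_above(mid, rate_mt, m_t, beta, m_corner) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Placeholder inputs: 0.02 events/yr above M 7.5, beta = 0.65, corner M 9.0.
    for T in (500, 1000, 10000):
        mp = probable_max_magnitude(T, rate_mt=0.02, m_t=7.5, beta=0.65, m_corner=9.0)
        print(f"m_p({T} yr) = {mp:.1f}")
```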

  1. The Effects of Static Coulomb Stress Change on Southern California Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Strader, Anne Elizabeth

    I investigate how inclusion of static Coulomb stress changes, caused by tectonic loading and previous seismicity, contributes to the effectiveness and reliability of prospective earthquake forecasts. Several studies have shown that positive static Coulomb stress changes are associated with increased seismicity, relative to stress shadows. However, it is difficult to avoid bias when the learning and testing intervals are chosen retrospectively. I hypothesize that earthquake forecasts based on static Coulomb stress fields may improve upon existing earthquake forecasts based on historical seismicity. Within southern California, I have confirmed the aforementioned relationship between earthquake location and Coulomb stress change, but found no identifiable triggering threshold based on static Coulomb stress history at individual earthquake locations. I have also converted static Coulomb stress changes into spatially varying earthquake rates by optimizing an index function and calculating probabilities of cells containing at least one earthquake based on Coulomb stress ranges. Inclusion of Coulomb stress effects gives an improvement in earthquake forecasts that is significant with 95% confidence, compared to smoothed seismicity null forecasts. Because of large uncertainties in Coulomb stress calculations near faults (and aftershock distributions), I combine static Coulomb stress and smoothed seismicity into a hybrid earthquake forecast. Evaluating such forecasts against those in which only Coulomb stress or smoothed seismicity determines earthquake rates indicates that Coulomb stress is more effective in the far field, whereas statistical seismology outperforms Coulomb stress near faults. Additionally, I test the effects of receiver plane orientation, stress type (normal and shear components), and declustering of receiver earthquakes. While static Coulomb stress shows significant potential in a prospective earthquake forecast, simplifying assumptions compromise its effectiveness. For example, we assume that crustal material within the study region is isotropic, homogeneous, and purely elastic, and that pore fluid pressure variations do not significantly affect the static Coulomb stress field. Such assumptions require further research in order to detect direct earthquake triggering mechanisms.
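
    For readers unfamiliar with the quantity being forecast on, here is a minimal sketch of resolving a static stress-change tensor onto a receiver plane and forming Delta_CFS = Delta_tau + mu_eff * Delta_sigma_n (normal stress positive for unclamping). The stress tensor, plane orientation, slip direction, and effective friction coefficient are placeholder values, and this is not the dissertation's code, which would also handle strike/dip/rake conventions, many receivers, and uncertainty treatment.

```python
# Hedged sketch: static Coulomb failure stress change on a receiver plane.
# Inputs are placeholders; a tension-positive stress convention is assumed.
import numpy as np

def coulomb_stress_change(d_stress, normal, slip_dir, mu_eff=0.4):
    """d_stress: 3x3 stress-change tensor (Pa), tension positive.
    normal, slip_dir: receiver-fault normal and slip-direction vectors."""
    normal = np.asarray(normal, float) / np.linalg.norm(normal)
    slip_dir = np.asarray(slip_dir, float) / np.linalg.norm(slip_dir)
    traction = d_stress @ normal              # traction change on the plane
    d_shear = float(slip_dir @ traction)      # shear change along slip direction
    d_normal = float(normal @ traction)       # normal change (unclamping > 0)
    return d_shear + mu_eff * d_normal

if __name__ == "__main__":
    # Placeholder: 0.1 MPa uniaxial stress change along x, plane at 45 degrees.
    sigma = np.diag([1.0e5, 0.0, 0.0])
    n = [1.0, 1.0, 0.0]                       # plane normal
    s = [1.0, -1.0, 0.0]                      # slip direction (lies in the plane)
    print(f"dCFS = {coulomb_stress_change(sigma, n, s) / 1e6:.4f} MPa")
```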

  2. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways. PMID:22576139
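
    To show the kind of bookkeeping such a fault-tree framework supports, the sketch below estimates the annual probability that a single earthquake knocks out both a primary and a backup facility, assuming their failures are conditionally independent given the scenario shaking. The scenario rates and fragility probabilities are placeholder numbers, not values from the article, and a real fault tree would decompose each facility's failure into component-level events.

```python
# Hedged sketch: annual probability that one earthquake fails both a primary
# and a backup facility, assuming conditional independence given the scenario.
# Scenario rates and fragilities are placeholder numbers.
import math

def prob_joint_failure(scenarios):
    """scenarios: list of (annual_rate, p_fail_primary, p_fail_backup)."""
    joint_rate = 0.0
    for rate, p_primary, p_backup in scenarios:
        joint_rate += rate * p_primary * p_backup   # both fail in this scenario
    # Poisson conversion from an annual rate to an annual probability.
    return 1.0 - math.exp(-joint_rate)

if __name__ == "__main__":
    scenarios = [
        (0.01, 0.30, 0.20),   # rare scenario with strong shaking at both sites
        (0.05, 0.05, 0.03),   # more frequent, moderate shaking
    ]
    print(f"annual P(both facilities fail) = {prob_joint_failure(scenarios):.2e}")
```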

  3. California Earthquakes: Science, Risks, and the Politics of Hazard Mitigation

    NASA Astrophysics Data System (ADS)

    Shedlock, Kaye M.

    "Politics" should be the lead word in the sub-title of this engrossing study of the emergence and growth of the California and federal earthquake hazard reduction infrastructures. Beginning primarily with the 1906 San Francisco earthquake, scientists, engineers, and other professionals cooperated and clashed with state and federal officials, the business community, " boosters," and the general public to create programs, agencies, and commissions to support earthquake research and hazards mitigation. Moreover, they created a "regulatory-state" apparatus that governs human behavior without sustained public support for its creation. The public readily accepts that earthquake research and mitigation are government responsibilities. The government employs or funds the scientists, engineers, emergency response personnel, safety officials, building inspectors, and others who are instrumental in reducing earthquake hazards. This book clearly illustrates how, and why all of this came to pass.

  4. Keeping the History in Historical Seismology: The 1872 Owens Valley, California Earthquake

    NASA Astrophysics Data System (ADS)

    Hough, Susan E.

    2008-07-01

    The importance of historical earthquakes is being increasingly recognized. Careful investigations of key pre-instrumental earthquakes can provide critical information and insights for not only seismic hazard assessment but also for earthquake science. In recent years, with the explosive growth in computational sophistication in Earth sciences, researchers have developed increasingly sophisticated methods to analyze macroseismic data quantitatively. These methodological developments can be extremely useful to exploit fully the temporally and spatially rich information source that seismic intensities often represent. For example, the exhaustive and painstaking investigations done by Ambraseys and his colleagues of early Himalayan earthquakes provide information that can be used to map out site response in the Ganges basin. In any investigation of macroseismic data, however, one must stay mindful that intensity values are not data but rather interpretations. The results of any subsequent analysis, regardless of the degree of sophistication of the methodology, will be only as reliable as the interpretations of available accounts, and only as complete as the research done to ferret out, and in many cases translate, these accounts. When intensities are assigned without an appreciation of historical setting and context, seemingly careful subsequent analysis can yield grossly inaccurate results. As a case study, I report here on the results of a recent investigation of the 1872 Owens Valley, California, earthquake. Careful consideration of macroseismic observations reveals that this event was probably larger than the great San Francisco earthquake of 1906, and possibly the largest historical earthquake in California. The results suggest that some large earthquakes in California will generate significantly larger ground motions than San Andreas fault events of comparable magnitude.

  5. Keeping the History in Historical Seismology: The 1872 Owens Valley, California Earthquake

    SciTech Connect

    Hough, Susan E.

    2008-07-08

    The importance of historical earthquakes is being increasingly recognized. Careful investigations of key pre-instrumental earthquakes can provide critical information and insights for not only seismic hazard assessment but also for earthquake science. In recent years, with the explosive growth in computational sophistication in Earth sciences, researchers have developed increasingly sophisticated methods to analyze macroseismic data quantitatively. These methodological developments can be extremely useful to exploit fully the temporally and spatially rich information source that seismic intensities often represent. For example, the exhaustive and painstaking investigations done by Ambraseys and his colleagues of early Himalayan earthquakes provide information that can be used to map out site response in the Ganges basin. In any investigation of macroseismic data, however, one must stay mindful that intensity values are not data but rather interpretations. The results of any subsequent analysis, regardless of the degree of sophistication of the methodology, will be only as reliable as the interpretations of available accounts, and only as complete as the research done to ferret out, and in many cases translate, these accounts. When intensities are assigned without an appreciation of historical setting and context, seemingly careful subsequent analysis can yield grossly inaccurate results. As a case study, I report here on the results of a recent investigation of the 1872 Owens Valley, California, earthquake. Careful consideration of macroseismic observations reveals that this event was probably larger than the great San Francisco earthquake of 1906, and possibly the largest historical earthquake in California. The results suggest that some large earthquakes in California will generate significantly larger ground motions than San Andreas fault events of comparable magnitude.

  6. Historic Ground Failures in Northern California Triggered by Earthquakes

    USGS Publications Warehouse

    Youd, T. Leslie; Hoose, Seena N.

    1978-01-01

    A major source of earthquake-related damage and casualties in northern California has been ground failures generated by the seismic shaking, including landslides, lateral spreads, ground settlement, and surface cracks. The historical record shows that, except for offshore shocks, the geographic area affected and the quantity and general severity of ground failures increase markedly with Richter magnitude. Hence, the largest historical event, the 1906 San Francisco earthquake, has been the most important generator of ground failures. Because of recent population growth and land development in northern California, the potential for damage in future events is enormous compared with that existing in 1906. Reports of the 1906 San Francisco earthquake and other northern California earthquakes and descriptions of ground failures therein are used to (1) identify and clarify the types of ground failures associated with earthquakes, (2) provide a guide for engineers, planners, and others responsible for minimizing seismic hazards, and (3) form a database for other geotechnical studies of earthquake-triggered ground failures. Geologic, hydrologic, and topographic setting, as well as distance from the causative fault, have an important influence on ground failure development. Areas especially vulnerable to ground failure in northern California have been oversteepened slopes, such as mountain cliffs, streambanks, and coastal bluffs, and lowland deposits, principally Holocene fluvial deposits, deltaic deposits, and poorly compacted fills. Liquefaction has been the direct cause of most lowland failures. The historical record suggests that ground failures during future large earthquakes are most likely to occur at the same or geologically similar locations as failures during previous earthquakes.

  7. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  8. Depth dependence of earthquake frequency-magnitude distributions in California: Implications for rupture initiation

    USGS Publications Warehouse

    Mori, J.; Abercrombie, R.E.

    1997-01-01

    Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more small earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
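
    The "18 times more likely" figure follows directly from Gutenberg-Richter scaling: the chance that an M 2 initiation grows to M >= 5.5 is proportional to 10**(-b*(5.5 - 2)), so the ratio between depth ranges depends only on their b-values. The sketch below reproduces that arithmetic with assumed example b-values (not the values estimated in the paper); for b = 1.2 shallow versus b = 0.85 deep it gives a ratio of about 17, in the same ballpark as the factor of 18 quoted above.

```python
# Hedged sketch of the probability-ratio arithmetic implied by depth-dependent
# b-values. The example b-values are assumptions, not the paper's estimates.

def growth_probability_ratio(b_shallow, b_deep, m_small=2.0, m_large=5.5):
    """Factor by which a deep m_small initiation is more likely than a shallow
    one to grow into an event of magnitude >= m_large, under GR scaling."""
    dm = m_large - m_small
    return 10.0 ** ((b_shallow - b_deep) * dm)

if __name__ == "__main__":
    print(f"ratio = {growth_probability_ratio(b_shallow=1.2, b_deep=0.85):.1f}")
```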

  9. Major improvements in progress for Southern California Earthquake Monitoring

    NASA Astrophysics Data System (ADS)

    Mori, Jim; Kanamori, Hiroo; Davis, James; Hauksson, Egill; Clayton, Robert; Heaton, Thomas; Jones, Lucile; Shakal, Anthony; Porcella, Ron

    Major improvements in seismic and strong-motion monitoring networks are being implemented in southern California to better meet the needs of emergency response personnel, structural engineers, and the research community in promoting earthquake hazard reduction. Known as the TriNet project, the improvements are being coordinated by the California Institute of Technology (Caltech), the U.S. Geological Survey (USGS), and the California Division of Mines and Geology (CDMG) of the state's Department of Conservation. Already the ambitious instrument and system development project has started to record and disseminate ground motions from a spatially dense and robust network of high quality seismographs.

  10. Dynamics of Liquefaction during the 1987 Superstition Hills, California, Earthquake

    NASA Astrophysics Data System (ADS)

    Holzer, T. L.; Youd, T. L.; Hanks, T. C.

    1989-04-01

    Simultaneous measurements of seismically induced pore-water pressure changes and surface and subsurface accelerations at a site undergoing liquefaction caused by the Superstition Hills, California, earthquake (24 November 1987; M = 6.6) reveal that total pore pressures approached lithostatic conditions, but, unexpectedly, after most of the strong motion ceased. Excess pore pressures were generated once horizontal acceleration exceeded a threshold value.

  11. Dynamics of liquefaction during the 1987 Superstition Hills, California, earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Youd, T.L.; Hanks, T.C.

    1989-01-01

    Simultaneous measurements of seismically induced pore-water pressure changes and surface and subsurface accelerations at a site undergoing liquefaction caused by the Superstition Hills, California, earthquake (24 November 1987; M = 6.6) reveal that total pore pressures approached lithostatic conditions, but, unexpectedly, after most of the strong motion ceased. Excess pore pressures were generated once horizontal acceleration exceeded a threshold value.

  12. Search for seismic forerunners to earthquakes in central California

    USGS Publications Warehouse

    Wesson, R.L.; Robinson, R.; Bufe, C.G.; Ellsworth, W.L.; Pfluke, J.H.; Steppe, J.A.; Seekins, L.C.

    1977-01-01

    The relatively high seismicity of the San Andreas fault zone in central California provides an excellent opportunity to search for seismic forerunners to moderate earthquakes. Analysis of seismic traveltime and earthquake location data has resulted in the identification of two possible seismic forerunners. The first is a period of apparently late (0.3 sec) P-wave arrival times lasting several weeks preceding one earthquake of magnitude 5.0. The rays for these travel paths passed through - or very close to - the aftershock volume of the subsequent earthquake. The sources for these P-arrival time data were earthquakes in the distance range 20-70 km. Uncertainties in the influence of small changes in the hypocenters of the source earthquakes and in the identification of small P-arrivals raise the possibility that the apparently delayed arrivals are not the result of a decrease in P-velocity. The second possible precursor is an apparent increase in the average depth of earthquakes preceding two moderate earthquakes. This change might be only apparent, caused by a location bias introduced by a decrease in P-wave velocity, but numerical modeling for realistic possible changes in velocity suggests that the observed effect is more likely a true migration of earthquakes. To carry out this work, involving the manipulation of several thousand earthquake hypocenters and several hundred thousand readings of arrival time, a system of data storage and manipulation programs for a large digital computer was designed and executed. This system allows, for example, the automatic selection of earthquakes from a specific region, the extraction of all the observed arrival times for these events, and their relocation under a chosen set of assumptions. © 1977.

  13. Very-long-period volcanic earthquakes beneath Mammoth Mountain, California

    USGS Publications Warehouse

    Hill, D.P.; Dawson, P.; Johnston, M.J.S.; Pitt, A.M.; Biasi, G.; Smith, K.

    2002-01-01

    Detection of three very-long-period (VLP) volcanic earthquakes beneath Mammoth Mountain emphasizes that magmatic processes continue to be active beneath this young, eastern California volcano. These VLP earthquakes, which occurred in October 1996 and July and August 2000, appear as bell-shaped pulses with durations of one to two minutes on a nearby borehole dilatometer and on the displacement seismogram from a nearby broadband seismometer. They are accompanied by rapid-fire sequences of high-frequency (HF) earthquakes and several long-period (LP) volcanic earthquakes. The limited VLP data are consistent with a CLVD source at a depth of ~3 km beneath the summit, which we interpret as resulting from a slug of fluid (CO2-saturated magmatic brine or perhaps basaltic magma) moving into a crack.

  14. Chapter F. The Loma Prieta, California, Earthquake of October 17, 1989 - Tectonic Processes and Models

    USGS Publications Warehouse

    Simpson, Robert W.

    1994-01-01

    If there is a single theme that unifies the diverse papers in this chapter, it is the attempt to understand the role of the Loma Prieta earthquake in the context of the earthquake 'machine' in northern California: as the latest event in a long history of shocks in the San Francisco Bay region, as an incremental contributor to the regional deformation pattern, and as a possible harbinger of future large earthquakes. One of the surprises generated by the earthquake was the rather large amount of uplift that occurred as a result of the reverse component of slip on the southwest-dipping fault plane. Preearthquake conventional wisdom had been that large earthquakes in the region would probably be caused by horizontal, right-lateral, strike-slip motion on vertical fault planes. In retrospect, the high topography of the Santa Cruz Mountains and the elevated marine terraces along the coast should have provided some clues. With the observed ocean retreat and the obvious uplift of the coast near Santa Cruz that accompanied the earthquake, Mother Nature was finally caught in the act. Several investigators quickly saw the connection between the earthquake uplift and the long-term evolution of the Santa Cruz Mountains and realized that important insights were to be gained by attempting to quantify the process of crustal deformation in terms of Loma Prieta-type increments of northward transport and fault-normal shortening.

  15. Short-term earthquake probabilities during the L'Aquila earthquake sequence in central Italy, 2009

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Murru, M.; Zhuang, J.; Console, R.

    2014-12-01

    We compare the performance of several statistical models used to describe the occurrence process of earthquakes in forecasting short-term earthquake probabilities during the L'Aquila earthquake sequence in central Italy, 2009. These models include the Proximity to Past Earthquakes (PPE) model and different versions of the Epidemic Type Aftershock Sequence (ETAS) model. We used the information gains corresponding to the Poisson and binomial scores to evaluate the performance of these models. It is shown that all ETAS models work better than the PPE model. However, when comparing the different types of ETAS models, the one with the same fixed exponent coefficient α = 2.3 for both the productivity function and the scaling factor in the spatial response function performs better in forecasting the active aftershock sequence than the other models with different exponent coefficients when the Poisson score is adopted. The latter models perform better only when a lower magnitude threshold of 2.0 and the binomial score are used. The likely reason is that the catalog does not contain an event of magnitude similar to the L'Aquila main shock (Mw 6.3) in the training period (April 16, 2005 to March 15, 2009). In this case the a-value is underestimated, and thus the forecast seismicity is also underestimated when the productivity function is extrapolated to high magnitudes. These results suggest that the training catalog used for estimating the model parameters should include earthquakes of magnitude similar to the main shock when forecasting seismicity during an aftershock sequence.
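
    For readers who have not seen it written out, a time-only version of the ETAS conditional intensity underlying these models is sketched below; the full models also include a spatial response kernel (where the fixed exponent alpha = 2.3 discussed above appears a second time), which this toy version omits. All parameter values and the toy catalog are placeholders, not the fitted L'Aquila parameters.

```python
# Hedged, time-only sketch of the ETAS conditional intensity
#   lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) * (t - t_i + c)**(-p)
# Parameter values and the toy catalog are placeholders for illustration.
import math

def etas_rate(t, catalog, mu=0.1, K=0.005, alpha=2.3, c=0.01, p=1.1, m0=2.0):
    """ETAS rate (events/day) at time t (days); catalog = [(t_i, m_i), ...]."""
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

if __name__ == "__main__":
    catalog = [(0.0, 6.3), (0.2, 4.1), (0.5, 3.8)]   # toy mainshock plus aftershocks
    for t in (0.1, 1.0, 3.0, 10.0):
        print(f"t = {t:5.1f} d   rate = {etas_rate(t, catalog):10.2f} events/day")
```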

  16. Crustal deformation in great California earthquake cycles

    NASA Technical Reports Server (NTRS)

    Li, Victor C.; Rice, James R.

    1986-01-01

    Periodic crustal deformation associated with repeated strike-slip earthquakes is computed for the following model: A depth L (less than or comparable to H) extending downward from the Earth's surface at a transform boundary between uniform elastic lithospheric plates of thickness H is locked between earthquakes. It slips an amount consistent with the remote plate velocity V_pl after each lapse of earthquake cycle time T_cy. Lower portions of the fault zone at the boundary slip continuously so as to maintain constant resistive shear stress. The plates are coupled at their base to a Maxwellian viscoelastic asthenosphere through which steady deep-seated mantle motions, compatible with plate velocity, are transmitted to the surface plates. The coupling is described approximately through a generalized Elsasser model. It is argued that the model gives a more realistic physical description of tectonic loading, including the time dependence of deep slip and crustal stress buildup throughout the earthquake cycle, than do simpler kinematic models in which loading is represented as imposed uniform dislocation slip on the fault below the locked zone.
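
    For comparison, the simpler kinematic baseline that this abstract contrasts with can be written in a few lines: the classic elastic screw-dislocation profile (Savage and Burford, 1973) for interseismic fault-parallel surface velocity across a strike-slip fault locked to depth D and slipping at the plate rate V_pl below, v(x) = (V_pl/pi)*arctan(x/D). This is explicitly not the generalized Elsasser model of the paper, and the plate rate and locking depth below are example values only.

```python
# Hedged sketch of the simple kinematic (screw-dislocation) baseline model,
# not the generalized Elsasser model of the paper. Values are examples.
import math

def interseismic_velocity(x_km, v_pl_mm_yr=35.0, locking_depth_km=15.0):
    """Fault-parallel surface velocity (mm/yr) at distance x_km from the fault."""
    return (v_pl_mm_yr / math.pi) * math.atan(x_km / locking_depth_km)

if __name__ == "__main__":
    for x in (-100, -30, -10, 0, 10, 30, 100):
        print(f"x = {x:5d} km   v = {interseismic_velocity(x):6.1f} mm/yr")
```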

  17. Long Period Earthquakes Beneath California's Young and Restless Volcanoes

    NASA Astrophysics Data System (ADS)

    Pitt, A. M.; Dawson, P. B.; Shelly, D. R.; Hill, D. P.; Mangan, M.

    2013-12-01

    The newly established USGS California Volcano Observatory has the broad responsibility of monitoring and assessing hazards at California's potentially threatening volcanoes, most notably Mount Shasta, Medicine Lake, Clear Lake Volcanic Field, and Lassen Volcanic Center in northern California; and Long Valley Caldera, Mammoth Mountain, and Mono-Inyo Craters in east-central California. Volcanic eruptions occur in California about as frequently as the largest San Andreas Fault Zone earthquakes: more than ten eruptions have occurred in the last 1,000 years, most recently at Lassen Peak (1666 C.E. and 1914-1917 C.E.) and Mono-Inyo Craters (c. 1700 C.E.). The Long Valley region (Long Valley caldera and Mammoth Mountain) underwent several episodes of heightened unrest over the last three decades, including intense swarms of volcano-tectonic (VT) earthquakes, rapid caldera uplift, and hazardous CO2 emissions. Both Medicine Lake and Lassen are subsiding at appreciable rates, and along with Clear Lake, Long Valley Caldera, and Mammoth Mountain, sporadically experience long-period (LP) earthquakes related to migration of magmatic or hydrothermal fluids. Worldwide, the last two decades have shown the importance of tracking LP earthquakes beneath young volcanic systems, as they often provide an indication of impending unrest or eruption. Herein we document the occurrence of LP earthquakes at several of California's young volcanoes, updating a previous study published in Pitt et al., 2002, SRL. All events were detected and located using data from stations within the Northern California Seismic Network (NCSN). Event detection was spatially and temporally uneven across the NCSN in the 1980s and 1990s, but additional stations, adoption of the Earthworm processing system, and heightened vigilance by seismologists have improved the catalog over the last decade. LP earthquakes are now relatively well-recorded under Lassen (~150 events since 2000), Clear Lake (~60 events), Mammoth Mountain (~320 events), and Long Valley Caldera (~40 events). LP earthquakes are notably absent under Mount Shasta. With the exception of Long Valley Caldera, where LP earthquakes occur at depths of ≤5 km, hypocenters are generally between 15 and 25 km. The rates of LP occurrence over the last decade have been relatively steady within the study areas, except at Mammoth Mountain, where years of gradually declining LP activity abruptly increased after a swarm of unusually deep (20 km) VT earthquakes in October 2012. Epicenter locations relative to the sites of most recent volcanism vary across volcanic centers, but most LP earthquakes fall within 10 km of young vents. Source models for LP earthquakes often involve the resonance of fluid-filled cracks or nonlinear flow of fluids along irregular cracks (reviewed in Chouet and Matoza, 2013, JVGR). At mid-crustal depths the relevant fluids are likely to be low-viscosity basaltic melt and/or exsolved CO2-rich volatiles (Lassen, Clear Lake, Mammoth Mountain). In the shallow crust, however, hydrothermal waters/gases are likely involved in the generation of LP seismicity (Long Valley Caldera).

  18. Crustal deformation in Great California Earthquake cycles

    NASA Technical Reports Server (NTRS)

    Li, Victor C.; Rice, James R.

    1987-01-01

    A model in which coupling is described approximately through a generalized Elsasser model is proposed for computation of the periodic crustal deformation associated with repeated strike-slip earthquakes. The model is found to provide a more realistic physical description of tectonic loading than do simpler kinematic models. Parameters are chosen to model the 1857 and 1906 San Andreas ruptures, and predictions are found to be consistent with data on variations of contemporary surface strain and displacement rates as a function of distance from the 1857 and 1906 rupture traces. Results indicate that the asthenosphere appropriate to describe crustal deformation on the earthquake cycle time scale lies in the lower crust and perhaps the crust-mantle transition zone.

  19. Southern California Earthquake Center (SCEC) Summer Internship Programs

    NASA Astrophysics Data System (ADS)

    Benthien, M. L.; Perry, S.; Jordan, T. H.

    2004-12-01

    For the eleventh consecutive year, the Southern California Earthquake Center (SCEC) coordinated undergraduate research experiences in summer 2004, allowing 35 students with a broad array of backgrounds and interests to work with the world's preeminent earthquake scientists and specialists. Students participate in interdisciplinary, system-level earthquake science and information technology research, and several group activities throughout the summer. Funding for student stipends and activities is made possible by the NSF Research Experiences for Undergraduates (REU) program. SCEC coordinates two intern programs: The SCEC Summer Undergraduate Research Experience (SCEC/SURE) and the SCEC Undergraduate Summer in Earthquake Information Technology (SCEC/USEIT). SCEC/SURE interns work one-on-one with SCEC scientists at their institutions on a variety of earthquake science research projects. The goals of the program are to expand student participation in the earth sciences and related disciplines, encourage students to consider careers in research and education, and to increase diversity of students and researchers in the earth sciences. 13 students participated in this program in 2004. SCEC/USEIT is an NSF REU site that brings undergraduate students from across the country to the University of Southern California each summer. SCEC/USEIT interns interact in a team-oriented research environment and are mentored by some of the nation's most distinguished geoscience and computer science researchers. The goals of the program are to allow undergraduates to use advanced tools of information technology to solve problems in earthquake research; close the gap between computer science and geoscience; and engage non-geoscience majors in the application of earth science to the practical problems of reducing earthquake risk. SCEC/USEIT summer research goals are structured around a grand challenge problem in earthquake information technology. For the past three years the students have developed a new earthquake and fault visualization platform named "LA3D." 22 students participated in this program in 2004. SCEC Interns come together several times during the summer, beginning with a Communication Workshop that develops the student's oral and written communication skills. In mid-summer, a one-day SCEC Intern Colloquium is held, where student researchers present status reports on their research, followed by a three-day field trip of southern California geology and SCEC research locations. Finally, at the end of the summer each student presents a poster at the SCEC Annual Meeting.

  20. Moho orientation beneath central California from regional earthquake travel times.

    USGS Publications Warehouse

    Oppenheimer, David H.; Eaton, Jerry P.

    1984-01-01

    This paper examines relative Pn arrival times, recorded by the U. S. Geological Survey seismic network in central and northern California from an azimuthally distributed set of regional earthquakes. Improved estimates are presented of upper mantle velocities in the Coast Ranges, Great Valley, and Sierra Nevada foothills and estimates of the orientation of the Moho throughout this region. Finally, the azimuthal distribution of apparent velocities, corrected for dip and individual station travel time effects, is then studied for evidence of upper mantle velocity anisotropy and for indications of lower crustal structure in central California.

  1. Tidal stress triggering of earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Bucholc, Magda; Steacy, Sandy

    2016-02-01

    We analyse the influence of the solid Earth tides and ocean loading on the occurrence time of Southern California earthquakes. For each earthquake, we calculate tidal Coulomb failure stress and stress rate on a fault plane that is assumed to be controlled by the orientation of the adjacent fault. To reduce bias when selecting data for testing the tide-earthquake relationship, we create four earthquake catalogs containing events within 1, 1.5, 2.5, and 5 km of the nearest faults. We investigate the difference in seismicity rates at times of positive and negative tidal stresses/stress rates given three different cases. We consider seismicity rates during times of positive versus negative stress and stress rate, as well as 2 and 3 hours surrounding the local tidal stress extremes. We find that the tidal influence on earthquake occurrence is statistically non-random only in close proximity to tidal extremes, meaning that the magnitude of the tidal stress plays an important role in tidal triggering. A non-random tidal signal is observed for the reverse events. Along with a significant increase in earthquake rates around tidal Coulomb stress maxima, the strength of tidal correlation is found to be closely related to the amplitude of the peak tidal Coulomb stress (τp). The most effective tidal triggering is found for τp ≥ 1 kPa, which is much smaller than thresholds suggested for static and dynamic triggering of aftershocks.
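
    A stripped-down version of the rate comparison described here is sketched below: given the tidal Coulomb stress at each event's origin time, test whether more events occur during positive stress than a 50/50 split would predict, using a normal approximation to the binomial. The event stresses are synthetic placeholders, and the test is a simplification of the paper's analysis, which uses several catalogs, stress-rate cases, and windows around the tidal extremes.

```python
# Hedged sketch: do more events occur during positive tidal Coulomb stress
# than chance predicts? Synthetic stresses; normal approximation to binomial.
import math
import random

def positive_stress_excess(stresses_at_events):
    """Return (fraction of events at positive stress, one-sided p-value)."""
    n = len(stresses_at_events)
    n_pos = sum(1 for s in stresses_at_events if s > 0.0)
    mean, std = 0.5 * n, math.sqrt(0.25 * n)
    z = (n_pos - mean) / std
    p_value = 0.5 * math.erfc(z / math.sqrt(2.0))   # P(an excess this large by chance)
    return n_pos / n, p_value

if __name__ == "__main__":
    random.seed(1)
    # Synthetic tidal stresses (kPa) at 2000 event times, with a small positive bias.
    stresses = [random.gauss(0.05, 1.0) for _ in range(2000)]
    frac_pos, p = positive_stress_excess(stresses)
    print(f"fraction at positive stress = {frac_pos:.3f}, one-sided p = {p:.3f}")
```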

  2. Tidal stress triggering of earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Bucholc, Magda; Steacy, Sandy

    2016-05-01

    We analyse the influence of the solid Earth tides and ocean loading on the occurrence time of Southern California earthquakes. For each earthquake, we calculate tidal Coulomb failure stress and stress rate on a fault plane that is assumed to be controlled by the orientation of the adjacent fault. To reduce bias when selecting data for testing the tide-earthquake relationship, we create four earthquake catalogues containing events within 1, 1.5, 2.5 and 5 km of the nearest faults. We investigate the difference in seismicity rates at times of positive and negative tidal stresses/stress rates given three different cases. We consider seismicity rates during times of positive versus negative stress and stress rate, as well as 2 and 3 hr surrounding the local tidal stress extremes. We find that the tidal influence on earthquake occurrence is statistically non-random only in close proximity to tidal extremes, meaning that the magnitude of the tidal stress plays an important role in tidal triggering. A non-random tidal signal is observed for the reverse events. Along with a significant increase in earthquake rates around tidal Coulomb stress maxima, the strength of tidal correlation is found to be closely related to the amplitude of the peak tidal Coulomb stress (τp). The most effective tidal triggering is found for τp ≥ 1 kPa, which is much smaller than thresholds suggested for static and dynamic triggering of aftershocks.

  3. Dynamic models of an earthquake and tsunami offshore Ventura, California

    USGS Publications Warehouse

    Kenny J. Ryan; Geist, Eric L.; Barall, Michael; David D. Oglesby

    2015-01-01

    The Ventura basin in Southern California includes coastal dip-slip faults that can likely produce earthquakes of magnitude 7 or greater and significant local tsunamis. We construct a 3-D dynamic rupture model of an earthquake on the Pitas Point and Lower Red Mountain faults to model low-frequency ground motion and the resulting tsunami, with a goal of elucidating the seismic and tsunami hazard in this area. Our model results in an average stress drop of 6 MPa, an average fault slip of 7.4 m, and a moment magnitude of 7.7, consistent with regional paleoseismic data. Our corresponding tsunami model uses final seafloor displacement from the rupture model as initial conditions to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. Modeled inundation in the Ventura area is significantly greater than that indicated by the State of California's current reference inundation line.
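
    As a quick consistency check on the numbers quoted here (not a reproduction of the dynamic rupture modeling), the sketch below recovers a moment magnitude from average slip and an assumed rupture area via M0 = mu * A * D and Mw = (2/3)(log10 M0 - 9.05). Only the 7.4 m average slip comes from the abstract; the rigidity and the roughly 100 km by 18 km rupture area are assumptions.

```python
# Hedged consistency check: Mw from an assumed rupture area, average slip, and
# rigidity. Only the 7.4 m slip value is taken from the abstract.
import math

def moment_magnitude(area_km2, avg_slip_m, rigidity_pa=3.0e10):
    """Mw from rupture area (km^2), average slip (m), and rigidity (Pa)."""
    m0 = rigidity_pa * (area_km2 * 1.0e6) * avg_slip_m   # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

if __name__ == "__main__":
    print(f"Mw ~ {moment_magnitude(area_km2=100 * 18, avg_slip_m=7.4):.1f}")
```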

  4. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J., Jr.; Petersen, M.D.

    2004-01-01

    The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
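
    The distributional idea can be illustrated with a short sketch: treat per-ZIP-code losses as gamma-distributed, estimate the shape and scale by the method of moments, and read off an exceedance probability. The "observed" losses below are synthetic, and the regression of the gamma parameters on ground motion, which is the substance of the paper, is not reproduced.

```python
# Hedged sketch: method-of-moments gamma fit to synthetic loss data and an
# exceedance probability. Not the paper's regression on ground motion.
import numpy as np
from scipy.stats import gamma

def fit_gamma_moments(losses):
    """Method-of-moments fit: shape k = mean^2/var, scale theta = var/mean."""
    losses = np.asarray(losses, dtype=float)
    mean, var = losses.mean(), losses.var(ddof=1)
    return mean ** 2 / var, var / mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic_losses = rng.gamma(shape=0.6, scale=5000.0, size=500)   # dollars
    k, theta = fit_gamma_moments(synthetic_losses)
    p_exceed = gamma.sf(10000.0, a=k, scale=theta)
    print(f"shape = {k:.2f}, scale = {theta:.0f}, P(loss > $10,000) = {p_exceed:.3f}")
```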

  5. Dynamics of liquefaction during the 1987 superstition hills, california, earthquake.

    PubMed

    Holzer, T L; Hanks, T C; Youd, T L

    1989-04-01

    Simultaneous measurements of seismically induced pore-water pressure changes and surface and subsurface accelerations at a site undergoing liquefaction caused by the Superstition Hills, California, earthquake (24 November 1987; M = 6.6) reveal that total pore pressures approached lithostatic conditions, but, unexpectedly, after most of the strong motion ceased. Excess pore pressures were generated once horizontal acceleration exceeded a threshold value. PMID:17818846

  6. Nonlinear site response in medium magnitude earthquakes near Parkfield, California

    USGS Publications Warehouse

    Rubinstein, Justin L.

    2011-01-01

    Careful analysis of strong-motion recordings of 13 medium magnitude earthquakes (3.7 ≤ M ≤ 6.5) in the Parkfield, California, area shows that very modest levels of shaking (approximately 3.5% of the acceleration of gravity) can produce observable changes in site response. Specifically, I observe a drop and subsequent recovery of the resonant frequency at sites that are part of the USGS Parkfield dense seismograph array (UPSAR) and Turkey Flat array. While further work is necessary to fully eliminate other models, given that these frequency shifts correlate with the strength of shaking at the Turkey Flat array and only appear for the strongest shaking levels at UPSAR, the most plausible explanation for them is that they are a result of nonlinear site response. Assuming this to be true, the observation of nonlinear site response in small (M M 6.5 San Simeon earthquake and the 2004 M 6 Parkfield earthquake).

  7. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Among the many kinds of natural and man-made disasters, earthquakes dominate with regard to their social and economic impact on the urban environment. Global seismic risk from earthquakes is increasing steadily as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. Additionally, the uncontrolled growth of mega-cities in highly seismic areas around the world is often associated with the construction of seismically unsafe buildings and infrastructure, and is undertaken with insufficient knowledge of the regional seismicity and seismic hazard. The assessment of seismic hazard and the generation of earthquake scenarios are the first link in the prevention chain and the first step in the evaluation of seismic risk. Earthquake scenarios are intended as a basic input for developing detailed earthquake damage scenarios for cities and can be used in earthquake-safe town and infrastructure planning. The city of Sofia is the capital of Bulgaria. It is situated in the centre of the Sofia area, the most populated (more than 1.2 million inhabitants), industrial, and cultural region of Bulgaria, which faces considerable earthquake risk. The available historical documents prove the occurrence of destructive earthquakes during the 15th-18th centuries in the Sofia zone. In the 19th century the city of Sofia experienced two strong earthquakes: the 1818 earthquake with epicentral intensity I0=8-9 MSK and the 1858 earthquake with I0=9-10 MSK. During the 20th century the strongest event to occur in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK). Almost a century later (95 years), an earthquake of moment magnitude 5.6 (I0=7-8 MSK) hit the city of Sofia on May 22nd, 2012. In the present study, the deterministic scenario event is a damaging earthquake with a higher probability of occurrence that could affect the city with intensity less than or equal to VIII. Usable and realistic ground motion maps for urban areas are generated either from the assumption of a "reference earthquake" or directly, by showing values of macroseismic intensity generated by a damaging, real earthquake. In this study, applying a deterministic approach, an earthquake scenario in macroseismic intensity (a "model" earthquake scenario) is generated for the city of Sofia. The deterministic "model" intensity scenario, based on the assumption of a "reference earthquake", is compared with a scenario based on the observed macroseismic effects caused by the damaging 2012 earthquake (MW5.6). The difference between observed (Io) and predicted (Ip) intensity values is analyzed.

  8. Northridge, California earthquake of January 17, 1994: Performance of gas transmission pipelines. Technical report

    SciTech Connect

    O'Rourke, T.D.; Palmer, M.C.

    1994-05-16

    On January 17, 1994 at 4:31 a.m., a magnitude 6.6 earthquake struck the Los Angeles metropolitan area. Epicentered in the San Fernando Valley town of Northridge, California, the earthquake caused serious damage to buildings and sections of elevated freeways; ignited at least one hundred fires as it ruptured gas pipelines; and disrupted water supply systems. This reconnaissance report provides a performance analysis of gas transmission lines, both during this earthquake and during previous earthquakes, in Southern California.

  9. Cascadia Earthquake and Tsunami Scenario for California's North Coast

    NASA Astrophysics Data System (ADS)

    Dengler, L.

    2006-12-01

    In 1995 the California Division of Mines and Geology (now the California Geological Survey) released a planning scenario for an earthquake on the southern portion of the Cascadia subduction zone (CSZ). This scenario was the 8th and last of the Earthquake Planning Scenarios published by CDMG. It was the largest magnitude CDMG scenario, an 8.4 earthquake rupturing the southern 200 km of the CSZ, and it was the only scenario to include tsunami impacts. This scenario event has not occurred in historic times and depicts impacts far more severe than any recent earthquake. The local tsunami hazard is new; there is no written record of significant local tsunami impact in the region. The north coast scenario received considerable attention in Humboldt and Del Norte Counties and contributed to a number of mitigation efforts. The Redwood Coast Tsunami Work Group (RCTWG), an organization of scientists, emergency managers, government agencies, and businesses from Humboldt, Mendocino, and Del Norte Counties, was formed in 1996 to assist local jurisdictions in understanding the implications of the scenario and to promote a coordinated, consistent mitigation program. The group has produced print and video materials and promoted response and evacuation planning. Since 1997 the RCTWG has sponsored an Earthquake Tsunami Education Room at county fairs featuring preparedness information, hands-on exhibits and regional tsunami hazard maps. Since the development of the TsunamiReady Program in 2001, the RCTWG facilitates community TsunamiReady certification. To assess the effectiveness of mitigation efforts, five telephone surveys between 1993 and 2001 were conducted by the Humboldt Earthquake Education Center. A sixth survey is planned for this fall. Each survey includes between 400 and 600 respondents. Over the nine year period covered by the surveys, the percent with houses secured to foundations has increased from 58 to 80 percent, respondents aware of a local tsunami hazard increased from 51 to 73 percent and knowing what the Cascadia subduction zone is from 16 to 42 percent. It is not surprising that the earlier surveys showed increases as several strong earthquakes occurred in the area between 1992 and 1995 and there was considerable media attention. But the 2001 survey, seven years after the last widely felt event, still shows significant increases in almost all preparedness indicators. The 1995 CDMG scenario was not the sole reason for the increased interest in earthquake and tsunami hazards in the area, but the scenario gave government recognition to an event that was previously only considered seriously in the scientific community and has acted as a catalyst for mitigation and planning efforts.

  10. Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Landslides

    USGS Publications Warehouse

    Keefer, David K., (Edited By)

    1998-01-01

    Central California, in the vicinity of San Francisco and Monterey Bays, has a history of fatal and damaging landslides, triggered by heavy rainfall, coastal and stream erosion, construction activity, and earthquakes. The great 1906 San Francisco earthquake (MS=8.2-8.3) generated more than 10,000 landslides throughout an area of 32,000 km2; these landslides killed at least 11 people and caused substantial damage to buildings, roads, railroads, and other civil works. Smaller numbers of landslides, which caused more localized damage, have also been reported from at least 20 other earthquakes that have occurred in the San Francisco Bay-Monterey Bay region since 1838. Conditions that make this region particularly susceptible to landslides include steep and rugged topography, weak rock and soil materials, seasonally heavy rainfall, and active seismicity. Given these conditions and history, it was no surprise that the 1989 Loma Prieta earthquake generated thousands of landslides throughout the region. Landslides caused one fatality and damaged at least 200 residences, numerous roads, and many other structures. Direct damage from landslides probably exceeded $30 million; additional, indirect economic losses were caused by long-term landslide blockage of two major highways and by delays in rebuilding brought about by concern over the potential long-term instability of some earthquake-damaged slopes.

  11. The California Post-Earthquake Information Clearinghouse: A Plan to Learn From the Next Large California Earthquake

    NASA Astrophysics Data System (ADS)

    Loyd, R.; Walter, S.; Fenton, J.; Tubbesing, S.; Greene, M.

    2008-12-01

    In the rush to remove debris after a damaging earthquake, perishable data related to a wide range of impacts on the physical, built and social environments can be lost. The California Post-Earthquake Information Clearinghouse is intended to prevent this data loss by supporting the earth scientists, engineers, and social and policy researchers who will conduct fieldwork in the affected areas in the hours and days following the earthquake to study these effects. First called for by Governor Ronald Reagan following the destructive M6.5 San Fernando earthquake in 1971, the concept of the Clearinghouse has since been incorporated into the response plans of the National Earthquake Hazard Reduction Program (USGS Circular 1242). This presentation is intended to acquaint scientists with the purpose, functions, and services of the Clearinghouse. Typically, the Clearinghouse is set up in the vicinity of the earthquake within 24 hours of the mainshock and is maintained for several days to several weeks. It provides a location where field researchers can assemble to share and discuss their observations, plan and coordinate subsequent field work, and communicate significant findings directly to the emergency responders and to the public through press conferences. As the immediate response effort winds down, the Clearinghouse will ensure that collected data are archived and made available through "lessons learned" reports and publications that follow significant earthquakes. Participants in the quarterly meetings of the Clearinghouse include representatives from state and federal agencies, universities, NGOs and other private groups. Overall management of the Clearinghouse is delegated to the agencies represented by the authors above.

  12. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To talk about this phenomenon, seismologists coined three terms: foreshock, mainshock, and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks, while those that occur after the mainshock are called aftershocks. A mainshock will be redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be aftershocks if they are located within a characteristic distance from the mainshock. This distance is usually taken to be one or two times the length of the fault rupture associated with the mainshock. For example, if the mainshock ruptured a 100 km length of a fault, subsequent earthquakes up to 100-200 km away from the mainshock rupture would be considered aftershocks. The fault rupture length was approximately 15 km in the 1994 Northridge earthquake, and 430 km in the great 1906 earthquake.
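
    A minimal sketch of the aftershock-probability calculation described above, combining the modified Omori law with the Gutenberg-Richter relation in the style of Reasenberg and Jones (1989). The parameter values below are illustrative assumptions (roughly in line with published generic California values), not numbers taken from this fact sheet:

```python
# Sketch: probability of aftershocks above magnitude m in a time window after a
# mainshock, from a modified Omori rate combined with the Gutenberg-Richter relation.
# Parameter values (a, b, c, p) are illustrative assumptions.
import math

def aftershock_rate(t_days, m, mainshock_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Rate (per day) of aftershocks with magnitude >= m at time t after the mainshock."""
    return 10.0 ** (a + b * (mainshock_mag - m)) / (t_days + c) ** p

def expected_count(t1, t2, m, mainshock_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with magnitude >= m between t1 and t2 (days)."""
    k = 10.0 ** (a + b * (mainshock_mag - m))
    if abs(p - 1.0) < 1e-9:
        return k * (math.log(t2 + c) - math.log(t1 + c))
    return k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

def aftershock_probability(t1, t2, m, mainshock_mag, **kw):
    """Poisson probability of at least one aftershock with magnitude >= m in [t1, t2]."""
    return 1.0 - math.exp(-expected_count(t1, t2, m, mainshock_mag, **kw))

# Example: chance of an M >= 5 aftershock in the week following an M 6.7 mainshock.
print(round(aftershock_probability(0.0, 7.0, 5.0, 6.7), 2))
```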

  13. Spectral Element Moment Tensor Inversions for Earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Komatitsch, D.; Tromp, J.

    2003-12-01

    We have developed and implemented a Centroid Moment-Tensor (CMT) inversion procedure to determine source parameters for southern California earthquakes. The method is based upon spectral-element simulations of regional seismic wave propagation in a recently developed three-dimensional southern California model. Sensitivity to source parameters is determined by numerically calculating the Fréchet derivatives required for the CMT inversion. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions. The technique is applied to six recent southern California earthquakes: the September 9, 2001, Mw = 4.2 Hollywood event, the October 31, 2001, Mw = 4.9 Anza event, the September 3, 2002, Mw = 4.2 Yorba Linda event, the February 22, 2003, Mw = 5.2 Big Bear event and Mw = 4.5 Big Bear aftershock, and the July 15, 2003, Mw = 3.8 Lucerne Valley event. Using more than half of the available three-component data at periods of 6 seconds and longer, the focal mechanisms, locations, and magnitudes we obtain are in good agreement with estimates based upon classical body-wave, surface-wave, and first-motion inversions.
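
    For a fixed centroid location, synthetic seismograms are linear in the six moment-tensor elements, so once Fréchet derivatives have been computed from the spectral-element simulations, the inversion reduces to linear least squares, and a zero-trace constraint can be imposed with a Lagrange multiplier. The sketch below illustrates only that linear-algebra step: the matrix G is a random stand-in for numerically computed Fréchet derivatives, and the waveform-envelope misfit and station weighting used in the study are omitted.

```python
# Minimal sketch of a zero-trace moment-tensor inversion given precomputed Fréchet
# derivatives. G (n_samples x 6) holds the partial derivatives of the synthetics with
# respect to (Mxx, Myy, Mzz, Mxy, Mxz, Myz); here it is random as a stand-in.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 500
G = rng.normal(size=(n_samples, 6))                     # stand-in Fréchet derivative matrix
m_true = np.array([1.0, -0.4, -0.6, 0.3, -0.2, 0.5])    # a zero-trace "true" tensor
d = G @ m_true + 0.05 * rng.normal(size=n_samples)      # observed waveforms (with noise)

# Unconstrained least-squares solution.
A = G.T @ G
m_ls = np.linalg.solve(A, G.T @ d)

# Impose the zero-trace constraint c . m = 0 (c picks out Mxx + Myy + Mzz) with a
# Lagrange multiplier: m = m_ls - lambda * A^{-1} c.
c = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
Ainv_c = np.linalg.solve(A, c)
lam = (c @ m_ls) / (c @ Ainv_c)
m_zero_trace = m_ls - lam * Ainv_c

print("recovered (zero-trace):", np.round(m_zero_trace, 2))
print("trace:", round(float(c @ m_zero_trace), 6))
```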

  14. Physical model for earthquakes, 2. Application to southern California

    SciTech Connect

    Rundle, J.B.

    1988-06-10

    The purpose of this paper is to apply ideas developed in a previous paper to the construction of a detailed model for earthquake dynamics in southern California. The basis upon which the approach is formulated is that earthquakes are perturbations on, or more specifically fluctuations about, the long-term motions of the plates. This concept is made mathematically precise by means of a "fluctuation hypothesis," which states that all physical quantities associated with earthquakes can be expressed as integral expansions in a fluctuating quantity called the "offset phase." While in general, the frictional stick-slip properties of the complex, interacting faults should properly come out of the underlying physics, a simplification is made here, and a simple, spatially varying friction law is assumed. Together with the complex geometry of the major active faults, an assumed, spatially varying Earth rheology, the average rates of long-term offsets on all the major faults, and the friction coefficients, one can generate synthetic earthquake histories for comparison to the real data.

  15. ERTS Applications in earthquake research and mineral exploration in California

    NASA Technical Reports Server (NTRS)

    Abdel-Gawad, M.; Silverstein, J.

    1973-01-01

    Examples are presented showing that ERTS imagery can be effectively utilized to identify, locate, and map faults that show geomorphic evidence of geologically recent breakage. Several important faults not previously known have been identified. By plotting epicenters of historic earthquakes in parts of California, Sonora, Mexico, Arizona, and Nevada, we found that areas known for historic seismicity are often characterized by abundant evidence of recent fault and crustal movements. There are many examples of seismically quiet areas where outstanding evidence of recent fault movements is observed. One application is clear: ERTS-1 imagery could be effectively utilized to delineate areas susceptible to earthquake recurrence which, on the basis of seismic data alone, may be misleadingly considered safe. ERTS data can also be utilized in planning new sites in the geophysical network of fault movement monitoring and strain and tilt measurements.

  16. Earthquake epicenters and fault intersections in central and southern California

    NASA Technical Reports Server (NTRS)

    Abdel-Gawad, M. (Principal Investigator); Silverstein, J.

    1972-01-01

    The author has identified the following significant results. ERTS-1 imagery provided evidence for the existence of short transverse fault segments lodged between faults of the San Andreas system in the Coast Ranges, California. They indicate that an early episode of transverse shear affected the Coast Ranges prior to the establishment of the present San Andreas fault. The fault has been offset by transverse faults of the Transverse Ranges. It appears feasible to identify from ERTS-1 imagery geomorphic criteria of recent fault movements. Plots of historic earthquakes in the Coast Ranges and western Transverse Ranges show clusters in areas where structures are complicated by the interaction of two active fault systems. A fault lineament apparently not previously mapped was identified in the Uinta Mountains, Utah. Part of the lineament shows evidence of recent faulting that corresponds to a moderate earthquake cluster.

  17. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  18. Rupture directivity of moderate earthquakes in northern California

    USGS Publications Warehouse

    Seekins, Linda C.; Boatwright, John

    2010-01-01

    We invert peak ground velocity and acceleration (PGV and PGA) to estimate rupture direction and rupture velocity for 47 moderate earthquakes (3.5≤M≤5.4) in northern California. We correct sets of PGAs and PGVs recorded at stations at distances less than 55–125 km, depending on source depth, for site amplification and source–receiver distance, then fit the residual peak motions to the unilateral directivity function of Ben-Menahem (1961). We independently invert PGA and PGV. The rupture direction can be determined using as few as seven peak motions if the station distribution is sufficient. The rupture velocity is unstable, however, if there are no takeoff angles within 30° of the rupture direction. Rupture velocities are generally subsonic (0.5β–0.9β); for stability, we limit the rupture velocity to v=0.92β, the Rayleigh wave speed. For 73 of 94 inversions, the rupture direction clearly identifies one of the nodal planes as the fault plane. The 35 strike-slip earthquakes have rupture directions that range from nearly horizontal (6 events) to directly updip (5 events); the other 24 rupture partly along strike and partly updip. Two strike-slip earthquakes rupture updip in one inversion and downdip in the other. All but 1 of the 11 thrust earthquakes rupture predominantly updip. We compare the rupture directions for 10 M≥4.0 earthquakes to the relative location of the mainshock and the first two weeks of aftershocks. Spatial distributions of 8 of 10 aftershock sequences agree well with the rupture directivity calculated for the mainshock.
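
    A schematic of the kind of grid search involved, using a simplified first-order directivity amplification factor 1/(1 - (v_r/β) cos ψ) rather than the exact Ben-Menahem (1961) function fit in the study, and restricted to horizontal rupture directions for brevity; the station geometry, noise level, and parameter ranges are made-up assumptions:

```python
# Schematic grid search for rupture direction from azimuthally distributed peak motions,
# using an assumed first-order directivity factor 1 / (1 - (v_r/beta) * cos(psi)).
import numpy as np

def directivity_factor(station_az_deg, rupture_az_deg, v_over_beta):
    """Amplification at angle psi between the rupture direction and the station azimuth."""
    psi = np.radians(station_az_deg - rupture_az_deg)
    return 1.0 / (1.0 - v_over_beta * np.cos(psi))

rng = np.random.default_rng(2)
station_az = rng.uniform(0.0, 360.0, size=12)          # 12 stations around the source
true_az, true_ratio = 135.0, 0.8                       # "true" rupture azimuth, v_r/beta
log_peaks = np.log(directivity_factor(station_az, true_az, true_ratio))
log_peaks += 0.1 * rng.normal(size=station_az.size)    # distance/site-corrected residuals

best = None
for az in np.arange(0.0, 360.0, 1.0):                  # grid over rupture azimuth
    for ratio in np.arange(0.5, 0.92, 0.02):           # subshear rupture velocities
        pred = np.log(directivity_factor(station_az, az, ratio))
        misfit = np.sum((log_peaks - pred) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, az, ratio)

print(f"best-fit rupture azimuth {best[1]:.0f} deg, v_r/beta {best[2]:.2f}")
```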

  19. Subduction zone earthquake probably triggered submarine hydrocarbon seepage offshore Pakistan

    NASA Astrophysics Data System (ADS)

    Fischer, David; Mogollón, José M.; Strasser, Michael; Pape, Thomas; Bohrmann, Gerhard; Fekete, Noemi; Spiess, Volkhard; Kasten, Sabine

    2014-05-01

    Seepage of methane-dominated hydrocarbons is heterogeneous in space and time, and trigger mechanisms of episodic seep events are not well constrained. It is generally found that free hydrocarbon gas entering the local gas hydrate stability field in marine sediments is sequestered in gas hydrates. In this manner, gas hydrates can act as a buffer for carbon transport from the sediment into the ocean. However, the efficiency of gas hydrate-bearing sediments for retaining hydrocarbons may be compromised: hypothesized mechanisms include critical gas/fluid pressures beneath gas hydrate-bearing sediments, implying that these are susceptible to mechanical failure and subsequent gas release. Although gas hydrates often occur in seismically active regions, e.g., subduction zones, the role of earthquakes as potential triggers of hydrocarbon transport through gas hydrate-bearing sediments has hardly been explored. Based on a recent publication (Fischer et al., 2013), we present geochemical and transport/reaction-modelling data suggesting a substantial increase in upward gas flux and hydrocarbon emission into the water column following a major earthquake that occurred near the study sites in 1945. Calculating the formation time of authigenic barite enrichments identified in two sediment cores obtained from an anticlinal structure called "Nascent Ridge", we find they formed 38-91 years before sampling, which corresponds well to the time elapsed since the earthquake (62 years). Furthermore, applying a numerical model, we show that the local sulfate/methane transition zone shifted upward by several meters due to the increased methane flux, and simulated sulfate profiles very closely match measured ones in a comparable time frame of 50-70 years. We thus propose a causal relation between the earthquake and the amplified gas flux and present reflection seismic data supporting our hypothesis that co-seismic ground shaking induced mechanical fracturing of gas hydrate-bearing sediments, creating pathways for free gas to migrate from a shallow reservoir within the gas hydrate stability zone into the water column. Our results imply that free hydrocarbon gas trapped beneath a local gas hydrate seal was mobilized through earthquake-induced mechanical failure and in that way circumvented carbon sequestration within the sediment. These findings lead us to conclude that hydrocarbon seepage triggered by earthquakes can play a role in carbon budgets at other seismically active continental margins. The newly identified process presented in our study may also help interpret data from similar sites. Reference: Fischer, D., Mogollon, J.M., Strasser, M., Pape, T., Bohrmann, G., Fekete, N., Spieß, V. and Kasten, S., 2013. Subduction zone earthquake as potential trigger of submarine hydrocarbon seepage. Nature Geoscience 6: 647-651.

  20. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  1. Helium soil-gas variations associated with recent central California earthquakes: precursor or coincidence?

    USGS Publications Warehouse

    Reimer, G.M.

    1981-01-01

    Decreases in the helium concentration of soil-gas have been observed to precede six of eight recent central California earthquakes. Ten monitoring stations were established near Hollister, California, and along the San Andreas Fault to permit gas collection. The data showed decreases occurring a few weeks before the earthquakes, and concentrations returned to prequake levels either shortly before or after the earthquakes. -Author

  2. Virtual California: Earthquake Statistics, Surface Deformation Patterns, Surface Gravity Changes and InSAR Interferograms for Arbitrary Fault Geometries

    NASA Astrophysics Data System (ADS)

    Schultz, K.; Sachs, M. K.; Heien, E. M.; Rundle, J. B.; Fernandez, J.; Turcotte, D.; Donnellan, A.

    2014-12-01

    With the ever-increasing number of geodetic monitoring satellites, it is vital to have a variety of geophysical numerical simulators to produce sample/model datasets. Just as hurricane forecasts are derived from the consensus among multiple atmospheric models, earthquake forecasts cannot be derived from a single comprehensive model. Here we present the functionality of Virtual California, a numerical simulator that can generate sample surface deformations, surface gravity changes, and InSAR interferograms in addition to producing earthquake statistics and forecasts. Virtual California is a boundary element code designed to explore the seismicity of today's fault systems. For arbitrary input fault geometry, Virtual California can output simulated seismic histories of 50,000 years or more. Using co-seismic slips from the output data, we generate surface deformation maps, surface gravity change maps, and InSAR interferograms as viewed by an orbiting satellite. Furthermore, using the times between successive earthquakes we generate probability distributions and earthquake forecasts. Virtual California is now supported by the Computational Infrastructure for Geodynamics. The source code is available for download and it comes with a users' manual. The manual includes instructions on how to generate fault models from scratch, how to deploy the simulator across a parallel computing environment, etc. http://geodynamics.org/cig/software/vc/
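
    A minimal illustration of turning simulated inter-event times into a conditional (time-dependent) forecast, as described above; the Weibull-distributed synthetic intervals below stand in for times between successive events extracted from a long Virtual California history and are not actual model output:

```python
# Sketch: empirical conditional earthquake forecast from simulated inter-event times.
import numpy as np

rng = np.random.default_rng(3)
intervals = rng.weibull(2.0, size=50000) * 150.0   # synthetic inter-event times (years)

def conditional_probability(intervals, elapsed, horizon):
    """Empirical P(next event within `horizon` years | `elapsed` years since the last one)."""
    still_open = intervals > elapsed
    if not np.any(still_open):
        return float("nan")
    return np.mean(intervals[still_open] <= elapsed + horizon)

for elapsed in (0.0, 100.0, 200.0):
    p = conditional_probability(intervals, elapsed, horizon=30.0)
    print(f"{elapsed:5.0f} yr since last event -> 30-yr probability {p:.2f}")
```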

  3. Probability of a given-magnitude earthquake induced by a fluid injection

    NASA Astrophysics Data System (ADS)

    Shapiro, S. A.; Dinske, C.; Kummerow, J.

    2007-11-01

    Fluid injections in geothermic and hydrocarbon reservoirs induce small earthquakes (-3 < M < 2). Occasionally, however, earthquakes with larger magnitudes (M ~ 4) occur. We investigate magnitude distributions and show that, for a constant injection pressure, the probability of inducing an earthquake with a magnitude larger than a given value increases with injection time according to a bi-logarithmic law with a proportionality coefficient close to one. We find that the process of pressure diffusion in a poroelastic medium with randomly distributed sub-critical cracks obeying a Gutenberg-Richter relation well explains our observations. The magnitude distribution is mainly inherited from the statistics of pre-existing fracture systems. The number of earthquakes greater than a given magnitude also increases with the strength of the injection source and the tectonic activity of the injection site. Our formulation provides a way to estimate expected magnitudes of induced earthquakes. It can be used to avoid significant earthquakes by planning fluid injections accordingly.
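
    A sketch consistent with the behavior described above, assuming the number of induced events grows linearly with injection time at constant pressure and magnitudes follow a Gutenberg-Richter distribution; the rate constant and b-value below are illustrative assumptions, not values from the paper:

```python
# Sketch: probability of inducing an event above magnitude M during a fluid injection,
# with event count growing linearly in time and a Gutenberg-Richter magnitude law.
import math

def expected_events_above(mag, t_days, rate_m0_per_day=5.0, b=1.0):
    """Expected number of induced events with magnitude >= mag after t_days of injection."""
    # rate_m0_per_day is the assumed rate of events above magnitude 0.
    return rate_m0_per_day * t_days * 10.0 ** (-b * mag)

def probability_above(mag, t_days, **kw):
    """Poisson probability of at least one induced event with magnitude >= mag."""
    return 1.0 - math.exp(-expected_events_above(mag, t_days, **kw))

for t in (1, 10, 100):
    print(f"after {t:4d} days: P(M >= 2) = {probability_above(2.0, t):.3f}, "
          f"P(M >= 4) = {probability_above(4.0, t):.4f}")
```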

  4. Aftershocks and triggered events of the Great 1906 California earthquake

    USGS Publications Warehouse

    Meltzner, A.J.; Wald, D.J.

    2003-01-01

    The San Andreas fault is the longest fault in California and one of the longest strike-slip faults in the world, yet little is known about the aftershocks following the most recent great event on the San Andreas, the Mw 7.8 San Francisco earthquake on 18 April 1906. We conducted a study to locate and to estimate magnitudes for the largest aftershocks and triggered events of this earthquake. We examined existing catalogs and historical documents for the period April 1906 to December 1907, compiling data on the first 20 months of the aftershock sequence. We grouped felt reports temporally and assigned modified Mercalli intensities for the larger events based on the descriptions judged to be the most reliable. For onshore and near-shore events, a grid-search algorithm (derived from empirical analysis of modern earthquakes) was used to find the epicentral location and magnitude most consistent with the assigned intensities. For one event identified as far offshore, the event's intensity distribution was compared with those of modern events, in order to constrain the event's location and magnitude. The largest aftershock within the study period, an M ~6.7 event, occurred ~100 km west of Eureka on 23 April 1906. Although not within our study period, another M ~6.7 aftershock occurred near Cape Mendocino on 28 October 1909. Other significant aftershocks included an M ~5.6 event near San Juan Bautista on 17 May 1906 and an M ~6.3 event near Shelter Cove on 11 August 1907. An M ~4.9 aftershock occurred on the creeping segment of the San Andreas fault (southeast of the mainshock rupture) on 6 July 1906. The 1906 San Francisco earthquake also triggered events in southern California (including separate events in or near the Imperial Valley, the Pomona Valley, and Santa Monica Bay), in western Nevada, in southern central Oregon, and in western Arizona, all within 2 days of the mainshock. Of these triggered events, the largest were an M ~6.1 earthquake near Brawley and an M ~5.0 event under or near Santa Monica Bay, 11.3 and 31.3 hr after the San Francisco mainshock, respectively. The western Arizona event is inferred to have been triggered dynamically. In general, the largest aftershocks occurred at the ends of the 1906 rupture or away from the rupture entirely; very few significant aftershocks occurred along the mainshock rupture itself. The total number of large aftershocks was less than predicted by a generic model based on typical California mainshock-aftershock statistics, and the 1906 sequence appears to have decayed more slowly than average California sequences. Similarities can be drawn between the 1906 aftershock sequence and that of the 1857 (Mw 7.9) San Andreas fault earthquake.

  5. Earthquake early warning in California: Evaluating Hardware and Software

    NASA Astrophysics Data System (ADS)

    Hellweg, M.; Allen, R.; Boese, M.; Brown, H.; Cua, G.; Given, D.; Hauksson, E.; Heaton, T.; Jordan, T.; Khainovski, O.; Maechling, P.; Neuhauser, D.; Oppenheimer, D.; Solanki, K.; Zeleznik, M.

    2008-12-01

    Three earthquake early warning (EEW) algorithms are currently being evaluated within the California Integrated Seismic Network (CISN) with support from the US Geological Survey. The evaluation encompasses two aspects: their operation using data from throughout the state under real-time conditions, and assessment of their alerts at a web-accessible testing center. An EEW system rapidly detects the initiation of earthquakes and predicts their ground shaking. Its purpose is to provide warning of potentially damaging ground motion in a target region before the strong shaking arrives. One of the three algorithms implemented at the CISN data centers uses a single-station, or 'On-site', approach (Caltech). The other two, 'ElarmS' (UC Berkeley) and 'Virtual Seismologist' (VS, Caltech/ETH), are network-based. Although single-station alerts can be delivered more quickly than those from a network-based system, more of them tend to be false warnings. Network-based algorithms for EEW require that data be gathered at a central site for joint processing. The two network-based systems, ElarmS and VS, run 15 s behind real time in order to gather ~90% of station data before processing. The EEW alert testing center was developed by the Southern California Earthquake Center (SCEC). Results from the various algorithms are collected automatically. Automatically generated performance summaries allow the comparison of the EEW alerts with each other and with earthquakes within the region. Performance criteria include promptness of the alert, earthquake location, and magnitude. Provisions have also been made to assess false alerts, ground motion predictions, and uncertainties. In addition to evaluating the three algorithms in terms of separate and joint reliability, we review the needs of EEW with respect to instrumentation and data latency. Possible warning times will typically range from seconds to tens of seconds, and each second of delay means a decrease in the available warning time. Minimal latency is therefore important to warning systems. As testing progresses, we are formulating specifications for geophysical networks that can provide real-time data in a robust fashion.

  6. 1957 Gobi-Altay, Mongolia, earthquake as a prototype for southern California's most devastating earthquake

    USGS Publications Warehouse

    Bayarsayhan, C.; Bayasgalan, A.; Enhtuvshin, B.; Hudnut, K.W.; Kurushin, R.A.; Molnar, P.; Olziybat, M.

    1996-01-01

    The 1957 Gobi-Altay earthquake was associated with both strike-slip and thrust faulting, processes similar to those along the San Andreas fault and the faults bounding the San Gabriel Mountains just north of Los Angeles, California. Clearly, a major rupture either on the San Andreas fault north of Los Angeles or on the thrust faults bounding the Los Angeles basin poses a serious hazard to inhabitants of that area. By analogy with the Gobi-Altay earthquake, we suggest that simultaneous rupturing of both the San Andreas fault and the thrust faults nearer Los Angeles is a real possibility that amplifies the hazard posed by ruptures on either fault system separately.

  7. SCIGN; new Southern California GPS network advances the study of earthquakes

    USGS Publications Warehouse

    Hudnut, Ken; King, Nancy

    2001-01-01

    Southern California is a giant jigsaw puzzle, and scientists are now using GPS satellites to track the pieces. These puzzle pieces are continuously moving, slowly straining the faults in between. That strain is then eventually released in earthquakes. The innovative Southern California Integrated GPS Network (SCIGN) tracks the motions of these pieces over most of southern California with unprecedented precision. This new network greatly improves the ability to assess seismic hazards and quickly measure the larger displacements that occur during and immediately after earthquakes.

  8. Catalog of earthquakes along the San Andreas fault system in Central California: January-March, 1972

    USGS Publications Warehouse

    Wesson, R.L.; Bennett, R.E.; Meagher, K.L.

    1973-01-01

    Numerous small earthquakes occur each day in the Coast Ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period January - March, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b,c,d). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1,718 earthquakes in Central California. Of particular interest is a sequence of earthquakes in the Bear Valley area which contained single shocks with local magnitudes of 5.0 and 4.6. Earthquakes from this sequence make up roughly 66% of the total and are currently the subject of an interpretative study. Arrival times at 118 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 94 are telemetered stations operated by NCER. Readings from the remaining 24 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it, by describing the seismicity of a portion of central California in much greater detail.

  9. Catalog of earthquakes along the San Andreas fault system in Central California, April-June 1972

    USGS Publications Warehouse

    Wesson, R.L.; Bennett, R.E.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the coast ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period April - June, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b, c, d). A catalog for the first quarter of 1972 has been prepared by Wesson and others (1972). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 910 earthquakes in Central California. A substantial portion of the earthquakes reported in this catalog represents a continuation of the sequence of earthquakes in the Bear Valley area which began in February, 1972 (Wesson and others, 1972). Arrival times at 126 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 101 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it, by describing the seismicity of a portion of central California in much greater detail.

  10. Simulations of the 1906 San Francisco Earthquake and Scenario Earthquakes in Northern California

    NASA Astrophysics Data System (ADS)

    Larsen, S.; Dreger, D.; Dolenc, D.

    2006-12-01

    3-D simulations of seismic ground motions are performed to better characterize the 1906 San Francisco earthquake and to investigate the seismic consequences from scenario events in northern California. Specifically, we perform simulations of: 1) the 1906 earthquake, which bilaterally ruptured a 480-km segment of the San Andreas fault from San Juan Bautista to Cape Mendocino (epicenter a few kilometers off the coast of San Francisco); 2) large scenario San Andreas events with different epicentral locations; and 3) smaller scenario events along faults local to the San Francisco Bay Area. Simulations of the 1906 earthquake indicate that significant ground motion occurred up and down the northern California coast and out into the Central Valley. Comparisons between the simulated motions and observed data (e.g., shaking intensities, Boatwright and Bundock, 2005) suggest that the moment magnitude of this event was between M7.8 and M7.9. Simulations of 1906-like scenario events along the San Andreas fault reveal that ground motions in the San Francisco Bay Area and in the Sacramento Delta region would be significantly stronger for earthquakes initiating along the northern section of the fault and rupturing to the southeast. Simulations of smaller scenario events in the San Francisco Bay Area indicate areas of concentrated shaking. These simulations are performed using a recently developed 3-D geologic model of northern California (Brocher and Thurber, 2005; Jachens et al., 2005), together with finite-difference codes (E3D and a new public domain package). The effects of topography and attenuation are included. The full computational domain spans most of the geologic model and is 630 × 320 × 50 km³. The minimum S-wave velocity is constrained to 500 m/s, except in water. Frequencies up to 1.0 Hz are modeled. The grid spacing ranges from 75 m to 200 m. High-performance supercomputers are used for the simulations, which include models of over 23 billion grid nodes using 2000 processors. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  11. Bridge pier failure probabilities under combined hazard effects of scour, truck and earthquake. Part II: failure probabilities

    NASA Astrophysics Data System (ADS)

    Liang, Zach; Lee, George C.

    2013-06-01

    In many regions of the world, a bridge will experience multiple extreme hazards during its expected service life. The current American Association of State Highway and Transportation Officials (AASHTO) load and resistance factor design (LRFD) specifications are formulated based on failure probabilities, which are fully calibrated for dead load and non-extreme live loads. Design against earthquake load effect is established separately. Design against scour effect is also formulated separately by using the concept of capacity reduction (or increased scour depth). Furthermore, scour effect cannot be linked directly to an LRFD limit state equation because the latter is formulated using force-based analysis. This paper (in two parts) presents a probability-based procedure to estimate the combined hazard effects on bridges due to truck, earthquake and scour, by treating the effect of scour as an equivalent load effect so that it can be included in reliability-based failure calculations. In Part I of this series, the general principle for treating the scour depth as an equivalent load effect is presented. In Part II, the corresponding bridge failure probability, the occurrence of scour, and the probability of simultaneously having both truck load and equivalent scour load effect are quantitatively discussed. The key formulae of the conditional partial failure probabilities and the necessary conditions are established. In order to illustrate the methodology, an example of dead, truck, earthquake and scour effects on a simple bridge pile foundation is presented.
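
    A minimal sketch of the bookkeeping implied by this two-part methodology: conditional partial failure probabilities are combined with the occurrence probabilities of the individual and joint hazards via the law of total probability. All numbers below are made-up placeholders, not values from the papers:

```python
# Sketch: combining conditional partial failure probabilities with hazard occurrence
# probabilities for a single bridge pier limit state. All values are illustrative.

# Annual occurrence probabilities of the hazard combinations considered
# (truck live load alone, earthquake alone, scour alone, and joint occurrences).
occurrence = {
    "truck":              0.9,
    "earthquake":         0.02,
    "scour":              0.05,
    "truck+scour":        0.045,
    "earthquake+scour":   0.001,
    "truck+earthquake":   0.002,
}

# Conditional probability of pier failure given that the combination occurs.
failure_given_hazard = {
    "truck":              1.0e-6,
    "earthquake":         5.0e-4,
    "scour":              2.0e-5,
    "truck+scour":        8.0e-5,
    "earthquake+scour":   4.0e-3,
    "truck+earthquake":   1.5e-3,
}

# Total annual failure probability by the law of total probability (hazard
# combinations treated as mutually exclusive for this illustration).
p_fail = sum(occurrence[k] * failure_given_hazard[k] for k in occurrence)
print(f"approximate annual failure probability: {p_fail:.2e}")
```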

  12. Bridge pier failure probabilities under combined hazard effects of scour, truck and earthquake. Part I: occurrence probabilities

    NASA Astrophysics Data System (ADS)

    Liang, Zach; Lee, George C.

    2013-06-01

    In many regions of the world, a bridge will experience multiple extreme hazards during its expected service life. The current American Association of State Highway and Transportation Officials (AASHTO) load and resistance factor design (LRFD) specifications are formulated based on failure probabilities, which are fully calibrated for dead load and nonextreme live loads. Design against earthquake loads is established separately. Design against scour effect is also formulated separately by using the concept of capacity reduction (or increased scour depth). Furthermore, scour effect cannot be linked directly to an LRFD limit state equation, because the latter is formulated using force-based analysis. This paper (in two parts) presents a probability-based procedure to estimate the combined hazard effects on bridges due to truck, earthquake and scour, by treating the effect of scour as an equivalent load effect so that it can be included in reliability-based bridge failure calculations. In Part I of this series, the general principle of treating the scour depth as an equivalent load effect is presented. The individual and combined partial failure probabilities due to truck, earthquake and scour effects are described. To explain the method of including non-force-based natural hazards effects, two types of common scour failures are considered. In Part II, the corresponding bridge failure probability, the occurrence of scour as well as simultaneously having both truck load and equivalent scour load are quantitatively discussed.

  13. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant change in earthquake probability? To answer that question, the effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus, revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
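
    One common way to fold a static stress step into a renewal-model probability is the clock-change approximation, in which the elapsed time since the last earthquake is advanced by the stress change divided by the stressing rate. The sketch below uses a lognormal renewal model and assumed numbers (the 0.1 MPa step against a 0.01 MPa/yr stressing rate corresponds to the 10:1 ratio discussed above); it illustrates only one of the methodologies examined in the paper:

```python
# Sketch: effect of a static stress step on a renewal-model earthquake probability via
# the "clock change" approximation. The lognormal model and all numbers are assumptions.
import math
from scipy.stats import lognorm

mean_recurrence = 200.0    # years
aperiodicity = 0.5         # coefficient of variation of the recurrence distribution
elapsed = 120.0            # years since the last earthquake
horizon = 30.0             # forecast window (years)
stressing_rate = 0.01      # MPa/yr, assumed tectonic loading rate
stress_step = 0.1          # MPa, static stress change from a nearby earthquake

# Lognormal with the given mean and coefficient of variation.
sigma = math.sqrt(math.log(1.0 + aperiodicity ** 2))
scale = mean_recurrence / math.sqrt(1.0 + aperiodicity ** 2)   # median
dist = lognorm(s=sigma, scale=scale)

def conditional_probability(t_elapsed):
    """P(event within `horizon` years | no event in the first t_elapsed years)."""
    return (dist.cdf(t_elapsed + horizon) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

p_before = conditional_probability(elapsed)
p_after = conditional_probability(elapsed + stress_step / stressing_rate)  # clock advance
print(f"30-yr probability before stress step: {p_before:.3f}")
print(f"30-yr probability after  stress step: {p_after:.3f}")
```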

  14. Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey

    USGS Publications Warehouse

    Parsons, T.

    2004-01-01

    New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M≥7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30-year probability of an M ≥ 7 earthquake affecting Istanbul. The 30-year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.

  15. Spatial patterns of aftershocks of shallow focus earthquakes in California and implications for deep focus earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    1989-01-01

    Previous workers have pioneered statistical techniques to study the spatial distribution of aftershocks with respect to the focal mechanism of the main shock. Application of these techniques to deep focus earthquakes failed to show clustering of aftershocks near the nodal planes of the main shocks. To better understand the behaviour of these statistics, this study applies them to the aftershocks of six large shallow focus earthquakes in California (August 6, 1979, Coyote Lake; May 2, 1983, Coalinga; April 24, 1984, Morgan Hill; August 4, 1985, Kettleman Hills; July 8, 1986, North Palm Springs; and October 1, 1987, Whittier Narrows). The large number of aftershocks accurately located by dense local networks allows us to treat these aftershock sequences individually instead of combining them, as was done for the deep earthquakes. The results for individual sequences show significant clustering about the closest nodal plane and the strike direction for five of the sequences and about the presumed fault plane for all six sequences. This implies that the previously developed method does work properly. The reasons for the lack of clustering about main shock nodal planes for deep focus aftershocks are discussed. -from Author

  16. Three-dimensional tomography of the 1992 southern California earthquake sequence: Constraints on dynamic earthquake rupture?

    NASA Astrophysics Data System (ADS)

    Lees, Jonathan M.; Nicholson, Craig

    1993-05-01

    Tomographic inversion of P-wave arrival times from aftershocks of 1992 southern California earthquakes is used to produce three-dimensional images of subsurface velocity. The preliminary 1992 data set, augmented by the 1986 M 5.9 North Palm Springs sequence, consists of 6,458 high-quality events recorded by the permanent regional network—providing 76,306 raypaths for inversion. The target area consisted of a 104 × 104 × 32 km³ volume divided into 52 × 52 × 10 rectilinear blocks. Significant velocity perturbations appear to correlate with rupture properties of recent major earthquakes. Preliminary results indicate that a low-velocity anomaly separates the dynamic rupture of the M 6.5 Big Bear event from the M 7.4 Landers main shock; a similar low-velocity region separates the M 6.1 Joshua Tree sequence from the Landers rupture. High-velocity anomalies occur at or near nucleation sites of all four recent main shocks (North Palm Springs-Joshua Tree-Landers-Big Bear). A high-velocity anomaly is present along the San Andreas fault between 5 and 12 km depth through San Gorgonio Pass; this high-velocity area may define an asperity where stress is concentrated and where future large earthquakes may begin.

  17. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes the use of best-estimate values of the probability distribution of different magnitude earthquakes recurring from a fault, taken from literature sources. Our study aims to apply this model to the Taipei metropolitan area with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77%, and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years at 13.54%, and within 300 years at 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population of experiencing a potentially damaging magnitude 6 or greater earthquake within their lifetime is just as great as the risk of suffering from heart disease or other health ailments.
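
    The quoted magnitude 6 probabilities can be reproduced with the standard Poisson occurrence probability, 1 - exp(-t/T), using the 543-year return period; the magnitude 7 values additionally reflect the magnitude weighting in the authors' revised model, which this check does not attempt to reproduce:

```python
# Checking the Poisson occurrence probabilities quoted above: with a 543-year return
# period, P(at least one event in t years) = 1 - exp(-t / T).
import math

return_period = 543.0   # years
for t in (20, 79, 300):
    p = 1.0 - math.exp(-t / return_period)
    print(f"P(at least one event within {t:3d} yr) = {100 * p:.2f}%")
# Prints ~3.62%, 13.54%, 42.45%, matching (to rounding) the magnitude 6 values quoted
# in the abstract (3.61%, 13.54%, 42.45%).
```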

  18. Earthquake clusters in southern California I: Identification and stability

    NASA Astrophysics Data System (ADS)

    Zaliapin, Ilya; Ben-Zion, Yehuda

    2013-06-01

    We use recent results on statistical analysis of seismicity to present a robust method for comprehensive detection and analysis of earthquake clusters. The method is based on nearest-neighbor distances of events in space-time-energy domain. The method is applied to a 1981-2011 relocated seismicity catalog of southern California having 111,981 events with magnitudes m ≥ 2 and corresponding synthetic catalogs produced by the Epidemic Type Aftershock Sequence (ETAS) model. Analysis of the ETAS model demonstrates that the cluster detection results are accurate and stable with respect to (1) three numerical parameters of the method, (2) variations of the minimal reported magnitude, (3) catalog incompleteness, and (4) location errors. Application of the method to the observed catalog separates the 111,981 examined earthquakes into 41,393 statistically significant clusters comprised of foreshocks, mainshocks, and aftershocks. The results reproduce the essential known statistical properties of earthquake clusters, which provide overall support for the proposed technique. In addition, systematic analysis with our method allows us to detect several new features of seismicity that include (1) existence of a significant population of single-event clusters, (2) existence of foreshock activity in natural seismicity that exceeds expectation based on the ETAS model, and (3) dependence of all cluster properties, except area, on the magnitude difference of events from mainshocks but not on their absolute values. The classification of detected clusters into several major types, generally corresponding to singles, burst-like and swarm-like sequences, and correlations between different cluster types and geographic locations is addressed in a companion paper.
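
    The central quantity in this approach is the nearest-neighbor proximity in the space-time-magnitude domain, eta_ij = t_ij * (r_ij)^d_f * 10^(-b*m_i) for a candidate parent i and later child j; the toy catalog below and the parameter values (d_f = 1.6, b = 1.0, commonly used choices) are illustrative assumptions, not the catalog or calibration used in the study:

```python
# Sketch: nearest-neighbor proximities used to link earthquakes into clusters.
# eta_ij = dt * r**d_f * 10**(-b * m_i), with dt in years, r in km, m_i = parent magnitude.
import numpy as np

d_f, b = 1.6, 1.0

# Toy catalog: time (years), x (km), y (km), magnitude.
catalog = np.array([
    [0.00,   0.0,  0.0, 5.5],
    [0.01,   2.0,  1.0, 3.2],
    [0.05,   5.0, -2.0, 3.8],
    [0.40,  80.0, 60.0, 4.1],
    [0.41,  81.0, 61.0, 2.9],
])

def nearest_neighbor(catalog):
    """For each event, return the index of its nearest neighbor (parent) and eta."""
    t, x, y, m = catalog.T
    parents, etas = [], []
    for j in range(len(catalog)):
        best_i, best_eta = -1, np.inf
        for i in range(len(catalog)):
            dt = t[j] - t[i]
            if dt <= 0:           # parents must occur earlier
                continue
            r = np.hypot(x[j] - x[i], y[j] - y[i])
            eta = dt * (r ** d_f) * 10.0 ** (-b * m[i])
            if eta < best_eta:
                best_i, best_eta = i, eta
        parents.append(best_i)
        etas.append(best_eta)
    return parents, etas

parents, etas = nearest_neighbor(catalog)
for j, (i, eta) in enumerate(zip(parents, etas)):
    print(f"event {j}: parent {i}, eta = {eta:.3e}")
```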

  19. Inventory of landslides triggered by the 1994 Northridge, California earthquake

    USGS Publications Warehouse

    Harp, Edwin L.; Jibson, Randall W.

    1995-01-01

    The 17 January 1994 Northridge, California, earthquake (M=6.7) triggered more than 11,000 landslides over an area of about 10,000 km². Most of the landslides were concentrated in a 1,000-km² area that includes the Santa Susana Mountains and the mountains north of the Santa Clara River valley. We mapped landslides triggered by the earthquake in the field and from 1:60,000-scale aerial photography provided by the U.S. Air Force and taken the morning of the earthquake; these were subsequently digitized and plotted in a GIS-based format, as shown on the accompanying maps (which also are accessible via Internet). Most of the triggered landslides were shallow (1-5 m), highly disrupted falls and slides in weakly cemented Tertiary to Pleistocene clastic sediment. Average volumes of these types of landslides were less than 1,000 m³, but many had volumes exceeding 100,000 m³. Many of the larger disrupted slides traveled more than 50 m, and a few moved as far as 200 m from the bases of steep parent slopes. Deeper (>5 m) rotational slumps and block slides numbered in the hundreds, a few of which exceeded 100,000 m³ in volume. The largest triggered landslide was a block slide having a volume of 8×10⁶ m³. Triggered landslides damaged or destroyed dozens of homes, blocked roads, and damaged oil-field infrastructure. Analysis of landslide distribution with respect to variations in (1) landslide susceptibility and (2) strong shaking recorded by hundreds of instruments will form the basis of a seismic landslide hazard analysis of the Los Angeles area.

  20. Formulation and Application of a Physically-Based Rupture Probability Model for Large Earthquakes on Subduction Zones: A Case Study of Earthquakes on Nazca Plate

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Galgana, G.; Shen-Tu, B.; Klein, E.; Pontbriand, C. W.

    2014-12-01

    Most time-dependent rupture probability (TDRP) models are designed for a single-mode rupture, i.e., a single characteristic earthquake on a fault. However, most subduction zones rupture in complex patterns that create overlapping earthquakes of different magnitudes. Additionally, the limited historic earthquake data do not provide sufficient information to estimate reliable mean recurrence intervals for earthquakes. This makes it difficult to identify a single characteristic earthquake for TDRP analysis. Physical models based on geodetic data have been successfully used to obtain information on the state of coupling and slip deficit rates for subduction zones. Coupling information provides valuable insight into the complexity of subduction zone rupture processes. In this study we present a TDRP model that is formulated based on the subduction zone slip deficit rate distribution. A subduction zone is represented by an integrated network of cells. Each cell ruptures multiple times from numerous earthquakes that have overlapping rupture areas. The rate of rupture for each cell is calculated using a moment balance concept that is calibrated against historic earthquake data. This information, in conjunction with estimates of coseismic slip from past earthquakes, is used to formulate time-dependent rupture probability models for cells. Earthquakes on the subduction zone and their rupture probabilities are calculated by integrating different combinations of cells. The resulting rupture probability estimates are fully consistent with the state of coupling of the subduction zone and with the regional and local earthquake history, as the model takes into account the impact of all large (M>7.5) earthquakes on the subduction zone. The granular rupture model developed in this study allows estimation of rupture probabilities for large earthquakes other than just a single characteristic-magnitude earthquake. This provides a general framework for formulating physically based rupture probability models for large earthquakes on subduction zones that is consistent with their true locking state and earthquake history. We will present the formulation of the proposed model and its application to the Nazca plate subduction zone.
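
    The moment-balance idea can be illustrated for a single cell: if the cell accumulates slip deficit at a known rate and past earthquakes released a known average coseismic slip there, long-term balance implies an event rate for that cell, from which an occurrence probability follows. The sketch below uses assumed numbers and a simple Poisson probability; the study's calibration against the historic catalog and its integration of cells into whole ruptures are not reproduced:

```python
# Sketch: moment/slip balance for one subduction-zone cell. All numbers are assumptions.
import math

slip_deficit_rate = 0.04   # m/yr, from a geodetically constrained coupling model
coseismic_slip = 8.0       # m, average slip released on this cell per large earthquake

rate = slip_deficit_rate / coseismic_slip        # implied events per year on this cell
mean_recurrence = 1.0 / rate

for horizon in (30, 50, 100):
    p_poisson = 1.0 - math.exp(-rate * horizon)  # time-independent occurrence probability
    print(f"mean recurrence {mean_recurrence:.0f} yr; "
          f"P(rupture within {horizon:3d} yr) = {p_poisson:.2f}")
```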

  1. Catalog of earthquakes along the San Andreas fault system in Central California, July-September 1972

    USGS Publications Warehouse

    Wesson, R.L.; Meagher, K.L.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the coast ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period July - September, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b, c, d). Catalogs for the first and second quarters of 1972 have been prepared by Wesson and others (1972 a & b). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1254 earthquakes in Central California. Arrival times at 129 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 104 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB), the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it, by describing the seismicity of a portion of central California in much greater detail.

  2. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    NASA Astrophysics Data System (ADS)

    Yilmaz, Şeyda; Bayrak, Erdem; Bayrak, Yusuf

    2016-04-01

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable, homogeneous earthquake catalogue for the period 1900-2015 and magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distributions for this region.
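
    Once a distribution has been fit to the inter-event times, the conditional probability of an earthquake within the next dt years, given that t years have elapsed since the last one, is [F(t+dt) - F(t)] / [1 - F(t)]. The sketch below uses a two-parameter Weibull distribution with assumed shape and scale values, not the parameters estimated in the study:

```python
# Sketch: conditional earthquake probability from a Weibull fit to inter-event times.
import math

shape, scale = 1.5, 12.0   # assumed Weibull parameters (inter-event times in years)

def weibull_cdf(t):
    return 1.0 - math.exp(-((t / scale) ** shape))

def conditional_probability(elapsed, dt):
    """P(event within dt years | no event during the first `elapsed` years)."""
    return (weibull_cdf(elapsed + dt) - weibull_cdf(elapsed)) / (1.0 - weibull_cdf(elapsed))

for elapsed in (0, 5, 10, 20):
    print(f"elapsed {elapsed:2d} yr -> P(event within 5 yr) = "
          f"{conditional_probability(elapsed, 5.0):.2f}")
```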

  3. Tsunami Hazard in Crescent City, California from Kuril Islands earthquakes

    NASA Astrophysics Data System (ADS)

    Dengler, L.; Uslu, B.; Barberopoulou, A.

    2007-12-01

    On November 15, 2006, Crescent City in Del Norte County, California, was hit by a series of tsunami surges generated by the M = 8.3 Kuril Islands earthquake, causing an estimated 9.7 million US dollars in damage to the small boat basin. This was the first significant tsunami loss on US territory since the 1964 Alaska tsunami. The damage occurred nearly 8 hours after the official tsunami alert bulletins had been cancelled. The tsunami caused no flooding and did not exceed the ambient high tide level. All of the damage was caused by strong currents, estimated at 12 to 15 knots, causing the floating docks to be pinned against the pilings and water to flow over them. The event highlighted problems in warning criteria and communications for a marginal event with the potential for only localized impacts, the vulnerability of harbors to a relatively modest tsunami, and the particular exposure of the Crescent City harbor area to tsunamis. It also illustrated local officials' poor understanding of the duration of tsunami hazard. As a result of the November tsunami, interim changes were made by WCATWC to address localized hazards in areas like Crescent City. On January 13, 2007, when an M = 8.1 earthquake occurred in the Kuril Islands, a formal procedure was in place for hourly conference calls between WCATWC, California State Office of Emergency Services officials, local Weather Service offices, and local emergency officials, significantly improving the decision making process and the communication among the federal, state, and local officials. Kuril Island tsunamis are relatively common at Crescent City. Since 1963, five tsunamis generated by Kuril Island earthquakes have been recorded on the Crescent City tide gauge, two with amplitudes greater than 0.5 m. We use the MOST model to simulate the 2006, 2007, and 1994 events and to examine the difference between damaging and non-damaging events at Crescent City. Small changes in the angle of the rupture zone can result in a half-meter difference in water heights. We also use the FACTS server to examine the contribution of fault segments along the Kuril subduction zone and to identify the potentially most damaging source regions for Crescent City. A rupture similar in size to the November 15 event, located farther south along the Hokkaido-Honshu section of the subduction zone, is likely to produce a slightly larger amplitude signal with an even greater delay between the first wave arrivals and the largest waves.

  4. Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation

    USGS Publications Warehouse

    Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.

    2000-01-01

    We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years, and we test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
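
    A minimal sketch of the kind of renewal-model arithmetic involved, not the authors' calculation: the stress transferred by a nearby earthquake is represented as an equivalent "clock advance" on the receiver fault, and the conditional probability is computed from a Brownian passage time (inverse Gaussian) distribution. All parameter values below are illustrative placeholders.

```python
# A minimal sketch under stated assumptions: renewal-model conditional
# probability with a stress-transfer "clock advance". Numbers are placeholders.
from scipy import stats

def conditional_probability(elapsed_yr, window_yr, mean_yr, aperiodicity):
    """P(rupture within window_yr | quiet for elapsed_yr) for a Brownian passage
    time (inverse Gaussian) renewal model with the given mean and aperiodicity."""
    lam = mean_yr / aperiodicity**2                    # inverse-Gaussian shape parameter
    dist = stats.invgauss(aperiodicity**2, scale=lam)  # mean = mean_yr, lambda = lam
    survive_now = dist.sf(elapsed_yr)
    survive_later = dist.sf(elapsed_yr + window_yr)
    return (survive_now - survive_later) / survive_now

mean_recurrence = 250.0   # yr, hypothetical
aperiodicity = 0.5        # hypothetical
elapsed = 200.0           # yr since the last large rupture, hypothetical
clock_advance = 20.0      # yr equivalent of the transferred stress, hypothetical

print("30-yr probability, no interaction:",
      round(conditional_probability(elapsed, 30.0, mean_recurrence, aperiodicity), 2))
print("30-yr probability, with clock advance:",
      round(conditional_probability(elapsed + clock_advance, 30.0, mean_recurrence, aperiodicity), 2))
```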

  5. Simultaneous estimation of b-values and detection rates of earthquakes for the application to aftershock probability forecasting

    NASA Astrophysics Data System (ADS)

    Katsura, K.; Ogata, Y.

    2004-12-01

    Reasenberg and Jones [Science, 1989, 1994] proposed aftershock probability forecasting based on the joint distribution [Utsu, J. Fac. Sci. Hokkaido Univ., 1970] of the modified Omori formula for aftershock decay and the Gutenberg-Richter law of magnitude frequency, where the respective parameters are estimated by the maximum likelihood method [Ogata, J. Phys. Earth, 1983; Utsu, Geophys. Bull. Hokkaido Univ., 1965; Aki, Bull. Earthq. Res. Inst., 1965]. The public forecast has been implemented by the responsible agencies in California and Japan. However, a considerable difficulty with the above procedure is that, owing to the contamination of arriving seismic waves, the detection rate of aftershocks is extremely low during the period immediately after the main shock, say during the first day, when forecasting is most critical for the public in the affected area. Therefore, for forecasting a probability during such a period, the agencies adopt a generic model with a set of standard parameter values for California or Japan. For an effective and realistic estimation, we propose to utilize the statistical model introduced by Ogata and Katsura [Geophys. J. Int., 1993] for the simultaneous estimation of the b-value of the Gutenberg-Richter law together with the detection rate (probability) of earthquakes in each magnitude band from the data of all detected events, where both parameters are allowed to change in time. Thus, by using all detected aftershocks from the beginning of the period, we can estimate the underlying modified Omori rate of both detected and undetected events and their b-value changes, taking the time-varying missing rates of events into account. A similar computation is applied to the ETAS model for complex aftershock activity or regional seismicity where substantial missing events are expected immediately after a large aftershock or another strong earthquake in the vicinity. Demonstrations of the present procedure will be shown for recent examples in Japan.
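
    For orientation, here is a minimal sketch of the generic Reasenberg-Jones style forecast referred to above, not the agencies' operational code: the expected number of aftershocks above a target magnitude is obtained by integrating a modified-Omori rate scaled by the Gutenberg-Richter relation, then converted to a probability under a Poisson assumption. The parameter values (a, b, c, p) are illustrative placeholders for the "generic" values.

```python
# A minimal sketch, not the operational forecasting code.
import math

def aftershock_probability(mainshock_m, target_m, t1_days, t2_days,
                           a=-1.67, b=0.91, c=0.05, p=1.08):
    """P(>=1 aftershock with M >= target_m in [t1, t2] days after the mainshock),
    assuming a non-homogeneous Poisson process with rate
    lambda(t) = 10**(a + b*(mainshock_m - target_m)) * (t + c)**(-p)."""
    productivity = 10.0 ** (a + b * (mainshock_m - target_m))
    if abs(p - 1.0) < 1e-9:
        omori_integral = math.log((t2_days + c) / (t1_days + c))
    else:
        omori_integral = ((t2_days + c) ** (1.0 - p) - (t1_days + c) ** (1.0 - p)) / (1.0 - p)
    expected_count = productivity * omori_integral
    return 1.0 - math.exp(-expected_count)

# Example: probability of an M >= 5 aftershock during the first day after an M 6.5 mainshock.
print(f"P = {aftershock_probability(6.5, 5.0, t1_days=0.0, t2_days=1.0):.2f}")
```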

  6. What to Expect from the Virtual Seismologist: Delay Times and Uncertainties of Initial Earthquake Alerts in California

    NASA Astrophysics Data System (ADS)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Racine, R.; Meier, M.; Cauzzi, C.

    2013-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland, western Greece and Istanbul. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, Romania, and Iceland are planned or underway. The possible use cases for an EEW system will be determined by the speed and reliability of earthquake source parameter estimates. A thorough understanding of both is therefore essential to evaluate the usefulness of VS. For California, we present state-wide theoretical alert times for hypothetical earthquakes by analyzing time delays introduced by the different components in the VS EEW system. Taking advantage of the fully probabilistic formulation of the VS algorithm we further present an improved way to describe the uncertainties of every magnitude estimate by evaluating the width and shape of the probability density function that describes the relationship between waveform envelope amplitudes and magnitude. We evaluate these new uncertainty values for past seismicity in California through off-line playbacks and compare them to the previously defined static definitions of uncertainty based on real-time detections. Our results indicate where VS alerts are most useful in California and also suggest where most effective improvements to the VS EEW system can be made.

  7. LLNL earthquake impact analysis committee report on the Livermore, California, earthquakes of January 24 and 26, 1980

    SciTech Connect

    Not Available

    1980-07-15

    The overall effects of the earthquakes of January 24 and 26, 1980, at the Lawrence Livermore National Laboratory in northern California are outlined. The damage caused by those earthquakes and how employees responded are discussed. The immediate emergency actions taken by management and the subsequent measures to resume operations are summarized. Long-range plans for recovery and repair, the seismic history of the Livermore Valley region, various investigations concerning the design-basis earthquake (DBE), and seismic criteria for structures are reviewed. Following an analysis of the Laboratory's earthquake preparedness, emergency response, and related matters, a series of conclusions and recommendations is presented. Appendixes provide additional information, such as persons interviewed, seismic and site maps, and a summary of the estimated costs incurred from the earthquakes.

  8. Earthquake alarm; operating the seismograph station at the University of California, Berkeley.

    USGS Publications Warehouse

    Stump, B.

    1980-01-01

    At the University of California seismographic stations, the task of locating and determining magnitudes for both local and distant earthquakes is a continuous one. Teleseisms must be located rapidly so that events that occur in the Pacific can be identified and the Pacific Tsunami Warning System alerted. For great earthquakes anywhere, there is a responsibility to notify public agencies such as the California Office of Emergency Services, the Federal Disaster Assistance Administration, the Earthquake Engineering Research Institute, the California Seismic Safety Commission, and the American Red Cross. In the case of damaging local earthquakes, it is also necessary to alert the California Department of Water Resources, the California Division of Mines and Geology, the U.S. Army Corps of Engineers, the Federal Bureau of Reclamation, and the Bay Area Rapid Transit. These days, any earthquakes that are felt in northern California cause immediate inquiries from the news media and an interested public. The series of earthquakes that jolted the Livermore area from January 24 to 26, 1980, is a good case in point.

  9. Earthquake prediction research at the Seismological Laboratory, California Institute of Technology

    USGS Publications Warehouse

    Spall, H.

    1979-01-01

    Nevertheless, basic earthquake-related information has always been of consuming interest to the public and the media in this part of California (fig. 2). So it is not surprising that earthquake prediction continues to be a significant research program at the laboratory. Several of the current spectrum of projects related to prediction are discussed below.

  10. Unacceptable Risk: Earthquake Hazard Mitigation in One California School District. Hazard Mitigation Case Study.

    ERIC Educational Resources Information Center

    California State Office of Emergency Services, Sacramento.

    Earthquakes are a perpetual threat to California's school buildings. School administrators must be aware that hazard mitigation means much more than simply having a supply of water bottles in the school; it means getting everyone involved in efforts to prevent tragedies from occurring in school buildings in the event of an earthquake. The PTA in…

  11. Earthquake probability at the Kashiwazaki Kariwa nuclear power plant, Japan, assessed using bandwidth optimization

    NASA Astrophysics Data System (ADS)

    Connor, C. B.; Connor, L. J.

    2007-12-01

    On July 16, 2007, a strong 6.8 magnitude earthquake occurred on Japan's west coast, rocking the nearby Kashiwazaki Kariwa nuclear power plant, the largest nuclear power station on Earth. Shaking during this event produced ground accelerations of ~680 gal, exceeding the plant seismic design specification of 273 gal. This occurrence renews concerns regarding seismic hazards at nuclear facilities located in regions with persistent earthquake activity. Seismic hazard assessments depend upon an understanding of the spatial distribution of earthquakes to effectively assess future earthquake hazards. Earthquake spatial density is best estimated using kernel density functions based on the locations of past seismic events. Two longstanding problems encountered when using kernel density estimation are the selection of an optimal smoothing bandwidth and the quantification of the uncertainty inherent in these estimates. Currently, kernel bandwidths are often selected subjectively and the uncertainty in spatial density estimation is not calculated. As a result, hazards with potentially large consequences for society are poorly estimated. We solve these two problems by employing an optimal bandwidth selector algorithm to objectively identify an appropriately sized kernel bandwidth based on earthquake locations from catalog databases and by assessing uncertainty in the spatial density estimate using a modified smoothed bootstrap technique. After applying these methods to the Kashiwazaki Kariwa site, the calculated probability of one or more Mw 6-7 earthquakes within 10 km of the site during a 40 yr facility lifetime is between 0.005 and 0.02 with 95 percent confidence. This result is made more robust by calculating similar probabilities using alternative databases of earthquake locations and magnitudes. The objectivity and quantitative robustness of these techniques make them extremely beneficial for seismic hazard assessment.
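
    A minimal sketch under stated assumptions, not the authors' method: it selects a Gaussian kernel bandwidth for epicenter locations by cross-validated likelihood (a stand-in for the optimal bandwidth selector and smoothed bootstrap described above), evaluates the spatial density at a site, and converts a hypothetical regional rate into a Poisson probability of one or more events within 10 km over a 40-year lifetime. The catalog, coordinates, and rate are synthetic.

```python
# A minimal sketch, not the authors' bandwidth-optimization code.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
epicenters_km = rng.normal(loc=[0.0, 0.0], scale=[40.0, 25.0], size=(300, 2))  # synthetic catalog (km)
site_km = np.array([[15.0, 5.0]])                                              # hypothetical site location

# Choose the Gaussian-kernel bandwidth by maximizing the cross-validated likelihood.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(2.0, 40.0, 20)}, cv=5)
grid.fit(epicenters_km)
kde = grid.best_estimator_

density_per_km2 = np.exp(kde.score_samples(site_km))[0]  # fraction of events per km^2
regional_rate = 0.2          # hypothetical regional rate of M 6-7 events per year
lifetime_yr = 40.0
area_km2 = np.pi * 10.0**2   # 10 km radius around the site
expected = regional_rate * lifetime_yr * density_per_km2 * area_km2
print(f"bandwidth = {kde.bandwidth:.1f} km, P(>=1 event) = {1 - np.exp(-expected):.3f}")
```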

  12. Current Development at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.; Clayton, R. W.

    2005-12-01

    Over the past year, the SCEDC completed or is near completion of three featured projects: Station Information System (SIS) Development: The SIS will provide users with an interface into complete and accurate station metadata for all current and historic data at the SCEDC. The goal of this project is to develop a system that can interact with a single database source to enter, update, and retrieve station metadata easily and efficiently. The system will provide accurate station/channel information for active stations to the SCSN real-time processing system, as well as station/channel information for stations that have parametric data at the SCEDC, i.e., for users retrieving data via STP. Additionally, the SIS will supply information required to generate dataless SEED and COSMOS V0 volumes and allow stations to be added to the system with a minimal but incomplete set of information using predefined defaults that can be easily updated as more information becomes available. Finally, the system will facilitate statewide metadata exchange for both real-time processing and provide a common approach to CISN historic station metadata. Moment Tensor Solutions: The SCEDC is currently archiving and delivering moment magnitudes and moment tensor solutions (MTS) produced by the SCSN in real time, as well as post-processing solutions for events spanning back to 1999. The automatic MTS runs on all local events with magnitudes > 3.0 and all regional events > 3.5. The distributed solution automatically creates links from all USGS Simpson Maps to a text e-mail summary solution, creates a .gif image of the solution, and updates the moment tensor database tables at the SCEDC. Searchable Scanned Waveforms Site: The Caltech Seismological Lab has made available 12,223 scanned images of pre-digital analog recordings of major earthquakes recorded in Southern California between 1962 and 1992 at http://www.data.scec.org/research/scans/. The SCEDC has developed a searchable web interface that allows users to search the available files, select multiple files for download, and then retrieve a zipped file containing the results. Scanned images of paper records for M>3.5 southern California earthquakes and several significant teleseisms are available for download via the SCEDC through this search tool.

  13. THE GREAT SOUTHERN CALIFORNIA SHAKEOUT: Earthquake Science for 22 Million People

    NASA Astrophysics Data System (ADS)

    Jones, L.; Cox, D.; Perry, S.; Hudnut, K.; Benthien, M.; Bwarie, J.; Vinci, M.; Buchanan, M.; Long, K.; Sinha, S.; Collins, L.

    2008-12-01

    Earthquake science is being communicated to and used by the 22 million residents of southern California to improve resiliency to future earthquakes through the Great Southern California ShakeOut. The ShakeOut began when the USGS partnered with the California Geological Survey, the Southern California Earthquake Center, and many other organizations to bring 300 scientists and engineers together to formulate a comprehensive description of a plausible major earthquake, released in May 2008 as the ShakeOut Scenario, a description of the impacts and consequences of a M7.8 earthquake on the Southern San Andreas Fault (USGS OFR2008-1150). The Great Southern California ShakeOut was a week of special events featuring the largest earthquake drill in United States history. The ShakeOut drill occurred in houses, businesses, and public spaces throughout southern California at 10 AM on November 13, 2008, when southern Californians were asked to pretend that the M7.8 scenario earthquake had occurred and to practice actions that could reduce the impact on their lives. Residents, organizations, schools, and businesses registered to participate in the drill through www.shakeout.org, where they could get accessible information about the scenario earthquake and share ideas for better preparation. As of September 8, 2008, over 2.7 million confirmed participants had been registered. The primary message of the ShakeOut is that what we do now, before a big earthquake, will determine what our lives will be like after. The goal of the ShakeOut has been to change the culture of earthquake preparedness in southern California, making earthquakes a reality that is regularly discussed. This implements the sociological finding that 'milling,' discussing a problem with loved ones, is a prerequisite to taking action. ShakeOut milling is taking place at all levels, from individuals and families to corporations and governments. Actions taken as a result of the ShakeOut include the adoption of earthquake response technologies by the Los Angeles Unified School District and a top-to-bottom examination of the Los Angeles County Fire Department's earthquake response strategies.

  14. Liquefaction at Oceano, California, during the 2003 San Simeon earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Noce, T.E.; Bennett, M.J.; Tinsley, J. C., III; Rosenberg, L.I.

    2005-01-01

    The 2003 M 6.5 San Simeon, California, earthquake caused liquefaction-induced lateral spreading at Oceano at an unexpectedly large distance from the seismogenic rupture. We conclude that the liquefaction was caused by ground motion that was enhanced both by rupture directivity in the mainshock and by local site amplification in unconsolidated fine-grained deposits. Liquefaction occurred in sandy artificial fill and in undisturbed eolian sand and fluvial deposits. The largest and most damaging lateral spread was caused by liquefaction of artificial fill; the head of this lateral spread coincided with the boundary between the artificial fill and undisturbed eolian sand deposits. Values of the liquefaction potential index at liquefaction sites were, in general, greater than 5, the threshold value that has been proposed for liquefaction hazard mapping. Although the mainshock ground motion at Oceano was not recorded, peak ground acceleration was estimated to range from 0.25 to 0.28 g on the basis of the liquefaction potential index and aftershock recordings. These estimates fall within the range of peak ground acceleration values associated with the modified Mercalli intensity of VII reported at the U.S. Geological Survey (USGS) "Did You Feel It?" web site.
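
    As a point of reference for the threshold mentioned above, here is a minimal sketch of an Iwasaki-style liquefaction potential index, not the study's computation: (1 - FS) is integrated over the upper 20 m with the depth weight w(z) = 10 - 0.5 z wherever the factor of safety FS is below 1. The FS-versus-depth profile below is a made-up placeholder.

```python
# A minimal sketch of the liquefaction potential index (LPI), illustrative only.
import numpy as np

depth_m = np.arange(0.5, 20.0, 0.5)              # evaluation depths (m)
fs = np.clip(0.6 + 0.04 * depth_m, None, 2.0)    # hypothetical factor-of-safety profile

dz = 0.5
severity = np.where(fs < 1.0, 1.0 - fs, 0.0)     # contributes only where FS < 1
weight = 10.0 - 0.5 * depth_m                    # Iwasaki depth weighting w(z)
lpi = float(np.sum(severity * weight * dz))
print(f"LPI = {lpi:.1f}  (values > 5 flagged as liquefaction-prone in the text above)")
```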

  15. Error propagation in time-dependent probability of occurrence for characteristic earthquakes in Italy

    NASA Astrophysics Data System (ADS)

    Peruzza, Laura; Pace, Bruno; Cavallini, Fabio

    2010-01-01

    Time-dependent models for seismic hazard and earthquake probabilities are at the leading edge of current research. In the framework of a 2-year national Italian project (2005-2007), we applied the Brownian passage time (BPT) renewal model to the recently released Database of Individual Seismogenic Sources (DISS) to compute earthquake probabilities for the period 2007-2036. Observed interevent times on faults in Italy are wholly insufficient to characterize the recurrence time, so we derived mean recurrence intervals indirectly. To estimate the uncertainty of the results, we resorted to the theory of error propagation with respect to the main parameters: magnitude and slip rate. The main issue was the high variability of slip rate, which could hardly be reduced by exploiting geodetic constraints. We performed validation tests, and interesting considerations were derived from seismic moment budgeting on the historical earthquake catalog. In a time-dependent perspective, i.e., when the date of the last event is known, only 10-15% of the 115 sources exhibit a probability of a characteristic earthquake in the next 30 years higher than the equivalent Poissonian probability. If we accept the Japanese conventional choice of a probability threshold greater than 3% in 30 years to define highly probable sources, the most prone sources are mainly intermediate faults with characteristic M < 6 and elapsed times of 0.7-1.2 times the recurrence interval. The number of highly probable sources rises as the aperiodicity coefficient increases (from 14 sources for variable α ranging between 0.22 and 0.36 to 31 of 115 sources for an α value fixed at 0.7). On the other hand, in stationary time-independent approaches, more than two thirds of all sources are considered probabilistically prone to an impending earthquake. The tests show the influence of the variability of the aperiodicity factor in the BPT renewal model on the absolute probability values; however, its influence on the relative ranking of sources is small. Future developments should give priority to a more accurate determination of the date of the last seismic event for a few seismogenic sources in the DISS catalog and to a careful check on the applicability of a purely characteristic model.
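
    A minimal sketch with illustrative numbers, not the DISS parameter set: first-order (delta-method) propagation of the uncertainty in the mean recurrence interval Tr = M0 / (mu * A * s) with respect to the characteristic magnitude and the slip rate, the two parameters highlighted above; the Hanks-Kanamori relation M0 = 10^(1.5 Mw + 9.1) N m is assumed.

```python
# A minimal sketch of delta-method error propagation, illustrative values only.
import math

mu = 3.0e10                           # shear modulus (Pa), assumed
area = 20.0e3 * 12.0e3                # hypothetical fault area (m^2)
mw, sigma_mw = 6.3, 0.2               # characteristic magnitude +/- 1 sigma, hypothetical
slip_rate, sigma_s = 1.0e-3, 0.4e-3   # slip rate +/- 1 sigma (m/yr), hypothetical

m0 = 10.0 ** (1.5 * mw + 9.1)         # seismic moment (N*m)
tr = m0 / (mu * area * slip_rate)     # mean recurrence interval (yr)

# dTr/dMw = 1.5*ln(10)*Tr and dTr/ds = -Tr/s, so relative variances add in quadrature.
relative_variance = (1.5 * math.log(10.0) * sigma_mw) ** 2 + (sigma_s / slip_rate) ** 2
sigma_tr = tr * math.sqrt(relative_variance)
print(f"Tr = {tr:.0f} +/- {sigma_tr:.0f} yr")
```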

  16. Operational Earthquake Forecasting and Decision-Making in a Low-Probability Environment

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; the International Commission on Earthquake Forecasting for Civil Protection

    2011-12-01

    Operational earthquake forecasting (OEF) is the dissemination of authoritative information about the time dependence of seismic hazards to help communities prepare for potentially destructive earthquakes. Most previous work on the public utility of OEF has anticipated that forecasts would deliver high probabilities of large earthquakes; i.e., deterministic predictions with low error rates (false alarms and failures-to-predict) would be possible. This expectation has not been realized. An alternative to deterministic prediction is probabilistic forecasting based on empirical statistical models of aftershock triggering and seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains in excess of 100 relative to long-term forecasts. The utility of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing OEF in this sort of "low-probability environment." The need to move more quickly has been underscored by recent seismic crises, such as the 2009 L'Aquila earthquake sequence, in which an anxious public was confused by informal and inaccurate earthquake predictions. After the L'Aquila earthquake, the Italian Department of Civil Protection appointed an International Commission on Earthquake Forecasting (ICEF), which I chaired, to recommend guidelines for OEF utilization. Our report (Ann. Geophys., 54, 4, 2011; doi: 10.4401/ag-5350) concludes: (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and need to convey epistemic uncertainties. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. (c) All operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing against long-term forecasts and alternative time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in probabilistic seismic hazard analysis. (e) Alert procedures should be standardized to facilitate decisions at different levels of government, based in part on objective analysis of costs and benefits. (f) In establishing alert protocols, consideration should also be given to the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that lead to informal predictions and misinformation. Formal OEF procedures based on probabilistic forecasting appropriately separate hazard estimation by scientists from the decision-making role of civil protection authorities. The prosecution of seven Italian scientists on manslaughter charges stemming from their actions before the L'Aquila earthquake makes clear why this separation should be explicit in defining OEF protocols.

  17. Estimated ground motion from the 1994 Northridge, California, earthquake at the site of interstate 10 and La Cienega Boulevard bridge collapse, West Los Angeles, California

    USGS Publications Warehouse

    Boore, D.M.; Gibbs, J.F.; Joyner, W.B.; Tinsley, J.C.; Ponti, D.J.

    2003-01-01

    We have estimated ground motions at the site of a bridge collapse during the 1994 Northridge, California, earthquake. The estimated motions are based on correcting motions recorded during the mainshock 2.3 km from the collapse site for the relative site response of the two sites. Shear-wave slownesses and damping based on analysis of borehole measurements at the two sites were used in the site response analysis. We estimate that the motions at the collapse site were probably larger, by factors ranging from 1.2 to 1.6, than at the site at which the ground motion was recorded, for periods less than about 1 sec.

  18. Paleoclimate Constraints on Channel Incision and Earthquake Slip on the San Andreas Fault, Carrizo Plain, California

    NASA Astrophysics Data System (ADS)

    Noriega, G. R.; Grant-Ludwig, L.; Akciz, S.; Arrowsmith, R.

    2008-12-01

    Laterally offset channels are one of the main geomorphological features commonly used to measure slip rates and to estimate the magnitude of paleoearthquakes and their recurrence intervals. Difficulty in constraining the age of channel incision, however, generally introduces large uncertainties into the calculations. We combine paleoclimate data from southern California with paleoseismologic data from the Carrizo section of the San Andreas Fault (SAF) to better interpret the age of some of the ephemeral stream channels that are offset by paleoearthquakes that ruptured this section of the SAF. In this study, we assume that runoff from precipitation is the main driving force of channel incision in the Carrizo Plain, and by identifying the extreme wet and dry periods from readily available climate data, particularly precipitation data, we narrow down the timing of initiation of channel incision. This allows us to examine the rate of channel incision across the San Andreas fault and compare it with the rate of seismic events. We identified several extreme climate events (wet years) as likely dates of channel incision in the Carrizo Plain. The findings provide new constraints on channel incision in the Carrizo Plain. At the Bidart Fan site we propose that a channel (NW channel) that has been offset 14.6 to 18.4 m could have been incised during an extreme wet year in either A.D. 1366 or A.D. 1418. A second channel (SE channel) that is offset 7 or 8 m was probably incised in A.D. 1642. Grant and Sieh (1994) indicated that the 14.6 to 18.4 m offset of the NW channel is due to the 1857 earthquake and one or two previous earthquakes. Grant and Sieh (1994) inferred that the 7 or 8 m offset of the SE channel is due to the 1857 earthquake alone. The new climate-inferred dates of channel incision, along with earthquake dates from Akciz et al. (submitted), suggest that the NW channel may have been offset by three or four earthquakes and that the SE channel has been offset by the 1857 earthquake and possibly by a previous one.

  19. California Earthquake Clearinghouse Activation for August 24, 2014, M6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Rosinski, A.; Parrish, J.; Mccrink, T. P.; Tremayne, H.; Ortiz, M.; Greene, M.; Berger, J.; Blair, J. L.; Johnson, M.; Miller, K.; Seigel, J.; Long, K.; Turner, F.

    2014-12-01

    The Clearinghouse's principal functions are to 1) coordinate field investigations of earth scientists, engineers, and other participating researchers; 2) facilitate sharing of observations through regular meetings and through the Clearinghouse website; and 3) notify disaster responders of crucial observations or results. Shortly after 3:20 a.m. on August 24, 2014, the Clearinghouse management committee organizations, the California Geological Survey (CGS), the Earthquake Engineering Research Institute (EERI), the United States Geological Survey (USGS), the California Office of Emergency Services (CalOES), and the California Seismic Safety Commission (CSSC), authorized activation of a virtual Clearinghouse and a physical Clearinghouse location. The California Geological Survey, which serves as the permanent lead coordination organization for the Clearinghouse, coordinated with the state for all resources required for Clearinghouse activation. The Clearinghouse physical location, including a mobile satellite communications truck, was opened at a Caltrans maintenance facility located at 3161 Jefferson Street in Napa. This location remained active through August 26, 2014, during which time it drew the participation of over 100 experts from more than 40 different organizations, plus over 1730 remote visitors via the virtual Clearinghouse and the online data compilation map. The Clearinghouse conducted three briefing calls each day with the State Operations Center (SOC) and Clearinghouse partners, and also conducted nightly briefings with field personnel, accessible to remote participants via WebEx. Data collected by field researchers were compiled into a map through the efforts of EERI and USGS volunteers in the Napa Clearinghouse. EERI personnel continued to update the compilation map over an extended period following deactivation of the Clearinghouse. In addition, EERI managed the Clearinghouse website. Two overflights were conducted: one on August 24 for reconnaissance, with a scientist and an engineer, and one on August 25 to collect high-resolution still-frame imagery. Following deactivation of the Clearinghouse, a multi-agency, state-federal cost-sharing agreement was reached to acquire airborne LiDAR of the region affected by surface fault rupture.

  20. Collaborative Projects at the Northern California Earthquake Data Center (NCEDC)

    NASA Astrophysics Data System (ADS)

    Neuhauser, D.; Oppenheimer, D.; Zuzlewski, S.; Gee, L.; Murray, M.; Bassett, A.; Prescott, W.; Romanowicz, B.

    2001-12-01

    The NCEDC is a joint project of the UC Berkeley Seismological Laboratory and the USGS Menlo Park to provide a long-term archive and distribution center for geophysical data for northern California. Most data are available via the Web at http://quake.geo.berkeley.edu and research accounts are available for access to specialized datasets. Current efforts continue to expand the available datasets and to enhance distribution methods. The NCEDC currently archives continuous and event seismic waveform data from the BDSN and the USGS NCSN. Data from the BDSN are available in SEED and work is underway to make NCSN data available in this format. This massive project requires assembling and tracking the instrument responses from over 5000 current and historic NCSN data channels. Event waveforms from specialized networks, such as Geysers and Parkfield, are also available. In collaboration with the USGS, the NCEDC has archived a total of 887 channels from 139 sites of the "USGS low-frequency" geophysical network (UL), including data from strainmeters, creep meters, magnetometers, water well levels, and tiltmeters. There are 486 current data channels being updated at the NCEDC on a daily basis. All UL data are available in SEED. Data from the BARD network of over 40 continuously recording GPS sites are archived at the NCEDC in both raw and RINEX format. The NCEDC is now the primary archive for survey-mode GPS and other geodetic data collected in northern California by the USGS, universities, and other agencies. All of the BARD data and GPS data archived from USGS Menlo Park surveys are now available from the NCEDC via FTP. To support more portable and uniform data query programs among data centers, the NCEDC developed a set of Generic Data Center Views (GDVs) that incorporates the basic information that most datacenters maintain about data channels, instrument responses, and waveform inventory. We defined MSQL (Meta SeismiQuery Language), a query language based on the SQL SELECT command, to perform queries on the GDVs, and developed a program which converts the MSQL to an SQL request. MSQL2SQL converts the MSQL command into a parse tree, and defines an API allowing each datacenter to traverse the parse tree and revise it to produce a data center-specific SQL request. The NCEDC converted the IRIS SeismiQuery program to use the GDVs and MSQL, installed it at the NCEDC, and distributed the software to IRIS, SCEC-DC, and other interested parties. The resulting program should be much easier to install and support at other data centers. The NCEDC is also working on several data center integration projects in order to provide users with seamless access to data. The NCEDC is collaborating with IRIS on the NETDC project and with UNAVCO on the GPS Seamless Archive Centers initiative. Through the newly formed California Integrated Seismic Network, we are working with the SCEC-DC to provide unified access to California earthquake data.

  1. Occurrence probability assessment of earthquake-triggered landslides with Newmark displacement values and logistic regression: The Wenchuan earthquake, China

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Song, Chongzhen; Lin, Qigen; Li, Juan

    2016-04-01

    The Newmark displacement model has been used to predict earthquake-triggered landslides. Logistic regression (LR) is also a common landslide hazard assessment method. We combined the Newmark displacement model and LR and applied them to Wenchuan County and Beichuan County in China, which were affected by the Ms 8.0 Wenchuan earthquake of May 12, 2008, to develop a mechanism-based landslide occurrence probability model and improve predictive accuracy. A total of 1904 landslide sites in Wenchuan County and 3800 random non-landslide sites were selected as the training dataset. We applied the Newmark model and obtained the distribution of permanent displacement (Dn) on a 30 × 30 m grid. Four factors (Dn, topographic relief, and distances to drainages and roads) were used as independent variables for LR. A combined model was then obtained, with an AUC (area under the curve) value of 0.797 for Wenchuan County. A total of 617 landslide and non-landslide sites in Beichuan County were used as a validation dataset, giving AUC = 0.753. The proposed method may also be applied to earthquake-induced landslides in other regions.
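
    A minimal sketch on synthetic data, not the Wenchuan dataset or the authors' code: a Newmark-displacement predictor (Dn) is combined with other factors in a logistic regression and scored with the area under the ROC curve, mirroring the workflow described above. The covariates, labels, and coefficients are fabricated for illustration.

```python
# A minimal sketch: logistic regression on Dn plus other factors, scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
dn = rng.exponential(5.0, n)                 # Newmark displacement (cm), synthetic
relief = rng.uniform(0.0, 400.0, n)          # topographic relief (m), synthetic
dist_drainage = rng.uniform(0.0, 2000.0, n)  # distance to drainages (m), synthetic
dist_road = rng.uniform(0.0, 3000.0, n)      # distance to roads (m), synthetic
X = np.column_stack([dn, relief, dist_drainage, dist_road])

# Synthetic labels: landslide odds increase with Dn and relief, decrease with distance.
logit = 0.25 * dn + 0.004 * relief - 0.001 * dist_drainage - 0.0005 * dist_road - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC = {auc:.3f}")
```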

  2. Liquefaction caused by the 2009 Olancha, California (USA), M5.2 earthquake

    USGS Publications Warehouse

    Holzer, T.L.; Jayko, A.S.; Hauksson, E.; Fletcher, J.P.B.; Noce, T.E.; Bennett, M.J.; Dietel, C.M.; Hudnut, K.W.

    2010-01-01

    The October 3, 2009 (01:16:00 UTC), Olancha M 5.2 earthquake caused extensive liquefaction as well as permanent horizontal ground deformation within a 1.2 km² area of Owens Valley in eastern California (USA). Such liquefaction is rarely observed during earthquakes of M ≤ 5.2. We conclude that subsurface conditions, not unusual ground motion, were the primary factors contributing to the liquefaction. The liquefaction occurred in very liquefiable sands at shallow depth (< 2 m) in an area where the water table was near the land surface. Our investigation is relevant to both geotechnical engineering and geology. The standard engineering method for assessing liquefaction potential, the Seed–Idriss simplified procedure, successfully predicted the liquefaction despite the small earthquake magnitude. The field observations of liquefaction effects highlight a need for caution by earthquake geologists when inferring prehistoric earthquake magnitudes from paleoliquefaction features, because small-magnitude events may cause such features.
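
    For context on the procedure named above, a minimal sketch in textbook form with illustrative numbers: the cyclic stress ratio CSR = 0.65 (a_max/g)(sigma_v/sigma'_v) r_d is compared against a cyclic resistance ratio (CRR) to give a factor of safety. The depth-reduction coefficient and CRR value are placeholders, not site data from Olancha.

```python
# A minimal sketch of the Seed-Idriss style factor-of-safety check, illustrative only.
def factor_of_safety(a_max_g, sigma_v, sigma_v_eff, r_d, crr):
    """FS = CRR / CSR with CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d."""
    csr = 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d
    return crr / csr

# Hypothetical shallow, saturated loose sand under modest shaking (cf. the M 5.2 case above).
fs = factor_of_safety(a_max_g=0.2, sigma_v=30.0, sigma_v_eff=20.0, r_d=0.98, crr=0.08)
print(f"FS = {fs:.2f}  (FS < 1 indicates liquefaction is expected)")
```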

  3. Mapping the source of the 1983 Mw 6.5 Coalinga thrust earthquake (California)

    NASA Astrophysics Data System (ADS)

    Meunier, Patrick; Marc, Odin; Hovius, Niels

    2013-04-01

    We have recently shown that the density patterns of co-seismic landslides associated with large thrust earthquakes can be used to map the area of maximum slip on the fault plane (Meunier et al., 2013), arguing that, once adjusted for site effects, landslide distributions can supplement or replace instrumental records of earthquakes. We have applied our method to the 1983 Mw 6.5 Coalinga thrust earthquake (California). At the time of the main shock, the epicentral area of this earthquake was not covered by the dense network of accelerometers that has been installed since. Consequently, the slip model, inverted from leveling cross-sections and teleseismic data, is poorly constrained in comparison with those of the recent large thrust earthquakes we have studied. We discuss the inversion of the source of this earthquake and compare its localization with the one proposed by Stein and Ekstrom (1992).

  4. Real-time forecasts of tomorrow's earthquakes in California: a new mapping tool

    USGS Publications Warehouse

    Gerstenberger, Matt; Wiemer, Stefan; Jones, Lucy

    2004-01-01

    We have derived a multi-model approach to calculate time-dependent earthquake hazard resulting from earthquake clustering. This report explains the theoretical background behind the approach, the specific details that are used in applying the method to California, as well as the statistical testing used to validate the technique. We have implemented our algorithm as a real-time tool that has been automatically generating short-term hazard maps for California since May 2002, at http://step.wr.usgs.gov

  5. UAVSAR and GPS Observations of Crustal Deformation in Southern California and Implications for Earthquake Risk

    NASA Astrophysics Data System (ADS)

    Donnellan, A.; Parker, J. W.; Lyzenga, G. A.; Rundle, J. B.; Grant Ludwig, L.; Granat, R. A.; Glasscoe, M. T.; Heflin, M. B.

    2010-12-01

    The 2010 El Mayor-Cucapah earthquake was the first earthquake to be observed with UAVSAR. UAVSAR observations, GPS time series analysis, and simulations suggest that the fault that ruptured in the earthquake is coupled to the Elsinore, San Jacinto, and San Andreas faults to the north. GPS and UAVSAR observations indicate a zone of shear that extends southward from the Big Bend of the San Andreas fault near Gorman through the San Fernando Valley towards the Newport-Inglewood fault. The zone steps over to the region of the Elsinore or San Jacinto faults, though the partitioning of strain between the two faults is not as clear. State changes in GPS time series data fall in line with the shear zone through the San Fernando Valley and extend northward from the El Mayor-Cucapah earthquake rupture. Seismicity hotspots also indicate elevated earthquake hazard near the San Fernando Valley and in the Inland Empire near the Elsinore and San Jacinto faults. Inversions of GPS velocity vectors favor a fault underlying the shear zone extending from the Big Bend to the Newport-Inglewood fault over substantial slip on the San Andreas fault north of Los Angeles. Virtual California simulations of southern California are being analyzed for fault activity associated with the identified shear zone and for subsequent earthquakes that may be related to El Mayor-Cucapah type earthquakes in Baja.

  6. Changes in static stress on southern California faults after the 1992 Landers earthquake

    USGS Publications Warehouse

    Harris, R.A.; Simpson, R.W.

    1992-01-01

    The magnitude 7.5 Landers earthquake of 28 June 1992 was the largest earthquake to strike California in 40 years. The slip that occurs in such an earthquake would be expected to induce large changes in the static stress on neighbouring faults; these changes in stress should in turn affect the likelihood of future earthquakes. Stress changes that load faults towards failure have been cited as the cause of small [1-5], moderate [6] and large [7] earthquakes; conversely, those that relax neighbouring faults have been related to a decrease in seismicity [5]. Here we use an elastic half-space model [8] to estimate the stress changes produced by the Landers earthquake on selected southern California faults, including the San Andreas. We find that the estimated stress changes are consistent with the triggering of four out of the five aftershocks with magnitude greater than 4.5, and that the largest changes (1-10 bar), occurring on part of the San Bernardino segment of the San Andreas fault, may have decreased the time to the next magnitude 8 earthquake by about 14 years.
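
    A minimal sketch of the Coulomb stress resolution step, not the study's elastic half-space calculation: a static stress-change tensor is resolved onto a receiver fault to form dCFF = d(tau_slip) + mu' d(sigma_n) (tension positive, so positive dCFF loads the receiver toward failure). The stress tensor, receiver geometry, and effective friction are illustrative placeholders; computing the stress change from a dislocation source is the part this sketch omits.

```python
# A minimal sketch: resolve a stress-change tensor onto a receiver fault plane.
import numpy as np

def fault_vectors(strike, dip, rake):
    """Unit normal and slip vectors in north-east-down coordinates (Aki & Richards convention)."""
    phi, delta, lam = np.radians([strike, dip, rake])
    n = np.array([-np.sin(delta) * np.sin(phi), np.sin(delta) * np.cos(phi), -np.cos(delta)])
    s = np.array([np.cos(lam) * np.cos(phi) + np.cos(delta) * np.sin(lam) * np.sin(phi),
                  np.cos(lam) * np.sin(phi) - np.cos(delta) * np.sin(lam) * np.cos(phi),
                  -np.sin(lam) * np.sin(delta)])
    return n, s

def coulomb_stress_change(stress, strike, dip, rake, mu_eff=0.4):
    n, s = fault_vectors(strike, dip, rake)
    traction = stress @ n
    d_sigma_n = traction @ n        # normal stress change (tension positive)
    d_tau = traction @ s            # shear stress change resolved in the slip direction
    return d_tau + mu_eff * d_sigma_n

# Hypothetical stress change (Pa) at a receiver point, and a right-lateral vertical receiver fault.
stress_change = np.array([[ 2.0e4, -1.0e4, 0.0],
                          [-1.0e4, -3.0e4, 0.0],
                          [ 0.0,     0.0,  1.0e4]])
print(f"dCFF = {coulomb_stress_change(stress_change, strike=320.0, dip=90.0, rake=180.0) / 1e5:.2f} bar")
```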

  7. Earthquake-Induced Landslide Probability Derived From Four Different Methods and Result Comparison

    NASA Astrophysics Data System (ADS)

    Lee, C.

    2005-12-01

    This study analyzed landslides induced by the 1999 Chi-Chi, Taiwan, earthquake at a test site in central Taiwan called Kuohsing, and landslide spatial probability maps for the test site were produced. Landslides induced by the earthquake were extracted from SPOT imagery. Landslide potential factors, including slope, slope aspect, terrain roughness, total curvature, and slope height, were derived from a 40 m resolution DEM. Lithology and structural data were obtained from a 1:50,000-scale geological map. Earthquake strong-motion data were used to calculate Arias intensity and other shaking parameters. State-of-the-art methods, comprising two multivariate approaches (discriminant analysis and logistic regression), an artificial neural network approach, and Newmark's method, were used in the analyses. In the discriminant analysis, the output discriminant scores are used to develop a landslide susceptibility index (LSI). In the logistic regression, the output probability is used directly as the LSI. In the artificial neural network approach, a fuzzy set concept for landslide and non-landslide classes was incorporated into the analysis so that the network outputs a continuous spectrum of landslide and non-landslide membership, and a defuzzifier was used to obtain a non-fuzzy value for the LSI. In Newmark's method, the output value is the Newmark displacement (Dn). All LSIs and Dn values are compared with the landslide inventory, and the landslide ratio, or probability of failure, is then calculated for each LSI or Dn interval. These values were used to develop probability-of-failure functions against LSI or Dn, and landslide probability maps were then drawn using these functions. All four methods give good results in predicting landslides, and the four landslide probability maps show similar probability levels and distribution patterns. Among the four methods, discriminant analysis and logistic regression are both stable and good at predicting landslides. The artificial neural network method is also good, but it showed over-training in the hilly terrain of our test area. The performance of Newmark's method is not as good as that of the other methods.
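
    A minimal sketch of the sliding-block integration behind Newmark's method, not the study's implementation: the part of a ground-acceleration history exceeding the critical acceleration is double-integrated (downslope sliding only) to give the displacement Dn. The acceleration record and critical acceleration are synthetic placeholders rather than Chi-Chi strong-motion data.

```python
# A minimal, one-directional Newmark rigid-block sketch on a synthetic record.
import numpy as np

def newmark_displacement(acc, dt, ac):
    """Cumulative rigid-block displacement (m) for ground acceleration acc (m/s^2)
    sampled at dt (s) and critical acceleration ac (m/s^2)."""
    vel = 0.0
    disp = 0.0
    for a in acc:
        # The block accelerates while ground acceleration exceeds a_c, keeps sliding
        # while its velocity is positive, and decelerates at (a - a_c) otherwise.
        if a > ac or vel > 0.0:
            vel += (a - ac) * dt
            vel = max(vel, 0.0)
            disp += vel * dt
    return disp

dt = 0.01
t = np.arange(0.0, 20.0, dt)
rng = np.random.default_rng(3)
acc = 2.0 * np.exp(-((t - 6.0) / 3.0) ** 2) * np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)

print(f"Dn = {100 * newmark_displacement(acc, dt, ac=0.8):.1f} cm")
```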

  8. Potential earthquake faults offshore Southern California, from the eastern Santa Barbara Channel south to Dana Point

    USGS Publications Warehouse

    Fisher, M.A.; Sorlien, C.C.; Sliter, R.W.

    2009-01-01

    Urban areas in Southern California are at risk from major earthquakes, not only quakes generated by long-recognized onshore faults but also ones that occur along poorly understood offshore faults. We summarize recent research findings concerning these lesser known faults. Research by the U.S. Geological Survey during the past five years indicates that these faults from the eastern Santa Barbara Channel south to Dana Point pose a potential earthquake threat. Historical seismicity in this area indicates that, in general, offshore faults can unleash earthquakes having at least moderate (M 5-6) magnitude. Estimating the earthquake hazard in Southern California is complicated by strain partitioning and by inheritance of structures from early tectonic episodes. The three main episodes are Mesozoic through early Miocene subduction, early Miocene crustal extension coeval with rotation of the Western Transverse Ranges, and Pliocene and younger transpression related to plate-boundary motion along the San Andreas Fault. Additional complication in the analysis of earthquake hazards derives from the partitioning of tectonic strain into strike-slip and thrust components along separate but kinematically related faults. The eastern Santa Barbara Basin is deformed by large active reverse and thrust faults, and this area appears to be underlain regionally by the north-dipping Channel Islands thrust fault. These faults could produce moderate to strong earthquakes and destructive tsunamis. On the Malibu coast, earthquakes along offshore faults could have left-lateral-oblique focal mechanisms, and the Santa Monica Mountains thrust fault, which underlies the oblique faults, could give rise to large (M ≥ 7) earthquakes. Offshore faults near Santa Monica Bay and the San Pedro shelf are likely to produce both strike-slip and thrust earthquakes along northwest-striking faults. In all areas, transverse structures, such as lateral ramps and tear faults, which crosscut the main faults, could segment earthquake rupture zones. © 2009 The Geological Society of America.

  9. Southern California Earthquake Center--Virtual Display of Objects (SCEC-VDO): An Earthquake Research and Education Tool

    NASA Astrophysics Data System (ADS)

    Perry, S.; Maechling, P.; Jordan, T.

    2006-12-01

    Interns in the program Southern California Earthquake Center/Undergraduate Studies in Earthquake Information Technology (SCEC/UseIT, an NSF Research Experience for Undergraduates Site) have designed, engineered, and distributed SCEC-VDO (Virtual Display of Objects), interactive software used by earthquake scientists and educators to integrate and visualize global and regional, georeferenced datasets. SCEC-VDO is written in Java/Java3D with an extensible, scalable architecture. An increasing number of SCEC-VDO datasets are obtained on the fly through web services and connections to remote databases, and user sessions may be saved in XML-encoded files. Currently users may display time-varying sequences of earthquake hypocenters and focal mechanisms, several 3-dimensional fault and rupture models, satellite imagery (optionally draped over digital elevation models), and cultural datasets including political boundaries. The ability to juxtapose and interactively explore these data and their temporal and spatial relationships has been particularly important to SCEC scientists who are evaluating fault and deformation models, or who must quickly evaluate the menace of evolving earthquake sequences. Additionally, SCEC-VDO users can annotate the display and script and render animated movies with adjustable compression levels. SCEC-VDO movies are excellent communication tools and have been featured in scientific presentations, classrooms, press conferences, and television reports.

  10. Chapter B. The Loma Prieta, California, Earthquake of October 17, 1989 - Forecasts

    USGS Publications Warehouse

    Harris, Ruth A.

    1998-01-01

    The magnitude (Mw) 6.9 Loma Prieta earthquake struck the San Francisco Bay region of central California at 5:04 p.m. PDT on October 17, 1989, killing 62 people and generating billions of dollars in property damage. Scientists were not surprised by the occurrence of a destructive earthquake in this region and had, in fact, been attempting to forecast the location of the next large earthquake in the San Francisco Bay region for decades. This paper summarizes more than 20 scientifically based forecasts made before the 1989 Loma Prieta earthquake for a large earthquake that might occur in the Loma Prieta area. The forecasts geographically closest to the actual earthquake primarily consisted of right-lateral strike-slip motion on the San Andreas Fault northwest of San Juan Bautista. Several of the forecasts did encompass the magnitude of the actual earthquake, and at least one approximately encompassed the along-strike rupture length. The 1989 Loma Prieta earthquake differed from most of the forecasted events in two ways: (1) it occurred with considerable dip-slip in addition to strike-slip motion, and (2) it was much deeper than expected.

  11. Moment tensor inversions of M ~ 3 earthquakes in the Geysers geothermal fields, California

    NASA Astrophysics Data System (ADS)

    Guilhem, A.; Hutchings, L.; Dreger, D. S.; Johnson, L. R.

    2014-03-01

    Microearthquakes have come into high public awareness because they can be induced by the development and exploitation of enhanced and natural geothermal fields, hydrofracturing, and CO2 sequestration sites. Characterizing and understanding the faulting process of induced earthquakes, which is generally achieved through moment tensor inversion, could help both in risk prediction and in reservoir development monitoring. However, this is a challenging task because of their low signal-to-noise ratio at the frequencies typically used in earthquake source analyses. Therefore, higher-resolution velocity models and modeling of seismic waves at higher frequencies are required. In this study, we examine both the potential to obtain moment tensor solutions for small earthquakes and the uncertainty of those solutions. We utilize a short-period seismic network located in the Geysers geothermal field in northern California and limit our study to what could be achieved by industry in a typical reservoir environment. We obtain full moment tensor solutions of M ~ 3 earthquakes using waveform modeling and first-motion inversions. We find that these two data sets give complementary but different solutions. Some earthquakes possibly correspond to complex processes in which both shear and tensile failures occur simultaneously or sequentially. This illuminates the presence of fluids at depth and their role in the generation of these small-magnitude earthquakes. Finally, since first motions are routinely obtained for earthquakes of all magnitudes, our approach could be extended to small earthquakes where noise levels and complex Green's functions prohibit using waveforms in moment tensor inversions.
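
    A minimal sketch of the linear-inversion core, not the Geysers processing chain: the waveform data are written as d = G m, where m holds the six independent moment tensor components and G holds Green's-function terms, and m is recovered by least squares with a variance-reduction quality measure. Here G and d are synthetic placeholders; building G from an accurate velocity model is the hard part noted above.

```python
# A minimal sketch of a linear full moment tensor inversion with synthetic inputs.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 600                      # waveform samples stacked over stations/components
G = rng.normal(size=(n_samples, 6))  # placeholder Green's-function matrix

# Build synthetic "observed" data from a known deviatoric source plus noise.
m_true = np.array([0.0, 1.0, -1.0, 0.3, 0.0, 0.1])   # (Mxx, Myy, Mzz, Mxy, Mxz, Myz), scaled
d = G @ m_true + 0.05 * rng.normal(size=n_samples)

# Least-squares solution and a variance-reduction quality measure.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
vr = 1.0 - np.sum((d - G @ m_est) ** 2) / np.sum(d ** 2)
print("recovered moment tensor:", np.round(m_est, 3), f" variance reduction = {vr:.3f}")
```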

  12. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks

    SciTech Connect

    Zhuang Jiancang; Ogata, Yosihiko

    2006-04-15

    The space-time epidemic-type aftershock sequence (ETAS) model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases in which the process is subcritical, critical, and supercritical. One direct use of these probability distributions is to evaluate the probability of an earthquake being a foreshock and the magnitude distributions of foreshock and non-foreshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events differ, are consistent with the results obtained in [Ogata et al., Geophys. J. Int. 127, 17 (1996)] using the conventional single-linked cluster declustering method.

  13. Evidence for dyke intrusion earthquake mechanisms near long valley caldera, California

    USGS Publications Warehouse

    Julian, B.R.

    1983-01-01

    A re-analysis of the magnitude 6 earthquakes that occurred near Long Valley caldera in eastern California on 25 and 27 May 1980 suggests that at least two of them, including the largest, were probably caused by fluid injection along nearly vertical surfaces and not by slip on faults. Several investigators [1,2] have reported difficulty in explaining both the long-period surface-wave amplitudes and phases and the locally recorded short-period body-wave first motions from these events using conventional double-couple (shear fault) source models. They attributed this difficulty to: (1) complex sources, not representable by single-fault models; (2) artefacts of the analysis methods used; or (3) effects of wave propagation through hypothetical structures beneath the caldera. We show here that the data agree well with the predictions for a compensated linear-vector dipole (CLVD) equivalent-force system [3] with its principal extensional axis horizontal and trending N 55°-65° E. Such a mechanism is what would be expected for fluid injection into dykes striking N 25°-35° W, which is the approximate strike of numerous normal faults in the area. © 1983 Nature Publishing Group.

  14. Southern California Earthquake Center (SCEC) Communication, Education and Outreach Program

    NASA Astrophysics Data System (ADS)

    Benthien, M. L.

    2003-12-01

    The SCEC Communication, Education, and Outreach Program (CEO) offers student research experiences, web-based education tools, classroom curricula, museum displays, public information brochures, online newsletters, and technical workshops and publications. This year, much progress has been made on the development of the Electronic Encyclopedia of Earthquakes (E3), a collaborative project with CUREE and IRIS. The E3 development system is now fully operational, and 165 entries are in the pipeline. When complete, information and resources for over 500 Earth science and engineering topics will be included, with connections to curricular materials useful for teaching Earth science, engineering, physics, and mathematics. To coordinate activities for the 10-year anniversary of the Northridge Earthquake in 2004 (and beyond), the "Earthquake Country Alliance" is being organized by SCEC CEO to present common messages, to share or promote existing resources, and to develop new activities and products jointly (such as a new version of Putting Down Roots in Earthquake Country). The group includes earthquake science and engineering researchers and practicing professionals, preparedness experts, response and recovery officials, news media representatives, and education specialists. A web portal, http://www.earthquakecountry.info, is being established with links to web pages and descriptions of other resources and services that the Alliance members provide. Another ongoing strength of SCEC is the Summer Intern program, which now has a year-round counterpart with students working on IT projects at USC. Since Fall 2002, over 32 students have participated in the program, including 7 students working with scientists throughout SCEC, 17 students involved in the USC "Earthquake Information Technology" intern program, and 7 students involved in CEO projects. These and other activities of the SCEC CEO program will be presented, along with lessons learned during program design and implementation.

  15. Liquefaction-induced lateral spreading in Oceano, California, during the 2003 San Simeon Earthquake

    USGS Publications Warehouse

    Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.; Di Alessandro, Carola; Boatwright, John; Tinsley, John C., III; Sell, Russell W.; Rosenberg, Lewis I.

    2004-01-01

    The December 22, 2003, San Simeon, California, (M6.5) earthquake caused damage to houses, road surfaces, and underground utilities in Oceano, California. The community of Oceano is approximately 50 miles (80 km) from the earthquake epicenter. Damage at this distance from a M6.5 earthquake is unusual. To understand the causes of this damage, the U.S. Geological Survey conducted extensive subsurface exploration and monitoring of aftershocks in the months after the earthquake. The investigation included 37 seismic cone penetration tests, 5 soil borings, and aftershock monitoring from January 28 to March 7, 2004. The USGS investigation identified two earthquake hazards in Oceano that explain the San Simeon earthquake damage: site amplification and liquefaction. Site amplification is a phenomenon observed in many earthquakes where the strength of the shaking increases abnormally in areas where the seismic-wave velocity of shallow geologic layers is low. As a result, earthquake shaking is felt more strongly than in surrounding areas without similar geologic conditions. Site amplification in Oceano is indicated by the physical properties of the geologic layers beneath Oceano and was confirmed by monitoring aftershocks. Liquefaction, which is also commonly observed during earthquakes, is a phenomenon where saturated sands lose their strength during an earthquake and become fluid-like and mobile. As a result, the ground may undergo large permanent displacements that can damage underground utilities and well-built surface structures. The type of displacement of major concern associated with liquefaction is lateral spreading because it involves displacement of large blocks of ground down gentle slopes or towards stream channels. The USGS investigation indicates that the shallow geologic units beneath Oceano are very susceptible to liquefaction. They include young sand dunes and clean sandy artificial fill that was used to bury and convert marshes into developable lots. Most of the 2003 damage was caused by lateral spreading in two separate areas, one near Norswing Drive and the other near Juanita Avenue. The areas coincided with areas with the highest liquefaction potential found in Oceano. Areas with site amplification conditions similar to those in Oceano are particularly vulnerable to earthquakes. Site amplification may cause shaking from distant earthquakes, which normally would not cause damage, to increase locally to damaging levels. The vulnerability in Oceano is compounded by the widespread distribution of highly liquefiable soils that will reliquefy when ground shaking is amplified as it was during the San Simeon earthquake. The experience in Oceano can be expected to repeat because the region has many active faults capable of generating large earthquakes. In addition, liquefaction and lateral spreading will be more extensive for moderate-size earthquakes that are closer to Oceano than was the 2003 San Simeon earthquake. Site amplification and liquefaction can be mitigated. Shaking is typically mitigated in California by adopting and enforcing up-to-date building codes. Although not a guarantee of safety, application of these codes ensures that the best practice is used in construction. Building codes, however, do not always require the upgrading of older structures to new code requirements. Consequently, many older structures may not be as resistant to earthquake shaking as new ones. For older structures, retrofitting is required to bring them up to code.
Seismic provisions in codes also generally do not apply to nonstructural elements such as drywall, heating systems, and shelving. Frequently, nonstructural damage dominates the earthquake loss. Mitigation of potential liquefaction in Oceano presently is voluntary for existing buildings, but required by San Luis Obispo County for new construction. Multiple mitigation procedures are available to individual property owners. These procedures typically involve either

  16. Distribution of intensity for the Westmorland, California, earthquake of April 26, 1981

    USGS Publications Warehouse

    Barnhard, L.M.; Thenhaus, P.C.; Algermissen, Sylvester Theodore

    1982-01-01

    The maximum Modified Mercalli intensity of the April 26, 1981, earthquake located 5 km northwest of Westmorland, California, is VII. Twelve buildings in Westmorland were severely damaged, with an additional 30 sustaining minor damage. Two brick parapets fell in Calipatria, 14 km northeast of Westmorland and 10 km from the earthquake epicenter. Significant damage in rural areas was restricted to unreinforced, concrete-lined irrigation canals. Liquefaction effects and ground slumping were widespread in rural areas and were the primary causes of road cracking. Preliminary local government estimates of property loss range from one to three million dollars (Imperial Valley Press, 1981). The earthquake was felt over an area of approximately 160,000 km2, about the same felt area as those of the October 15, 1979 (Reagor and others, 1980), and May 18, 1940 (Ulrich, 1941), Imperial Valley earthquakes.

  17. Hydrothermal response to a volcano-tectonic earthquake swarm, Lassen, California

    USGS Publications Warehouse

    Ingebritsen, Steven E.; Shelly, David R.; Hsieh, Paul A.; Clor, Laura; P.H. Seward; Evans, William C.

    2015-01-01

    The increasing capability of seismic, geodetic, and hydrothermal observation networks allows recognition of volcanic unrest that could previously have gone undetected, creating an imperative to diagnose and interpret unrest episodes. A November 2014 earthquake swarm near Lassen Volcanic National Park, California, which included the largest earthquake in the area in more than 60 years, was accompanied by a rarely observed outburst of hydrothermal fluids. Although the earthquake swarm likely reflects upward migration of endogenous H2O-CO2 fluids in the source region, there is no evidence that such fluids emerged at the surface. Instead, shaking from the modest sized (moment magnitude 3.85) but proximal earthquake caused near-vent permeability increases that triggered increased outflow of hydrothermal fluids already present and equilibrated in a local hydrothermal aquifer. Long-term, multiparametric monitoring at Lassen and other well-instrumented volcanoes enhances interpretation of unrest and can provide a basis for detailed physical modeling.

  18. Hydrothermal response to a volcano-tectonic earthquake swarm, Lassen, California

    NASA Astrophysics Data System (ADS)

    Ingebritsen, S. E.; Shelly, D. R.; Hsieh, P. A.; Clor, L. E.; Seward, P. H.; Evans, W. C.

    2015-11-01

    The increasing capability of seismic, geodetic, and hydrothermal observation networks allows recognition of volcanic unrest that could previously have gone undetected, creating an imperative to diagnose and interpret unrest episodes. A November 2014 earthquake swarm near Lassen Volcanic National Park, California, which included the largest earthquake in the area in more than 60 years, was accompanied by a rarely observed outburst of hydrothermal fluids. Although the earthquake swarm likely reflects upward migration of endogenous H2O-CO2 fluids in the source region, there is no evidence that such fluids emerged at the surface. Instead, shaking from the modest sized (moment magnitude 3.85) but proximal earthquake caused near-vent permeability increases that triggered increased outflow of hydrothermal fluids already present and equilibrated in a local hydrothermal aquifer. Long-term, multiparametric monitoring at Lassen and other well-instrumented volcanoes enhances interpretation of unrest and can provide a basis for detailed physical modeling.

  19. Prediction of central California earthquakes from soil-gas helium fluctuations

    USGS Publications Warehouse

    Reimer, G.M.

    1985-01-01

    The observations of short-term decreases in helium soil-gas concentrations along the San Andreas Fault in central California have been correlated with subsequent earthquake activity. The area of study is elliptical in shape with radii of approximately 160 × 80 km, centered near San Benito, and with the major axis parallel to the Fault. For 83 percent of the M>4 earthquakes in this area, a helium decrease preceded seismic activity by 1.5 to 6.5 weeks. There were several earthquakes without a decrease and several decreases without a corresponding earthquake. Owing to the complex and unresolved interaction of many geophysical and geochemical parameters, no suitable model has yet been developed to explain the observations. © 1985 Birkhäuser Verlag.

  20. Injuries and Traumatic Psychological Exposures Associated with the South Napa Earthquake - California, 2014.

    PubMed

    Attfield, Kathleen R; Dobson, Christine B; Henn, Jennifer B; Acosta, Meileen; Smorodinsky, Svetlana; Wilken, Jason A; Barreau, Tracy; Schreiber, Merritt; Windham, Gayle C; Materna, Barbara L; Roisman, Rachel

    2015-01-01

    On August 24, 2014, at 3:20 a.m., a magnitude 6.0 earthquake struck California, with its epicenter in Napa County (1). The earthquake was the largest to affect the San Francisco Bay area in 25 years and caused significant damage in Napa and Solano counties, including widespread power outages, five residential fires, and damage to roadways, waterlines, and 1,600 buildings (2). Two deaths resulted (2). On August 25, Napa County Public Health asked the California Department of Public Health (CDPH) for assistance in assessing postdisaster health effects, including earthquake-related injuries and effects on mental health. On September 23, Solano County Public Health requested similar assistance. A household-level Community Assessment for Public Health Emergency Response (CASPER) was conducted for these counties in two cities (Napa, 3 weeks after the earthquake, and Vallejo, 6 weeks after the earthquake). Among households reporting injuries, a substantial proportion (48% in Napa and 37% in western Vallejo) reported that the injuries occurred during the cleanup period, suggesting that increased messaging on safety precautions after a disaster might be needed. One fifth of respondents overall (27% in Napa and 9% in western Vallejo) reported one or more traumatic psychological exposures in their households. These findings were used by Napa County Mental Health to guide immediate-term mental health resource allocations and to conduct public training sessions and education campaigns to support persons with mental health risks following the earthquake. In addition, to promote community resilience and future earthquake preparedness, Napa County Public Health subsequently conducted community events on the earthquake anniversary and provided outreach workers with psychological first aid training. PMID:26355257

  1. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  2. Guide and Checklist for Nonstructural Earthquake Hazards in California Schools.

    ERIC Educational Resources Information Center

    2003

    The recommendations included in this document are intended to reduce seismic hazards associated with the nonstructural components of school buildings, including mechanical systems, ceiling systems, partitions, light fixtures, furnishings, and other building contents. It identifies potential earthquake hazards and provides recommendations for…

  3. Identification and Reduction of Nonstructural Earthquake Hazards in California Schools.

    ERIC Educational Resources Information Center

    Greene, Marjorie; And Others

    It is necessary to identify nonstructural hazards at the school site to reduce the possibility of injury in the event of an earthquake. Nonstructural hazards can occur in every part of a building and in all of its contents, with the exception of the structure itself. In other words, nonstructural elements are everything but the columns, beams, floors, load-bearing…

  4. History of Modern Earthquake Hazard Mapping and Assessment in California Using a Deterministic or Scenario Approach

    NASA Astrophysics Data System (ADS)

    Mualchin, Lalliana

    2011-03-01

    Modern earthquake ground motion hazard mapping in California began following the 1971 San Fernando earthquake in the Los Angeles metropolitan area of southern California. Earthquake hazard assessment followed a traditional approach, later called Deterministic Seismic Hazard Analysis (DSHA) in order to distinguish it from the newer Probabilistic Seismic Hazard Analysis (PSHA). In DSHA, seismic hazard is assessed for the Maximum Credible Earthquake (MCE) magnitude on each of the known seismogenic faults within and near the state. The likely occurrence of the MCE is assumed qualitatively, based on late Quaternary and younger faults presumed to be seismogenic, without specifying when or within what time interval the MCE may occur. The MCE is the largest or upper-bound potential earthquake in moment magnitude, and it supersedes and automatically considers all other possible earthquakes on that fault. That moment magnitude is used for estimating ground motions by applying it to empirical attenuation relationships, and for calculating ground motions as in neo-DSHA (Zuccolo et al., 2008). The first deterministic California earthquake hazard map was published in 1974 by the California Division of Mines and Geology (CDMG), which has been called the California Geological Survey (CGS) since 2002, using the best available fault information and ground motion attenuation relationships at that time. The California Department of Transportation (Caltrans) later assumed responsibility for printing the refined and updated peak acceleration contour maps, which were heavily utilized by geologists, seismologists, and engineers for many years. Some engineers involved in the siting process of large, important projects, for example dams and nuclear power plants, continued to challenge the maps. The second edition map was completed in 1985, incorporating more faults, improving the MCE estimation method, and using new ground motion attenuation relationships from the latest published results at that time. CDMG eventually published the second edition map in 1992, following the Governor's Board of Inquiry on the 1989 Loma Prieta earthquake and at the demand of Caltrans. The third edition map was published by Caltrans in 1996, utilizing GIS technology to manage data that include a simplified three-dimensional geometry of faults and to facilitate efficient corrections and revisions of the data and the map. The spatial relationship of fault hazards with highways, bridges, or any other attribute can now be efficiently managed and analyzed in GIS at Caltrans. There has been great confidence in using DSHA in bridge engineering and other applications in California, and it can be confidently applied in any other earthquake-prone region. Earthquake hazards defined by DSHA are: (1) transparent and stable, with robust MCE moment magnitudes; (2) flexible in their application to design considerations; (3) able to easily incorporate advances in ground motion simulations; and (4) economical. DSHA and neo-DSHA have the same approach and applicability. The accuracy of DSHA has proven to be quite reasonable for practical applications in engineering design, where it is always applied with professional judgment. In the final analysis, DSHA is a reality check for public safety and for PSHA results. Although PSHA has been acclaimed as a better approach for seismic hazard assessment, it is DSHA, not PSHA, that has actually been used in seismic hazard assessment for building and bridge engineering, particularly in California.
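
    The DSHA recipe described above reduces to a simple maximization: take the MCE magnitude for each known seismogenic fault, convert it to ground motion at the site of interest through an empirical attenuation relationship, and keep the largest value. The Python sketch below illustrates only that logic; the fault list, distances, and attenuation coefficients are invented placeholders, not the relations used by CDMG, CGS, or Caltrans.

      import math

      # Hypothetical fault inventory: (name, MCE moment magnitude, closest distance to site in km).
      faults = [
          ("Fault A", 7.8, 35.0),
          ("Fault B", 6.9, 12.0),
          ("Fault C", 7.2, 60.0),
      ]

      def median_pga_g(magnitude, distance_km, a=-1.0, b=0.25, c=1.0, h=6.0):
          # Generic attenuation form ln(PGA) = a + b*M - c*ln(sqrt(R^2 + h^2)).
          # Coefficients are illustrative placeholders, not a published relation.
          r = math.sqrt(distance_km ** 2 + h ** 2)
          return math.exp(a + b * magnitude - c * math.log(r))

      # Deterministic hazard at the site: the largest MCE ground motion over all faults.
      pga_by_fault = {name: median_pga_g(m, d) for name, m, d in faults}
      controlling = max(pga_by_fault, key=pga_by_fault.get)
      for name, pga in pga_by_fault.items():
          print(f"{name}: median PGA ~ {pga:.3f} g")
      print("Controlling fault:", controlling)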

  5. Moment accumulation rate on faults in California inferred from viscoelastic earthquake cycle models (Invited)

    NASA Astrophysics Data System (ADS)

    Johnson, K. M.

    2009-12-01

    Calculations of moment accumulation rates on active faults require knowledge of long-term fault slip rates and the area of the fault that is locked interseismically. These parameters are routinely estimated from geodetic data using elastic block models with back slip on dislocations in an elastic half-space. Yet the elastic models are inconsistent with studies that infer postseismic viscous flow in the lower crust and mantle occurring for decades following large earthquakes. Viscous flow in the lower crust and mantle generates rapid, localized deformation early in the earthquake cycle and slower, more diffuse deformation later in the cycle. Elastic models which neglect this time-dependent flow process may lead to biased estimates of fault slip rates and locking distribution. To address this issue we have developed a three-dimensional earthquake cycle model consisting of fault-bounded blocks in an elastic crust overlying a viscoelastic lower crust and uppermost mantle. It is a kinematic model in which the long-term motions of fault-bounded blocks are imposed. Interseismic locking of faults and the associated deformation are modeled with steady back-slip on faults and imposed periodic earthquakes. Creep on unlocked portions of the faults occurs at constant stress, and therefore the instantaneous creep rate is proportional to the instantaneous stressing rate on the fault. We compare geologic slip rate estimates in southern California with model estimates using GPS data and show that elastic block models underpredict slip rates on several faults that are late in the earthquake cycle and overpredict slip rates on faults that are early in the earthquake cycle. The viscoelastic cycle model, constrained by earthquake timing from the geologic record, predicts fault slip rates that are entirely consistent with geologic estimates for all major faults in southern California. For northern California, fault slip rate estimates using geodetic data appear not to be strongly dependent on model assumptions and are generally consistent with geologic estimates; therefore we focus on estimates of the distribution of interseismic locking of faults. We constrain the locking distribution using nearly a century of triangulation measurements of strain following the M7.8 1906 San Francisco earthquake, contemporary GPS velocities, geologic slip rate and earthquake timing data, and the viscoelastic earthquake cycle model with spatially variable distributions of locking and stress-driven creep. We find considerable lateral variations in locking depths in the San Francisco Bay area. Compared with our models of spatially variable locking distribution, models that assume a typical 15 km uniform locking depth overpredict the moment accumulation rate by a factor of 2-3 on the Peninsular San Andreas, Calaveras, Rodgers Creek, and Green Valley faults.
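
    The moment accumulation rate discussed above is, in its simplest form, the product of rigidity, locked fault area, and long-term slip rate, so thinning the effective locked zone lowers the accumulation rate in direct proportion. The sketch below only illustrates that scaling; the fault length, slip rate, rigidity, and locking depths are assumed round numbers, not values estimated in the study.

      MU = 3.0e10              # shear modulus, Pa (typical crustal value)
      LENGTH_KM = 60.0         # assumed fault segment length
      SLIP_RATE_MM_YR = 10.0   # assumed long-term slip rate

      def moment_rate(locking_depth_km):
          # Moment accumulation rate for a vertical strike-slip fault, in N*m/yr:
          # rigidity * locked area * slip rate.
          area_m2 = LENGTH_KM * 1e3 * locking_depth_km * 1e3
          return MU * area_m2 * SLIP_RATE_MM_YR * 1e-3

      deep = moment_rate(15.0)     # conventional uniform 15-km locking depth
      shallow = moment_rate(6.0)   # an illustrative shallower locked zone
      print(f"15-km locking depth: {deep:.2e} N*m/yr")
      print(f" 6-km locking depth: {shallow:.2e} N*m/yr (factor {deep / shallow:.1f} lower)")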

  6. Superficial simplicity of the 2010 El Mayor-Cucapah earthquake of Baja California in Mexico

    USGS Publications Warehouse

    Wei, S.; Fielding, E.; Leprince, S.; Sladen, A.; Avouac, J.-P.; Helmberger, D.; Hauksson, E.; Chu, R.; Simons, M.; Hudnut, K.; Herring, T.; Briggs, R.

    2011-01-01

    The geometry of faults is usually thought to be more complicated at the surface than at depth and to control the initiation, propagation and arrest of seismic ruptures [1-6]. The fault system that runs from southern California into Mexico is a simple strike-slip boundary: the west side of California and Mexico moves northwards with respect to the east. However, the Mw 7.2 2010 El Mayor-Cucapah earthquake on this fault system produced a pattern of seismic waves that indicates a far more complex source than slip on a planar strike-slip fault. Here we use geodetic, remote-sensing and seismological data to reconstruct the fault geometry and history of slip during this earthquake. We find that the earthquake produced a straight 120-km-long fault trace that cut through the Cucapah mountain range and across the Colorado River delta. However, at depth, the fault is made up of two different segments connected by a small extensional fault. Both segments strike N130° E, but dip in opposite directions. The earthquake was initiated on the connecting extensional fault and 15 s later ruptured the two main segments with dominantly strike-slip motion. We show that complexities in the fault geometry at depth explain well the complex pattern of radiated seismic waves. We conclude that the location and detailed characteristics of the earthquake could not have been anticipated on the basis of observations of surface geology alone. © 2011 Macmillan Publishers Limited. All rights reserved.

  7. Felt reports and intensity assignments for aftershocks and triggered events of the great 1906 California earthquake

    USGS Publications Warehouse

    Meltzner, Aron J.; Wald, David J.

    2002-01-01

    The San Andreas fault is the longest fault in California and one of the longest strike-slip faults in the world, yet little is known about the aftershocks following the most recent great event on the San Andreas, the M 7.8 San Francisco earthquake of 18 April 1906. This open-file report is a compilation of first-hand accounts (felt reports) describing aftershocks and triggered events of the 1906 earthquake for the first twenty months of the aftershock sequence (through December 1907). The report includes a chronological catalog. For the larger events, Modified Mercalli intensities (MMIs) have been assigned based on the descriptions judged to be the most reliable.

  8. One hundred years of earthquake recording at the University of California

    USGS Publications Warehouse

    Bolt, B. A.

    1987-01-01

    The best seismographs then available arrived from England in 1887 and were installed at Lick Observatory on Mt. Hamilton and at the Students' Astronomical Observatory on the Berkeley campus. The first California earthquake recorded by the Lick instrument was on April 24, 1887. These seismographic stations have functioned continuously from their founding to the present day, with improvements in instruments from time to time as technology advanced. Now they are part of a seismographic network of 16 stations that records both local and distant earthquakes with great completeness.

  9. The 1987 Whittier Narrows, California, earthquake: A Metropolitan shock

    NASA Astrophysics Data System (ADS)

    Hauksson, Egill; Stein, Ross S.

    1989-07-01

    Just 3 hours after the Whittier Narrows earthquake struck, it became clear that a heretofore unseen geological structure was seismically active beneath metropolitan Los Angeles. Contrary to initial expectations of strike-slip or oblique-slip motion on the Whittier fault, whose north end abuts the aftershock zone, the focal mechanism of the mainshock showed pure thrust faulting on a deep, gently inclined surface [Hauksson et al., 1988]. This collection of nine research reports spans the spectrum of seismological, geodetic, and geological investigations carried out as a result of the Whittier Narrows earthquake. Although unseen, the structure was not unforeseen. Namson [1987] had published a retrodeformable geologic cross section (meaning that the sedimentary strata could be restored to their original depositional position) 100 km to the west of the future earthquake epicenter in which blind, or subsurface, thrust faults were interpreted to be active beneath the folded southern Transverse Ranges. Working 25 km to the west, Hauksson [1987] had also found a surprising number of microearthquakes with thrust focal mechanisms south of the Santa Monica mountains, another clue to a subsurface system of thrust faults. Finally, Davis [1987] had presented a preliminary cross section only 18 km to the west of Whittier Narrows that identified as "fault B" the thrust that would rupture later that year. Not only were the earthquake focus and its orientation compatible with the 10-15 km depth and north-dipping orientation of Davis' proposed thrust, but fault B appears to continue beneath the northern flank of the Los Angeles basin, skirting within 5 km of downtown Los Angeles, an area of dense commercial high-rise building development. These results are refined and extended by Davis et al. [this issue].

  10. The October 17, 1989, Loma Prieta, California, earthquake: selected photographs

    USGS Publications Warehouse

    Nakata, John K.; Meyer, C.E.; Wilshire, H.G.; Tinsley, J. C., III; Updegrove, W.S.; Peterson, D.M.; Ellen, S.D.; Haugerud, R.A.; McLaughlin, R.J.; Fisher, G.R.; Diggles, M.F.

    1999-01-01

    This CD-ROM contains 103 digitized color 35-mm images from Open-File Report 90-547 (Nakata and others, 1990). Our photographic coverage reflects the time and resources available immediately after the event and is not intended to portray the full extent of earthquake damage. This CD-ROM provides images for use by the interested public, multimedia producers, desktop publishers, and the high-end printing industry.

  11. A record of large earthquakes during the past two millennia on the southern Green Valley Fault, California

    USGS Publications Warehouse

    Lienkaemper, James J.; Baldwin, John N.; Turner, Robert; Sickler, Robert R.; Brown, Johnathan

    2013-01-01

    We document evidence for surface-rupturing earthquakes (events) at two trench sites on the southern Green Valley fault, California (SGVF). The 75-80-km-long dextral SGVF creeps ~1-4 mm/yr. We identify stratigraphic horizons disrupted by upward-flowering shears and in-filled fissures unlikely to have formed from creep alone. The Mason Rd site exhibits four events from ~1013 CE to the present. The Lopes Ranch site (LR, 12 km to the south) exhibits three events from 18 BCE to the present, including the most recent event (MRE), 1610 ±52 yr CE (1σ), and a two-event interval (18 BCE-238 CE) isolated by a millennium of low deposition. Using Oxcal to model the timing of the 4-event earthquake sequence from radiocarbon data and the LR MRE yields a mean recurrence interval (RI or μ) of 199 ±82 yr (1σ) and ±35 yr (standard error of the mean), the first such estimate for this fault based on geologic data. The time since the most recent earthquake (the open window since the MRE) is 402 ±52 yr, well past μ~200 yr. The shape of the probability density function (pdf) of the average RI from Oxcal resembles a Brownian Passage Time (BPT) pdf (i.e., rather than a normal pdf) that permits rarer, longer ruptures potentially involving the Berryessa and Hunting Creek sections of the northernmost GVF. The model coefficient of variation (cv, σ/μ) is 0.41, but a larger value (cv ~0.6) fits better when using BPT. A BPT pdf with μ of 250 yr and cv of 0.6 yields 30-yr rupture probabilities of 20-25%, versus a Poisson probability of 11-17%.
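
    For comparison with the probabilities quoted above, the 30-yr Poisson probability follows from the mean recurrence interval alone, while the BPT (inverse-Gaussian) probability is conditioned on the 402-yr open interval as well. The sketch below reproduces that arithmetic with the record's approximate values (μ = 250 yr, cv = 0.6, elapsed time 402 yr); it is a simplified calculation and will land near, but not necessarily exactly on, the 11-17% and 20-25% ranges reported.

      import numpy as np
      from scipy.stats import invgauss

      mean_ri = 250.0   # mean recurrence interval, yr (record discusses ~199-250 yr)
      alpha = 0.6       # aperiodicity (coefficient of variation) used in the record
      elapsed = 402.0   # open interval since the 1610 CE event, yr
      window = 30.0     # forecast window, yr

      # Poisson (time-independent) probability depends only on the mean rate.
      p_poisson = 1.0 - np.exp(-window / mean_ri)

      # BPT is an inverse-Gaussian distribution: mean = mean_ri, cv = alpha.
      # scipy's invgauss(mu, scale) has mean mu*scale and cv sqrt(mu).
      bpt = invgauss(mu=alpha ** 2, scale=mean_ri / alpha ** 2)
      p_bpt = (bpt.cdf(elapsed + window) - bpt.cdf(elapsed)) / bpt.sf(elapsed)

      print(f"30-yr Poisson probability: {p_poisson:.2f}")
      print(f"30-yr BPT conditional probability: {p_bpt:.2f}")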

  12. Stress transferred by the 1995 Mw = 6.9 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities

    USGS Publications Warehouse

    Toda, S.; Stein, R.S.; Reasenberg, P.A.; Dieterich, J.H.; Yoshida, A.

    1998-01-01

    The Kobe earthquake struck at the edge of the densely populated Osaka-Kyoto corridor in southwest Japan. We investigate how the earthquake transferred stress to nearby faults, altering their proximity to failure and thus changing earthquake probabilities. We find that relative to the pre-Kobe seismicity, Kobe aftershocks were concentrated in regions of calculated Coulomb stress increase and less common in regions of stress decrease. We quantify this relationship by forming the spatial correlation between the seismicity rate change and the Coulomb stress change. The correlation is significant for stress changes greater than 0.2-1.0 bars (0.02-0.1 MPa), and the nonlinear dependence of seismicity rate change on stress change is compatible with a state- and rate-dependent formulation for earthquake occurrence. We extend this analysis to future mainshocks by resolving the stress changes on major faults within 100 km of Kobe and calculating the change in probability caused by these stress changes. Transient effects of the stress changes are incorporated by the state-dependent constitutive relation, which amplifies the permanent stress changes during the aftershock period. Earthquake probability framed in this manner is highly time-dependent, much more so than is assumed in current practice. Because the probabilities depend on several poorly known parameters of the major faults, we estimate uncertainties of the probabilities by Monte Carlo simulation. This enables us to include uncertainties on the elapsed time since the last earthquake, the repeat time and its variability, and the period of aftershock decay. We estimate that a calculated 3-bar (0.3-MPa) stress increase on the eastern section of the Arima-Takatsuki Tectonic Line (ATTL) near Kyoto causes a fivefold increase in the 30-year probability of a subsequent large earthquake near Kyoto; a 2-bar (0.2-MPa) stress decrease on the western section of the ATTL results in a reduction in probability by a factor of 140 to 2000. The probability of a Mw = 6.9 earthquake within 50 km of Osaka during 1997-2007 is estimated to have risen from 5-6% before the Kobe earthquake to 7-11% afterward; during 1997-2027, it is estimated to have risen from 14-16% before Kobe to 16-22%.
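
    One common way to fold a static stress change into a renewal-model probability, simpler than the rate- and state-dependent treatment used in the study above, is the permanent "clock advance" Δt = Δτ / (secular stressing rate). The sketch below applies that approximation with a lognormal recurrence model; the recurrence interval, aperiodicity, elapsed time, and stressing rate are assumed illustrative values, and the transient aftershock-period amplification described in the record is deliberately omitted.

      import numpy as np
      from scipy.stats import lognorm

      # Assumed renewal model for a fault near the stressed region (illustrative values).
      mean_ri = 1000.0       # mean recurrence interval, yr
      cv = 0.5               # coefficient of variation
      elapsed = 400.0        # time since the last earthquake, yr
      window = 30.0          # forecast window, yr

      stress_change_bar = 3.0    # calculated static stress increase, bar
      stressing_rate = 0.05      # assumed secular stressing rate, bar/yr
      clock_advance = stress_change_bar / stressing_rate   # yr

      # Lognormal recurrence distribution with the given mean and cv.
      sigma = np.sqrt(np.log(1.0 + cv ** 2))
      mu_ln = np.log(mean_ri) - 0.5 * sigma ** 2
      ri = lognorm(s=sigma, scale=np.exp(mu_ln))

      def cond_prob(t_elapsed):
          # Conditional probability of rupture in the next `window` years.
          return (ri.cdf(t_elapsed + window) - ri.cdf(t_elapsed)) / ri.sf(t_elapsed)

      p_before = cond_prob(elapsed)
      p_after = cond_prob(elapsed + clock_advance)   # permanent clock advance only
      print(f"Clock advance: {clock_advance:.0f} yr")
      print(f"30-yr probability before: {p_before:.3f}, after: {p_after:.3f}")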

  13. Chapter B. The Loma Prieta, California, Earthquake of October 17, 1989 - Highway Systems

    USGS Publications Warehouse

    Yashinsky, Mark

    1998-01-01

    This paper summarizes the impact of the Loma Prieta earthquake on highway systems. City streets, urban freeways, county roads, state routes, and the national highway system were all affected. There was damage to bridges, roads, tunnels, and other highway structures. The most serious damage occurred in the cities of San Francisco and Oakland, 60 miles from the fault rupture. The cost to repair and replace highways damaged by this earthquake was $2 billion. About half of this cost was to replace the Cypress Viaduct, a long, elevated double-deck expressway that had a devastating collapse which resulted in 42 deaths and 108 injuries. The earthquake also resulted in some positive changes for highway systems. Research on bridges and earthquakes began to be funded at a much higher level. Retrofit programs were started to upgrade the seismic performance of the nation's highways. The Loma Prieta earthquake changed earthquake policy and engineering practice for highway departments not only in California, but all over the world.

  14. The north-northwest aftershock pattern of the June 28, 1992 Landers earthquake and the probability of large earthquakes in Indian Wells Valley

    SciTech Connect

    Roquemore, G.R. (Dept. of Geosciences); Simila, G.A. (Dept. of Geological Sciences)

    1993-04-01

    Immediately following the June 28, 1992 Landers earthquake, a strong north-northwest pattern of aftershocks and triggered earthquakes developed. The most intense pattern developed between the north end of the primary rupture on the Emerson fault and southern Owens Valley. The trend of seismicity cuts through the east-west trending Garlock fault at a high angle. The Garlock fault has no apparent effect on the trend or pattern. Within the aftershock zone, south of the Garlock fault, the Calico and Blackwater faults provide the most likely pathway for the Mojave shear zone into Indian Wells and Owens Valleys. In Indian Wells Valley the seismically active Little Lake fault aligns well with the Blackwater fault to the south and the southern Owens Valley fault zone to the north. Several recent research papers suggest that optimum Coulomb failure stress changes caused by the Landers earthquake have enhanced the probability of earthquakes within the north-northwest trending aftershock zone. This increase has greater significance when the presumed optimum Coulomb failure stress changes caused by the 1872 Owens Valley earthquake and their effects on Indian Wells Valley are considered. Indian Wells Valley and the Coso Volcanic Field may have received two significant stress increases from earthquakes of magnitude 7.5 or greater in the last 120 years. If these two earthquakes increased the shear stress on faults in the Indian Wells/Coso areas, the most likely site for the next large earthquake within the Mojave shear zone may be there. The rate of seismicity within Indian Wells Valley has increased since 1980, including a magnitude 5.0 earthquake in 1982.

  15. The 2014 Mw 6.0 Napa Earthquake, California: Observations from Real-time GPS-enhanced Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Johanson, I. A.; Grapenthin, R.; Allen, R. M.

    2014-12-01

    Recently, progress has been made in demonstrating the feasibility and benefits of including real-time GPS (rtGPS) in earthquake early warning and rapid response systems. While most concepts have yet to be integrated into operational environments, the Berkeley Seismological Laboratory is currently running an rtGPS-based finite fault inversion scheme in true real time, which is triggered by the seismic-based ShakeAlert system and then sends updated earthquake alerts to a test receiver. The Geodetic Alarm System (G-larmS) was online and responded to the 2014 Mw 6.0 South Napa earthquake in California. We review G-larmS' performance during this event and for 13 aftershocks, and we present rtGPS observations and real-time modeling results for the main shock. The first distributed slip model and a magnitude estimate of Mw 5.5 were available 24 s after the event origin time, which could be reduced to 14 s after a bug fix (~8 s S-wave travel time, ~6 s data latency). The system continued to re-estimate the magnitude once every second: it increased to Mw 5.9 within 3 s of the first alert and stabilized at Mw 5.8 after 15 s. G-larmS' solutions for the subsequent small-magnitude aftershocks demonstrate that Mw ~6.0 is the current limit for alert updates to contribute back to the seismic-based early warning system.
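
    The magnitudes quoted above come from the geodetic moment of the real-time slip model, and the final conversion step is just the standard moment-magnitude relation. The sketch below shows that step for a single uniform-slip patch; the rigidity, patch dimensions, and average slip are illustrative values, not G-larmS output.

      import math

      MU = 3.0e10          # rigidity, Pa (assumed)
      length_m = 15e3      # illustrative rupture length
      width_m = 10e3       # illustrative rupture width
      avg_slip_m = 0.4     # illustrative average slip from a finite-fault inversion

      m0 = MU * length_m * width_m * avg_slip_m      # seismic moment, N*m
      mw = (2.0 / 3.0) * (math.log10(m0) - 9.05)     # Hanks-Kanamori moment magnitude
      print(f"M0 = {m0:.2e} N*m -> Mw = {mw:.1f}")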

  16. The Southern California Earthquake Center/Undergraduate Studies in Earthquake Information Technology (SCEC/UseIT) Internship Program

    NASA Astrophysics Data System (ADS)

    Perry, S.; Jordan, T.

    2006-12-01

    Our undergraduate research program, SCEC/UseIT, an NSF Research Experience for Undergraduates site, provides software for earthquake researchers and educators, movies for outreach, and ways to strengthen the technical career pipeline. SCEC/UseIT motivates diverse undergraduates towards science and engineering careers through team-based research in the exciting field of earthquake information technology. UseIT provides the cross-training in computer science/information technology (CS/IT) and geoscience needed to make fundamental progress in earthquake system science. Our high and increasing participation of women and minority students is crucial given the nation's precipitous enrollment declines in CS/IT undergraduate degree programs, especially among women. UseIT also casts a "wider, farther" recruitment net that targets scholars interested in creative work but not traditionally attracted to summer science internships. Since 2002, SCEC/UseIT has challenged 79 students in three dozen majors from as many schools with difficult, real-world problems that require collaborative, interdisciplinary solutions. Interns design and engineer open-source software, creating increasingly sophisticated visualization tools (see "SCEC-VDO," session IN11), which are employed by SCEC researchers, in new curricula at the University of Southern California, and by outreach specialists who make animated movies for the public and the media. SCEC-VDO would be a valuable tool for research-oriented professional development programs.

  17. Earthquake swarms and local crustal spreading along major strike-slip faults in California

    USGS Publications Warehouse

    Weaver, C.S.; Hill, D.P.

    1978-01-01

    Earthquake swarms in California are often localized to areas within dextral offsets in the linear trend of active fault strands, suggesting a relation between earthquake swarms and local crustal spreading. Local crustal spreading is required by the geometry of dextral offsets when, as in the San Andreas system, faults have dominantly strike-slip motion with right-lateral displacement. Three clear examples of this relation occur in the Imperial Valley, Coso Hot Springs, and the Danville region, all in California. The first two of these areas are known for their Holocene volcanism and geothermal potential, which is consistent with crustal spreading and magmatic intrusion. The third example, however, shows no evidence for volcanism or geothermal activity at the surface. © 1978 Birkhäuser Verlag.

  18. The Loma Prieta earthquake of October 17, 1989: a brief geologic view of what caused the Loma Prieta earthquake and implications for future California earthquakes: What happened ... what is expected ... what can be done.

    USGS Publications Warehouse

    Ward, Peter L.; Page, Robert A.

    1990-01-01

    The San Andreas fault, in California, is the primary boundary between the North American plate and the Pacific plate. Land west of the fault has been moving northwestward relative to land on the east at an average rate of 2 inches per year for millions of years. This motion is not constant but occurs typically in sudden jumps during large earthquakes. This motion is relentless; therefore earthquakes in California are inevitable.

  19. Living With Earthquakes in California: A Survivor's Guide

    NASA Astrophysics Data System (ADS)

    Grant, Lisa B.

    I write this review from a California government building in a roomful of somber, frightened strangers with armed sheriffs guarding the door. We are prohibited from leaving. A state of emergency has been declared, the airports are closed, the burly man next to me is tearing up, and all I can think of is getting home to my loved ones. What are the odds of being trapped in a jury room with armed guards and a television set, watching the collapse of the World Trade Center Towers and the smoking Pentagon? What are the odds of being trapped in a building, thinking of loved ones, as the Earth shakes, the furniture dances, and the ceiling falls when the long-awaited ‘Big One’ finally hits California? The analogy is sobering.

  20. Earthquake and Tsunami planning, outreach and awareness in Humboldt County, California

    NASA Astrophysics Data System (ADS)

    Ozaki, V.; Nicolini, T.; Larkin, D.; Dengler, L.

    2008-12-01

    Humboldt County has the longest coastline in California and is one of the most seismically active areas of the state. It is at risk from earthquakes located on and offshore and from tsunamis generated locally by faults associated with the Cascadia subduction zone (CSZ), by other regional fault systems, and by distant sources elsewhere in the Pacific. In 1995 the California Division of Mines and Geology published the first earthquake scenario to include both strong ground shaking effects and a tsunami. As a result of the scenario, the Redwood Coast Tsunami Work Group (RCTWG), an organization of representatives from government agencies, tribes, service groups, academia, and the private sector from the three northern coastal California counties, was formed in 1996 to coordinate and promote earthquake and tsunami hazard awareness and mitigation. The RCTWG and its member agencies have sponsored a variety of projects, including education/outreach products and programs, tsunami hazard mapping, and signage and siren planning, and have sponsored an Earthquake-Tsunami Education Room at the Humboldt County fair for the past eleven years. Three editions of Living on Shaky Ground, an earthquake-tsunami preparedness magazine for California's North Coast, have been published since 1993, and a fourth is due to be published in fall 2008. In 2007, Humboldt County was the first region in the country to participate in a tsunami training exercise at FEMA's Emergency Management Institute in Emmitsburg, MD, and the first area in California to conduct a full-scale tsunami evacuation drill. The County has conducted numerous multi-agency, multi-discipline coordinated exercises using a county-wide tsunami response plan. Two Humboldt County communities were recognized as TsunamiReady by the National Weather Service in 2007. Over 300 tsunami hazard zone signs have been posted in Humboldt County since March 2008. Six assessment surveys from 1993 to 2006 have tracked preparedness actions and personal awareness of earthquake and tsunami hazards in the county, and additional surveys have tracked public awareness and tourist concerns about tsunami hazard signs. Over the thirteen-year period covered by the surveys, the percentage of houses secured to foundations increased from 58 to 80 percent, the percentage of respondents aware of a local tsunami hazard increased from 51 to 73 percent, and the percentage knowing what the Cascadia subduction zone is increased from 16 to 42 percent.

  1. Historiographical analysis of the 1857 Ft. Tejon earthquake, San Andreas Fault, California: Preliminary results

    NASA Astrophysics Data System (ADS)

    Martindale, D.; Evans, J. P.

    2002-12-01

    Past historical analyses of the 1857 Fort Tejon earthquake include Townley and Allen (1939); Wood (1955), who re-examined the earthquake and added some new material; and Agnew and Sieh (1978), who published an extensive review of the previous publications and included primary sources not formerly known. Since 1978, most authors have reiterated the findings of Agnew and Sieh, with the exception of Meltzner and Wald's 1998 work, which built on Sieh's foreshock research and included an extensive study of aftershocks. Approximately twenty-five years have passed since the last full investigation of the event. In the last several decades, libraries and archives have continued to gather additional documents. Staff members continually inventory new and existing collections, making them accessible to researchers today. As a result, we are conducting an updated examination, with the hope of gaining new insight regarding the 1857 Fort Tejon earthquake. We take a new approach to the topic: the research skills of a historian in collaboration with a geologist to generate quantitative data on the nature and location of ground shaking associated with the earthquake. We analyze documents from the Huntington Library, California State Historical Society, California State Library-California Room, Utah Historical Association Information Center, the Church of Jesus Christ of Latter-day Saints (LDS) Archives and Historical Department, Cal Tech Archives, the National Archives, and the Fort Tejon State Park. New facilities reviewed also include Utah State University, University of Utah, and the LDS Family History Center. Each facility not only provided formerly quoted sources, but many offered new materials. For example, previous scholars examined popular, well-known newspapers; yet publications in smaller towns and in languages other than English also existed. Thirty newspapers published in January 1857 were located. We find records of the event at least one year after the earthquake. One outcome of such a search is a collection of letters and approximately eight pictures useful in structure-damage analysis. Over 170 newspapers were published during 1857 throughout California, Nevada, and New Mexico Territory, encompassing the area of Arizona and New Mexico today. Historical information regarding the settlement of areas also proved useful. Although earlier scholars knew of LDS settlement missions in San Bernardino, California, and Las Vegas, Nevada, only brief information was located. Preliminary results include increasing the felt area to include Las Vegas, Nevada; support for a Mercalli intensity of IX or even X for San Bernardino and VIII or greater for sites NE of Sacramento; a northwest-to-southeast rupture pattern; and reports of electromagnetic disturbances. Based on these results, we suggest that the 1857 Fort Tejon earthquake was felt over a wider area, and in places created greater ground shaking, than previously documented.

  2. Time-Reversal to Estimate Focal Depth for Local, Shallow Earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Pearce, F.; Lu, R.; Toksoz, N.

    2007-12-01

    Current approaches for focal depth estimation are typically based on travel times and result in large uncertainties, primarily due to poor data coverage and inaccurate travel time picks. We propose an alternative method based on an adaptation of time-reversed acoustics (TRA). In the context of TRA theory, the autocorrelation of an earthquake recording can be thought of as the convolution of the source autocorrelation function with the autocorrelation of the Green's function describing propagation between source and receiver. Furthermore, the signal-to-noise ratio (S/N) of stationary phases in the Green's function may be improved by stacking the autocorrelations from many receivers. In this study, we employ such an approach to estimate the focal depth of shallow earthquakes based on the time lag between the direct P phase and the pP depth phase, which is assumed to be stationary across the receiver array. Focal depth estimates are easily obtained by multiplying half the pP time lag by the average velocity above the earthquake. We apply this methodology to estimate focal depths for several local earthquakes in Southern California. Earthquake recordings were obtained from the Southern California Earthquake Center (SCEC) for events with accurate, independent estimates of focal depth below about 15 km and local magnitudes between 4.0 and 6.0. We observe pP phases in the stacked autocorrelations that correspond to the focal depths listed in the SCEC catalog for earthquakes located throughout Southern California. The predictive capability of the method is limited by S/N, defined as the pP amplitude divided by the background noise level of the stacked correlation. By considering subsets of the Southern California array, we explore the sensitivity of the S/N to station density and location (i.e., epicentral distance and azimuth). We find S/N is generally better for subsets of receivers within regions with relatively simple geologic structure. We are currently developing an extension of this methodology using the time-frequency correlation function, which may significantly reduce the station coverage required for accurate focal depth estimation.
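
    As the record states, once the pP-P lag is read from the stacked autocorrelation, the depth estimate is simply half that lag times the average P velocity above the source. A minimal sketch of that last step, with an assumed lag and velocity (not values from the study):

      # Focal depth from the pP-P time lag picked on a stacked autocorrelation.
      # The lag and velocity below are illustrative, not values from the study.
      pp_lag_s = 4.2          # time lag between direct P and the pP depth phase, s
      avg_vp_km_s = 6.0       # average P-wave velocity above the hypocenter, km/s

      focal_depth_km = 0.5 * pp_lag_s * avg_vp_km_s
      print(f"Estimated focal depth: {focal_depth_km:.1f} km")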

  3. Uncertainties in Earthquake Loss Analysis: A Case Study From Southern California

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Guin, J.

    2005-12-01

    Probabilistic earthquake hazard and loss analyses play important roles in many areas of risk management, including earthquake-related public policy and insurance ratemaking. Rigorous loss estimation for portfolios of properties is difficult since there are various types of uncertainties in all aspects of modeling and analysis. The objective of this study is to investigate the sensitivity of earthquake loss estimation to uncertainties in regional seismicity, earthquake source parameters, ground motions, and the spatial correlation among sites, for typical property portfolios in Southern California. Southern California is an attractive region for such a study because it has a large population concentration exposed to significant levels of seismic hazard. During the last decade, there have been several comprehensive studies of most regional faults and seismogenic sources. There have also been detailed studies on regional ground motion attenuation and on regional and local site responses to ground motions. This information has been used by engineering seismologists to conduct regional seismic hazard and risk analysis on a routine basis. However, one of the more difficult tasks in such studies is the proper incorporation of uncertainties in the analysis. On the hazard side, there are uncertainties in the magnitudes, rates, and mechanisms of the seismic sources, in local site conditions, and in ground motion site amplifications. On the vulnerability side, there are considerable uncertainties in estimating the state of damage of buildings under different earthquake ground motions. On the analytical side, there are challenges in capturing the spatial correlation of ground motions and building damage, and in integrating thousands of loss distribution curves with different degrees of correlation. In this paper we address some of these issues by conducting loss analyses of a typical small portfolio in southern California, taking into consideration various source and ground motion uncertainties. The approach is designed to integrate loss distribution functions with different degrees of correlation for portfolio analysis. The analysis is based on the USGS 2002 regional seismicity model.
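
    A stripped-down illustration of how correlated ground-motion uncertainty propagates into portfolio loss is sketched below: two sites, lognormal ground-motion variability with an assumed inter-site correlation, and a toy vulnerability curve. All numbers and functional forms are invented for illustration and are unrelated to the portfolio or models analyzed in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Median PGA (g) at two nearby sites for one scenario event, plus lognormal
      # ground-motion variability with inter-site correlation rho (all assumed).
      median_pga = np.array([0.30, 0.25])
      sigma_ln = 0.6
      rho = 0.7
      cov = sigma_ln ** 2 * np.array([[1.0, rho], [rho, 1.0]])
      eps = rng.multivariate_normal(np.zeros(2), cov, size=n)
      pga = median_pga * np.exp(eps)

      # Toy vulnerability: mean damage ratio saturating with shaking level.
      values = np.array([2.0e6, 3.0e6])                 # replacement values, $
      damage_ratio = 1.0 - np.exp(-2.0 * pga)           # illustrative curve
      loss = (damage_ratio * values).sum(axis=1)        # portfolio loss per simulation

      print(f"Mean portfolio loss: ${loss.mean():,.0f}")
      print(f"99th-percentile loss: ${np.percentile(loss, 99):,.0f}")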

  4. Processed seismic motion records from earthquakes (1982-1993): Recorded at Scotty's Castle, California

    SciTech Connect

    Lum, P K; Honda, K K

    1993-10-01

    The 8-mm data tape contains the processed seismic data of earthquakes recorded at Scotty's Castle, California. The seismic data were recorded by seismographs maintained by the DOE/NV in Southern Nevada. Four files were generated from each seismic recorder: 1. uncorrected acceleration time histories; 2. corrected acceleration, velocity, and displacement time histories; 3. the original recording; and 4. Fourier amplitude spectra of acceleration.

  5. Earthquake-induced sediment failures on a 0.25° slope, Klamath River delta, California.

    USGS Publications Warehouse

    Field, M.E.; Gardner, J.V.; Jennings, A.E.; Edwards, B.D.

    1982-01-01

    On Nov. 8, 1980, a major earthquake (magnitude 6.5-7.2) occurred 60 km off the coast of N California. A survey of the area using high-resolution seismic-reflection and side-scan sonar equipment revealed the presence of extensive sediment failure and flows in a zone about 1 km wide and 20 km long that trends parallel to the shelf on the very gently sloping (less than 0.25°) Klamath River delta. -from Authors

  6. Deformation near the epicenter of the 1984 Round Valley, California, earthquake.

    USGS Publications Warehouse

    Gross, W.K.; Savage, J.C.

    1985-01-01

    A trilateration network extending from near Mammoth Lakes to Bishop, California, was resurveyed following the November 23, 1984, Round Valley earthquake (ML = 5.8). The network had previously been surveyed in 1982. Deformation apparently associated with the Round Valley earthquake was detected, as well as deformation due to the expansion of a magma chamber 8 km beneath the resurgent dome in the Long Valley caldera and right-lateral slip on the uppermost 2 km of the 1983 rupture surface in the south moat of the caldera. The deformation associated with the Round Valley earthquake suggests left-lateral slip on the north-northeasterly striking vertical plane defined by the aftershock hypocenters. (Edited author abstract) Refs.

  7. Focal mechanisms of Southern California offshore earthquakes: the effects of incomplete geographical data coverage on understanding rupture patterns

    NASA Astrophysics Data System (ADS)

    Brunner, K.; Kohler, M. D.; Weeraratne, D. S.

    2011-12-01

    Calculating accurate focal mechanisms for offshore seismic events is difficult due to a lack of nearby seismic stations, limited azimuthal coverage, and uncertain velocity structure. We conducted an experiment to determine what effect data from island seismic stations in Southern California (San Miguel, Santa Rosa, Santa Cruz, Santa Barbara, San Nicolas, Santa Catalina, and San Clemente Islands) and ocean bottom seismometers (OBSs) have on constraining focal mechanisms for earthquakes in the California Borderland with a local magnitude greater than three. Thirty-four OBSs were deployed in August of 2010 with the ALBACORE project to collect data for over a year before being recovered in September of 2011. Waveform data from those stations as well as the Southern California Seismic Network were analyzed to determine P-wave first-motion polarities for twenty-nine earthquakes with an acceptable signal-to-noise ratio. These data were then used to calculate focal mechanisms with and without the offshore stations using HASH v.1.2 [Hardebeck and Shearer, 2002], an algorithm that accounts for errors in earthquake location, velocity model, and polarity observations. Comparisons of these results show that including offshore stations reduces the fault-plane uncertainties and improves the solution probability, owing to the increased azimuthal coverage and smaller source-receiver distances. Plots of these solutions on maps of the offshore region indicate that the San Clemente fault, San Diego Trough fault, Palos Verdes fault, and additional unmapped faults are currently active. These observations agree with maps of more comprehensive seismicity patterns from the past twenty years. Additionally, the focal mechanisms show that the San Clemente fault, San Diego Trough fault, and a region south of San Nicolas Island all exhibit right-lateral movement. The Palos Verdes fault exhibits reverse faulting, and a region west of the northern Channel Islands exhibits normal faulting. These observations provide evidence that offshore faults are not purely strike-slip but have normal and reverse slip, and they present the possibility of producing tsunamis that could threaten the highly populated areas of Southern California.
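
    One simple way to quantify the improvement in azimuthal coverage described above is the azimuthal gap, the largest angular gap between station azimuths around an epicenter. The sketch below computes it for a hypothetical station set with and without offshore azimuths; the azimuth values are invented and do not correspond to actual SCSN, island, or OBS stations.

      import numpy as np

      def azimuthal_gap(azimuths_deg):
          # Largest gap, in degrees, between consecutive station azimuths.
          az = np.sort(np.mod(azimuths_deg, 360.0))
          gaps = np.diff(np.concatenate([az, [az[0] + 360.0]]))
          return gaps.max()

      onshore = np.array([10, 25, 40, 55, 70, 85])     # assumed onshore azimuths, deg
      offshore = np.array([150, 200, 250, 300])        # assumed island/OBS azimuths, deg
      print("gap, onshore only:", azimuthal_gap(onshore))
      print("gap, with offshore:", azimuthal_gap(np.concatenate([onshore, offshore])))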

  8. Cruise report for A1-98-SC southern California Earthquake Hazards Project

    USGS Publications Warehouse

    Normark, William R.; Bohannon, Robert G.; Sliter, Ray; Dunhill, Gita; Scholl, David W.; Laursen, Jane; Reid, Jane A.; Holton, David

    1999-01-01

    The focus of the Southern California Earthquake Hazards project, within the Western Region Coastal and Marine Geology team (WRCMG), is to identify the landslide and earthquake hazards and related ground-deformation processes that can potentially impact the social and economic well-being of the inhabitants of the Southern California coastal region, the most populated urban corridor along the U.S. Pacific margin. The primary objective is to help mitigate the earthquake hazards for the Southern California region by improving our understanding of how deformation is distributed (spatially and temporally) in the offshore with respect to the onshore region. To meet this overall objective, we are investigating the distribution, character, and relative intensity of active (i.e., primarily Holocene) deformation within the basins and along the shelf adjacent to the most highly populated areas (see Fig. 1). In addition, the project will examine the Pliocene-Pleistocene record of how this deformation has shifted in space and time. The results of this study should improve our knowledge of shifting deformation for both the long-term (10^5 to several 10^6 yr) and short-term (<50 ky) time frames and enable us to identify actively deforming structures that may constitute current significant seismic hazards.

  9. Statistical analysis of an earthquake-induced landslide distribution - The 1989 Loma Prieta, California event

    USGS Publications Warehouse

    Keefer, D.K.

    2000-01-01

    The 1989 Loma Prieta, California, earthquake (moment magnitude, M=6.9) generated landslides throughout an area of about 15,000 km2 in central California. Most of these landslides occurred in an area of about 2000 km2 in the mountainous terrain around the epicenter, where they were mapped during field investigations immediately following the earthquake. The distribution of these landslides is investigated statistically, using regression and one-way analysis of variance (ANOVA) techniques to determine how the occurrence of landslides correlates with distance from the earthquake source, slope steepness, and rock type. The landslide concentration (defined as the number of landslide sources per unit area) has a strong inverse correlation with distance from the earthquake source and a strong positive correlation with slope steepness. The landslide concentration differs substantially among the various geologic units in the area. The differences correlate to some degree with differences in lithology and degree of induration, but this correlation is less clear, suggesting a more complex relationship between landslide occurrence and rock properties. © 2000 Elsevier Science B.V. All rights reserved.
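
    The regression part of the analysis described above can be sketched as a log-linear fit of landslide concentration against distance from the source and slope steepness. The data below are synthetic and the fitted coefficients have no connection to the Loma Prieta results; the sketch only shows the form of the calculation.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic mapping cells: distance to the earthquake source (km), slope (deg),
      # and a log-linear landslide concentration with noise (entirely illustrative).
      distance = rng.uniform(5, 50, size=200)
      slope = rng.uniform(5, 40, size=200)
      log_conc = 1.5 - 0.05 * distance + 0.04 * slope + rng.normal(0, 0.3, size=200)

      # Least-squares fit of log(concentration) on distance and slope.
      X = np.column_stack([np.ones_like(distance), distance, slope])
      coef, *_ = np.linalg.lstsq(X, log_conc, rcond=None)
      print("intercept, distance, slope coefficients:", np.round(coef, 3))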

  10. [Engineering aspects of seismic behavior of health-care facilities: lessons from California earthquakes].

    PubMed

    Rutenberg, A

    1995-03-15

    The construction of health-care facilities is similar to that of other buildings. Yet the need to function immediately after an earthquake, the helplessness of the many patients and the high and continuous occupancy of these buildings, require that special attention be paid to their seismic performance. Here the lessons from the California experience are invaluable. In this paper the behavior of California hospitals during destructive earthquakes is briefly described. Adequate structural design and execution, and securing of nonstructural elements are required to ensure both safety of occupants, and practically uninterrupted functioning of equipment, mechanical and electrical services and other vital systems. Criteria for post-earthquake functioning are listed. In view of the hazards to Israeli hospitals, in particular those located along the Jordan Valley and the Arava, a program for the seismic evaluation of medical facilities should be initiated. This evaluation should consider the hazards from nonstructural elements, the safety of equipment and systems, and their ability to function after a severe earthquake. It should not merely concentrate on safety-related structural behavior. PMID:7750814

  11. Analysis of Earthquake Recordings Obtained from the Seafloor Earthquake Measurement System (SEMS) Instruments Deployed off the Coast of Southern California

    USGS Publications Warehouse

    Boore, D.M.; Smith, C.E.

    1999-01-01

    For more than 20 years, a program has been underway to obtain records of earthquake shaking on the seafloor at sites offshore of southern California, near oil platforms. The primary goal of the program is to obtain data that can help determine whether ground motions at offshore sites are significantly different from those at onshore sites; if so, caution may be necessary in using onshore motions as the basis for the seismic design of oil platforms. We analyze data from eight earthquakes recorded at six offshore sites; these are the most important data recorded on these stations to date. Seven of the earthquakes were recorded at only one offshore station; the eighth event was recorded at two sites. The earthquakes range in magnitude from 4.7 to 6.1. Because of the scarcity of multiple recordings from any one event, most of the analysis is based on the ratio of spectra from the vertical and horizontal components of motion. The results clearly show that the offshore recordings have very low vertical motions compared to those from an average onshore site, particularly at short periods. Theoretical calculations find that the water layer has little effect on the horizontal components of motion but that it produces a strong spectral null on the vertical component at the resonant frequency of P waves in the water layer. The vertical-to-horizontal ratios for a few selected onshore sites underlain by relatively low shear-wave velocities are similar to the ratios from offshore sites for frequencies less than about one-half the water layer P-wave resonant frequency, suggesting that the shear-wave velocities beneath a site are more important than the water layer in determining the character of the ground motions at lower frequencies.
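
    The spectral null described above sits at the fundamental P-wave resonant frequency of the water layer, which is commonly approximated as the P velocity in water divided by four times the water depth. A minimal sketch with assumed depths (the depths are illustrative, not those of the SEMS sites):

      # Fundamental P-wave resonant frequency of the water layer, approximated
      # as f0 = Vp_water / (4 * depth). Depths below are illustrative.
      VP_WATER_M_S = 1500.0

      for depth_m in (30.0, 60.0, 100.0, 200.0):
          f0 = VP_WATER_M_S / (4.0 * depth_m)
          print(f"water depth {depth_m:5.0f} m -> resonant frequency ~ {f0:4.1f} Hz")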

  12. Geophysical Investigations Along the Hayward Fault, Northern California, and Their Implications on Earthquake Hazards

    NASA Astrophysics Data System (ADS)

    Ponce, D. A.; Graymer, R. W.; Hildenbrand, T. G.; Jachens, R. C.; Simpson, R. W.

    2007-12-01

    Geophysical studies indicate that the Hayward Fault follows a pre-existing basement structure and that local geologic features play an important role in earthquake seismicity. The recent creeping trace of the Hayward Fault extends for about 90 km from San Pablo Bay in the northwest to Fremont in the southeast, and together with its northern extension, the Rodgers Creek Fault, is regarded as one of the most hazardous faults in northern California. The Hayward Fault is predominantly a right-lateral strike-slip fault that forms the western boundary of the East Bay Hills and separates Franciscan Complex rocks on the southwest from Coast Range Ophiolite and Great Valley Sequence basement rocks on the northeast. The Hayward Fault is characterized by distinct linear gravity and magnetic anomalies that correlate with changes in geology, structural trends, creep rates, and clusters of seismicity. These correlations indicate the existence of fault-zone discontinuities that probably reflect changes in mechanical properties. These fault-zone discontinuities may play a role in defining fault segments--locations where recurring seismic ruptures may tend to nucleate or terminate. Along the central part of the Hayward Fault, a prominent gravity and magnetic anomaly correlates with an exposed gabbro body, the San Leandro gabbro. Modeling of these anomalies reveals that the San Leandro gabbro is much more extensive in the subsurface than the outcrop pattern suggests, extending to a depth of about 6-8 km. The inferred extent of the San Leandro gabbro, its geologic setting, and associated seismicity suggest that the Hayward Fault evolved from a pre-existing basement feature, similar to the ancestral Coast Range Fault. Combined modeling and relocated double-difference seismicity data indicate that the dip of the fault surface varies from near vertical in the north to about 75 degrees in the central part to about 50 degrees in the south near Fremont and ultimately connects with the central Calaveras Fault. A seismicity cluster along the western edge of the San Leandro gabbro and a bend in the fault associated with the gravity and magnetic high along the gabbro body suggest that this mafic body influences fault geometry and behavior, and may serve as a nucleation point for large earthquakes on the fault.

  13. Cruise report for 01-99-SC: southern California earthquake hazards project

    USGS Publications Warehouse

    Normark, William R.; Reid, Jane A.; Sliter, Ray W.; Holton, David; Gutmacher, Christina E.; Fisher, Michael A.; Childs, Jonathan R.

    1999-01-01

    The focus of the Southern California Earthquake Hazards project is to identify the landslide and earthquake hazards and related ground-deformation processes occurring in the offshore areas that have significant potential to impact the inhabitants of the Southern California coastal region. The project activity is supported through the Coastal and Marine Geology Program of the Geologic Division of the U. S. Geological Survey (USGS) and is a component of the Geologic Division's Science Strategy under Goal 1—Conduct Geologic Hazard Assessments for Mitigation Planning (Bohlen et al., 1998). The project research is specifically stated under Activity 1.1.2 of the Science Strategy: Earthquake Hazard Assessments and Loss Reduction Products in Urban Regions. This activity involves "research, seismic and geodetic monitoring, field studies, geologic mapping, and analyses needed to provide seismic hazard assessments of major urban centers in earthquake-prone regions including adjoining coastal and offshore areas." The southern California urban areas, which form the most populated urban corridor along the U.S. Pacific margin, are among a few specifically designated for special emphasis under the Division's science strategy (Bohlen et al., 1998). The primary objective of the project is to help mitigate the earthquake hazards for the Southern California region by improving our understanding of how deformation is distributed (spatially and temporally) in the offshore with respect to the onshore region. To meet this objective, we are conducting field investigations to observe the distribution, character, and relative intensity of active (i.e., primarily Holocene) deformation within the basins and along the shelf adjacent to the most highly populated areas (Fig. 1). In addition, acoustic imaging should help determine the subsurface dimensions of the faults and identify the size and frequency of submarine landslides, both of which are necessary for evaluating the potential for generating destructive tsunamis in the southern California offshore. In order to evaluate the strain associated with the offshore structures, the initial results from the field mapping under this project will be used to identify possible sites for deployment of acoustic geodetic instruments to monitor strain in the offshore region. A major goal of mapping under this project is to provide detailed geologic and geophysical information in GIS data bases that build on the earlier studies and use the new data to precisely locate active faults and to map recent submarine landslide deposits.

  14. A Composite Chronology of Earthquakes From the Bidart fan Paleoseismic Site, San Andreas Fault, California

    NASA Astrophysics Data System (ADS)

    Grant, L. B.; Arrowsmith, J. R.; Akciz, S.

    2005-12-01

    Chronologies of earthquakes spanning at least ten ruptures at multiple sites are required for developing robust models of fault behavior and forecasts of future earthquakes. Such a long chronology can be obtained by placing multiple trenches across the San Andreas fault at the Bidart alluvial fan paleoseismic site in the Carrizo Plain to capture the spatio-temporal record of earthquakes created by the interplay of surface rupture and spatially varying deposition. Exposures from one trench reveal evidence of at least 6 and probably 7 earthquakes since 3000 BP. Evidence of 7 earthquakes since 2200 BP has been interpreted from exposures in 3 other trenches. Analysis of exposures from two new trenches is in progress. Excavations reveal alternating sequences of depositional preservation and gaps in the record of earthquakes. The "gaps" are massive featureless zones caused by bioturbation of the fan surface while that portion of the fan was depositionally inactive. When the depositional record of 4 trenches is combined, it yields a composite chronology of at least 10 surface ruptures over the last 3000 years, for a minimum average recurrence interval of 300 years if the most recent event exposed in all trenches is assumed to be the 1857 Fort Tejon earthquake. So far, the uncertainty in dates of pre-1857 ruptures ranges from decades to millennia, and at least 5 of the 10 recognized earthquakes are obscured by depositional gaps at one of the trench sites. Therefore, synchroneity of ruptures at different trench sites is difficult to establish, and there is the possibility that the existing record contains more than 10 earthquakes and/or additional ruptures may have occurred that are not preserved by deposition.

  15. WHITTIER NARROWS, CALIFORNIA EARTHQUAKE OF OCTOBER 1, 1987-PRELIMINARY ASSESSMENT OF STRONG GROUND MOTION RECORDS.

    USGS Publications Warehouse

    Brady, A.G.; Etheredge, E.C.; Porcella, R.L.

    1988-01-01

    More than 250 strong-motion accelerograph stations were triggered by the Whittier Narrows, California earthquake of 1 October 1987. Considering the number of multichannel structural stations in the area of strong shaking, this set of records is one of the more significant in history. Three networks, operated by the U. S. Geological Survey, the California Division of Mines and Geology, and the University of Southern California, produced the majority of the records. The excellent performance of the instruments in these and the smaller arrays is attributable to the quality of the maintenance programs. Readiness for a magnitude 8 event is directly related to these maintenance programs. Even prior to computer analysis of the analog film records, a number of important structural resonant modes can be identified, and their frequencies and simple mode shapes have been scaled.

  16. Slip budget and potential for a M7 earthquake in central California

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.; Archuleta, Ralph J.

    1988-10-01

    The slip rate budget of the San Andreas fault (SAF) in central California, which is approximately 33 mm/yr, is accounted for by a change in the slip release mechanism along the fault. In the NW section of the fault, between Bear Valley and Monarch Peak, creep apparently accounts for the slip budget, with seismicity contributing negligibly. The section at Parkfield marks the transition from a creeping to a locked fault trace. Since the M8 1857 earthquake, five M6 earthquakes have occurred but have not completely accounted for the slip budget. Southeast of Parkfield, the SAF has been locked since 1857. From Cholame to Bitterwater Valley, this section now lags the deep slip by the amount of slip released in 1857; consequently, faulting in this section could occur at the time of the next Parkfield earthquake. If this earthquake releases the slip deficit accumulated in the transition zone and in the locked zone, the earthquake will have a moment magnitude of M7.2.
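
    Rough bookkeeping of this kind can be reproduced with the standard seismic-moment relations M0 = mu * A * D and Mw = (2/3)(log10 M0 - 9.1). The sketch below uses illustrative rupture dimensions and accumulation time, not values taken from the paper, and shows how a few meters of slip deficit over a Parkfield-to-Cholame-scale rupture yields a magnitude near M7.

```python
# Rough moment-magnitude bookkeeping for a slip deficit, using the standard
# relations M0 = mu * A * D and Mw = (2/3) * (log10(M0) - 9.1) (M0 in N*m).
# All dimensions below are illustrative assumptions, not values from the paper.
import math

MU = 3.0e10          # Pa, nominal crustal rigidity
slip_rate = 0.033    # m/yr (~33 mm/yr deep slip rate)
years_locked = 130   # yr of accumulation (assumed)
length = 60e3        # m, assumed rupture length
width = 12e3         # m, assumed seismogenic width

slip_deficit = slip_rate * years_locked          # m
M0 = MU * length * width * slip_deficit          # N*m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
print(f"slip deficit ~ {slip_deficit:.1f} m, M0 ~ {M0:.2e} N*m, Mw ~ {Mw:.1f}")
```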

  17. Seismic Constraints on Fault-Zone Rheology from Repeating Earthquakes at Parkfield, California

    NASA Astrophysics Data System (ADS)

    Taira, T.; Nadeau, R. M.; Dreger, D. S.

    2010-12-01

    Knowledge of subsurface fault-zone frictional properties is key to understanding the mechanics of postseismic deformation, aftershocks, and the occurrence of triggered earthquakes following larger events. We show that cumulative seismic slips from repeating earthquake sequences can provide a means of inferring in-situ frictional parameters at seismogenic depth. Using the borehole HRSN seismic array near Parkfield, California, we have monitored the temporal behavior of sequences of microrepeating earthquakes for over two decades, including the postseismic period of the 2004 M 6.0 Parkfield earthquake. Using repeating earthquake data from 27 microearthquake sequences extending over a depth range of 10 km, we estimate fault-zone rheological properties over this depth range by employing rate-strengthening friction [Perfettini and Avouac, 2005] and viscoelastic models [Montesi, 2004]. We find that 24 of 27 sequences can be well explained by the rate-strengthening friction model. Following Perfettini and Avouac [2005], we then evaluate the frictional parameter aσ, where a is a friction coefficient and σ is the effective normal stress, using the static Coulomb stress change computed from the coseismic slip model of the 2004 Parkfield mainshock [Kim and Dreger, 2008]. Our results suggest that aσ ranges from 0.01 MPa to 0.5 MPa, which is consistent with previous laboratory studies [Marone et al., 1991].
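
    For readers unfamiliar with the rate-strengthening framework cited above, the sketch below gives one closed-form solution for a velocity-strengthening spring-slider subjected to a Coulomb stress step under steady loading; it is written in the spirit of that framework but is not the paper's implementation, and all parameter values are illustrative assumptions.

```python
# Sketch of a rate-strengthening (velocity-strengthening) afterslip response:
# a spring-slider with friction tau = const + a*sigma*ln(V/Vpl) gives a
# logistic relaxation of slip rate after a Coulomb stress step dCFF.
# Parameter values below are illustrative assumptions only.
import numpy as np

def afterslip_velocity(t, dCFF, a_sigma, v_pl, t_r):
    """Slip rate at time t (same units as t_r) after a stress step dCFF (Pa).

    a_sigma : a * effective normal stress (Pa), the frictional parameter
    v_pl    : long-term loading (plate) velocity
    t_r     : relaxation time of the creeping patch
    """
    d = np.exp(dCFF / a_sigma)                     # initial velocity jump factor
    return v_pl * d * np.exp(t / t_r) / (1.0 + d * (np.exp(t / t_r) - 1.0))

t = np.linspace(0.0, 5.0, 501)                     # years after the mainshock
v = afterslip_velocity(t, dCFF=0.1e6, a_sigma=0.05e6, v_pl=0.025, t_r=1.0)
cumulative_slip = np.cumsum(v) * (t[1] - t[0])     # crude numerical integration
print(f"initial rate ~ {v[0]:.3f} m/yr, 5-yr afterslip ~ {cumulative_slip[-1]:.3f} m")
```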

  18. Surface fault slip associated with the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Rymer, M.J.; Tinsley, J. C., III; Treiman, J.A.; Arrowsmith, J.R.; Ciahan, K.B.; Rosinski, A.M.; Bryant, W.A.; Snyder, H.A.; Fuis, G.S.; Toke, N.A.; Bawden, G.W.

    2006-01-01

    Surface fracturing occurred along the San Andreas fault, the subparallel Southwest Fracture Zone, and six secondary faults in association with the 28 September 2004 (M 6.0) Parkfield earthquake. Fractures formed discontinuous breaks along a 32-km-long stretch of the San Andreas fault. Sense of slip was right lateral; only locally was there a minor (1-11 mm) vertical component of slip. Right-lateral slip in the first few weeks after the event, early in its afterslip period, ranged from 1 to 44 mm. Our observations in the weeks following the earthquake indicated that the highest slip values are in the Middle Mountain area, northwest of the mainshock epicenter (creepmeter measurements indicate a similar distribution of slip). Surface slip along the San Andreas fault developed soon after the mainshock; field checks in the area near Parkfield and about 5 km to the southeast indicated that surface slip developed more than 1 hr but generally less than 1 day after the event. Slip along the Southwest Fracture Zone developed coseismically and extended about 8 km. Sense of slip was right lateral; locally there was a minor to moderate (1-29 mm) vertical component of slip. Right-lateral slip ranged from 1 to 41 mm. Surface slip along secondary faults was right lateral; the right-lateral component of slip ranged from 3 to 5 mm. Surface slip in the 1966 and 2004 events occurred along both the San Andreas fault and the Southwest Fracture Zone. In 1966 the length of ground breakage along the San Andreas fault extended 5 km longer than that mapped in 2004. In contrast, the length of ground breakage along the Southwest Fracture Zone was the same in both events, yet the surface fractures were more continuous in 2004. Surface slip on secondary faults in 2004 indicated previously unmapped structural connections between the San Andreas fault and the Southwest Fracture Zone, further revealing aspects of the structural setting and fault interactions in the Parkfield area.

  19. Real time test of the long-range aftershock algorithm as a tool for mid-term earthquake prediction in Southern California

    NASA Astrophysics Data System (ADS)

    Prozorov, A. G.; Schreider, S. Yu.

    1990-04-01

    The performance of an earthquake prediction algorithm published in 1982 is examined in this paper. The algorithm is based on the hypothesis of long-range interaction between strong and moderate earthquakes in a region. It had been applied retrospectively to the prediction of earthquakes with M≥6.4 in Southern California for the time interval 1932-1979, with the following results: 9 out of 10 strong earthquakes were predicted, with an average spatial accuracy of 58 km and an average delay time (the interval between a strong earthquake and its best precursor) of 9.4 years, varying from 0.8 to 27.9 years. During the interval following the period studied in that publication, namely 1980-1988, four earthquakes occurred in the region with magnitude M≥6.4 in at least one of the catalogs, Caltech or NOAA. Three of them, Coalinga in May 1983, Chalfant Valley in July 1986, and Superstition Hills in November 1987, were successfully predicted by the published algorithm. The missed event is the pair of Mammoth Lakes earthquakes of May 1980, which we treat as a single event because of their closeness in time and space. This event occurred near the northern boundary of the region; it, too, would have been predicted had the northern boundary been moved from 38°N to 39°N, in which case the spatial accuracy of the prediction would have been 30 km. The average area declared by the algorithm as an area of increased probability of a strong earthquake, i.e., the area within 111 km of all long-range aftershocks currently present on the map of the region during 1980-1988, equals 47% of the total area of the region when the latter is measured in accordance with the density distribution of earthquakes in California, approximated by the catalog of earthquakes with M≥5; in purely geometrical terms it is approximately 17% of the total area. The real-time test therefore shows a 1.6-fold increase in the occurrence of C-events in the alarmed area relative to the normal rate of seismicity, although the sample is too small for this increase to be statistically significant. We adjust the parameters of the algorithm in accordance with the new material and publish them here for further real-time testing.
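
    The 1.6-fold figure quoted above is a simple probability gain, which can be reproduced directly from the numbers in the abstract:

```python
# Back-of-the-envelope probability gain for an alarm-based prediction test:
# gain = (fraction of target events that fall in alarms) /
#        (fraction of space-time occupied by alarms).
# Numbers below are taken from the abstract (3 of 4 events predicted,
# alarms covering ~47% of the seismicity-weighted area).
events_predicted = 3
events_total = 4
alarm_fraction = 0.47   # seismicity-weighted fraction of the region on alarm

gain = (events_predicted / events_total) / alarm_fraction
print(f"probability gain ~ {gain:.1f}")   # ~1.6, as quoted in the abstract
```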

  20. Web Services and Other Enhancements at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2012-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data requests delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and RMS. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These data can be accessed through the above web services and through special NCEDC web pages.
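
    As a hedged illustration of how services of this kind are typically consumed from a script, the sketch below uses ObsPy's generic FDSN client, assuming the NCEDC endpoint is registered in the local ObsPy installation; the network, station, and event parameters are examples only, not part of the abstract.

```python
# One way to exercise web services of this kind from a script, using ObsPy's
# generic FDSN client (assuming "NCEDC" is available as a registered endpoint
# in the ObsPy installation; network/station codes are examples only).
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("NCEDC")
t0 = UTCDateTime("2014-08-24T10:20:44")   # example time window

# Station metadata (cf. the StationXML inventory service).
inventory = client.get_stations(network="BK", station="CMB", level="channel")

# A few minutes of waveform data (cf. the MiniSEED data-request service).
stream = client.get_waveforms("BK", "CMB", "*", "BHZ", t0, t0 + 300)

# Event parameters (cf. the earthquake-catalog service, returned as QuakeML).
catalog = client.get_events(starttime=t0 - 3600, endtime=t0 + 3600, minmagnitude=5)

print(inventory)
print(stream)
print(catalog)
```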

  1. Seismic velocity structure and earthquake relocation for the magmatic system beneath Long Valley Caldera, eastern California

    NASA Astrophysics Data System (ADS)

    Lin, Guoqing

    2015-04-01

    A new three-dimensional (3-D) seismic velocity model and high-precision location catalog for earthquakes between 1984 and 2014 are presented for Long Valley Caldera and its adjacent fault zones in eastern California. The simul2000 tomography algorithm is applied to derive the 3-D Vp and Vp/Vs models using first arrivals from 1004 composite earthquakes obtained from the original seismic data at the Northern California Earthquake Data Center. The resulting Vp model reflects geological structures and agrees with previous local tomographic studies. The simultaneously resolved Vp/Vs model is a major contribution of this study, providing an important complement to the Vp model for the interpretation of structural heterogeneities and physical properties in the study area. The caldera is dominated by low Vp anomalies at shallow depths due to postcaldera fill. High Vp and low Vp/Vs values are resolved from the surface to ~ 3.4 km depth beneath the center of the caldera, corresponding to the structural uplift of the Resurgent Dome. An aseismic body with low Vp and high Vp/Vs anomalies at 4.2-6.2 km depth below the surface is consistent with the location of partial melt suggested by previous studies based on Vp models only and the inflation source locations based on geodetic modeling. The Sierran crystalline rocks outside the caldera are generally characterized by high Vp and low Vp/Vs values. The newly resolved velocity model improves absolute location accuracy for the seismicity in the study area and ultimately provides the basis for a high-precision earthquake catalog based on similar-event cluster analysis and waveform cross-correlation data. The fine-scale velocity structure and precise earthquake relocations are useful for investigating magma sources, seismicity, and stress interaction, and for other seismological studies in Long Valley.

  2. Earthquake Education and Public Information Centers: A Collaboration Between the Earthquake Country Alliance and Free-Choice Learning Institutions in California

    NASA Astrophysics Data System (ADS)

    Degroot, R. M.; Springer, K.; Brooks, C. J.; Schuman, L.; Dalton, D.; Benthien, M. L.

    2009-12-01

    In 1999 the Southern California Earthquake Center initiated an effort to expand its reach to multiple target audiences through the development of an interpretive trail on the San Andreas fault at Wallace Creek and an earthquake exhibit at Fingerprints Youth Museum in Hemet. These projects and involvement with the San Bernardino County Museum in Redlands beginning in 2007 led to the creation of Earthquake Education and Public Information Centers (EPIcenters) in 2008. The impetus for the development of the network was to broaden participation in The Great Southern California ShakeOut. By 2009 it had grown more comprehensive in scope, evolving into a statewide network. EPIcenters constitute a variety of free-choice learning institutions, representing museums, science centers, libraries, universities, parks, and other places visited by a variety of audiences including families, seniors, and school groups. They share a commitment to demonstrating and encouraging earthquake preparedness. EPIcenters coordinate Earthquake Country Alliance activities in their county or region, lead presentations or organize events in their communities, or in other ways demonstrate leadership in earthquake education and risk reduction. The San Bernardino County Museum (Southern California) and The Tech Museum of Innovation (Northern California) serve as EPIcenter regional coordinating institutions. They interact with over thirty institutional partners who have implemented a variety of activities from displays and talks to earthquake exhibitions. While many activities are focused on the time leading up to and just after the ShakeOut, most EPIcenter members conduct activities year round. Network members at Kidspace Museum in Pasadena and San Diego Natural History Museum have formed EPIcenter focus groups on early childhood education and safety and security. This presentation highlights the development of the EPIcenter network, synergistic activities resulting from this collaboration, and lessons learned from interacting with free-choice learning institutions.

  3. Earthquake.

    PubMed

    Cowen, A R; Denney, J P

    1994-04-01

    On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439

  4. Earthquake Prediction and Forecasting

    NASA Astrophysics Data System (ADS)

    Jackson, David D.

    Prospects for earthquake prediction and forecasting, and even their definitions, are actively debated. Here, "forecasting" means estimating the future earthquake rate as a function of location, time, and magnitude. Forecasting becomes "prediction" when we identify special conditions that make the immediate probability much higher than usual and high enough to justify exceptional action. Proposed precursors run from aeronomy to zoology, but no identified phenomenon consistently precedes earthquakes. The reported prediction of the 1975 Haicheng, China earthquake is often proclaimed as the most successful, but the success is questionable. An earthquake predicted to occur near Parkfield, California in 1988 ± 5 years has not happened. Why is prediction so hard? Earthquakes start in a tiny volume deep within an opaque medium; we do not know their boundary conditions, initial conditions, or material properties well; and earthquake precursors, if any, hide amongst unrelated anomalies. Earthquakes cluster in space and time, and following a quake, earthquake probability spikes. Aftershocks illustrate this clustering, and later earthquakes may even surpass earlier ones in size. However, the main shock in a cluster usually comes first and causes the most damage. Specific models help reveal the physics and allow intelligent disaster response. Modeling stresses from past earthquakes may improve forecasts, but this approach has not yet been validated prospectively. Reliable prediction of individual quakes is not realistic in the foreseeable future, but probabilistic forecasting provides valuable information for reducing risk. Recent studies are also leading to exciting discoveries about earthquakes.
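
    In the simplest time-independent case, the "rate" in a forecast converts to a probability through the Poisson relation P = 1 - exp(-lambda*T). The rate below is an illustrative assumption, not a published value.

```python
# "Forecasting" in the sense used above reduces, in the simplest case, to a
# rate: if a region's long-term rate of M >= m earthquakes is lam per year,
# a time-independent (Poisson) forecast gives
#     P(at least one in T years) = 1 - exp(-lam * T).
import math

lam = 0.05   # assumed: one qualifying event per 20 years on average
for T in (1, 5, 30):
    p = 1.0 - math.exp(-lam * T)
    print(f"P(>=1 event in {T:2d} yr) = {p:.2f}")
```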

  5. The Redwood Coast Tsunami Work Group: Promoting Earthquake and Tsunami Resilience on California's North Coast

    NASA Astrophysics Data System (ADS)

    Dengler, L. A.; Henderson, C.; Larkin, D.; Nicolini, T.; Ozaki, V.

    2014-12-01

    In historic times, Northern California has suffered the greatest losses from tsunamis in the U.S. contiguous 48 states. 39 tsunamis have been recorded in the region since 1933, including five that caused damage. This paper describes the Redwood Coast Tsunami Work Group (RCTWG), an organization formed in 1996 to address the tsunami threat from both near and far sources. It includes representatives from government agencies, public, private and volunteer organizations, academic institutions, and individuals interested in working to reduce tsunami risk. The geographic isolation and absence of scientific agencies such as the USGS and CGS in the region, and relatively frequent occurrence of both earthquakes and tsunami events has created a unique role for the RCTWG, with activities ranging from basic research to policy and education and outreach programs. Regional interest in tsunami issues began in the early 1990s when there was relatively little interest in tsunamis elsewhere in the state. As a result, the group pioneered tsunami messaging and outreach programs. Beginning in 2008, the RCTWG has partnered with the National Weather Service and the California Office of Emergency Services in conducting the annual "live code" tsunami communications tests, the only area outside of Alaska to do so. In 2009, the RCTWG joined with the Southern California Earthquake Alliance and the Bay Area Earthquake Alliance to form the Earthquake Country Alliance to promote a coordinated and consistent approach to both earthquake and tsunami preparedness throughout the state. The RCTWG has produced and promoted a variety of preparedness projects including hazard mapping and sign placement, an annual "Earthquake - Tsunami Room" at County Fairs, public service announcements and print material, assisting in TsunamiReady community recognition, and facilitating numerous multi-agency, multidiscipline coordinated exercises, and community evacuation drills. Nine assessment surveys from 1993 to 2013 have tracked preparedness actions and personal awareness of tsunami hazards. Over the twenty-year period covered by the surveys, respondents aware of a local tsunami hazard increased from 51 to 90 percent and awareness of the Cascadia subduction zone increased from 16 to 60 percent.

  6. Mapping probability of fire occurrence in San Jacinto Mountains, California, USA

    NASA Astrophysics Data System (ADS)

    Chou, Yue Hong; Minnich, Richard A.; Chase, Richard A.

    1993-01-01

    An ecological data base for the San Jacinto Mountains, California, USA, was used to construct a probability model of wildland fire occurrence. The model incorporates both environmental and human factors, including vegetation, temperature, precipitation, human structures, and transportation. Spatial autocorrelation was examined for both fire activity and vegetation to determine the specification of neighborhood effects in the model. Parameters were estimated using stepwise logistic regressions. Among the explanatory variables, the variable that represents the neighborhood effects of spatial processes is shown to be of great importance in the distribution of wildland fires. An important implication of this result is that the management of wildland fires must take into consideration neighborhood effects in addition to environmental and human factors. The distribution of fire occurrence probability is more accurately mapped when the model incorporates the spatial term of neighborhood effects. The map of fire occurrence probability is useful for designing large-scale management strategies of wildfire prevention.
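
    The sketch below gives a minimal, hypothetical version of the modeling approach described above: a logistic regression on environmental covariates plus a "neighborhood" covariate, fit to synthetic data. Variable names, coefficients, and data are all illustrative assumptions, not the study's data or results.

```python
# Sketch of a fire-occurrence probability model in the spirit described above:
# a logistic regression on environmental covariates plus a neighborhood
# covariate (fraction of adjacent cells that burned), fit to synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical per-cell covariates (standardized or fractional).
temperature = rng.normal(0, 1, n)
precipitation = rng.normal(0, 1, n)
dist_to_roads = rng.normal(0, 1, n)
neighbor_burned = rng.uniform(0, 1, n)   # neighborhood-effect covariate

# Synthetic "truth": the neighborhood effect dominates, as the study found.
logit = (-1.0 + 0.5 * temperature - 0.4 * precipitation
         - 0.3 * dist_to_roads + 2.0 * neighbor_burned)
burned = rng.uniform(0, 1, n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([temperature, precipitation, dist_to_roads, neighbor_burned])
model = LogisticRegression().fit(X, burned)
print("fitted coefficients:", np.round(model.coef_[0], 2))

# Mapped probability for one hypothetical cell.
cell = np.array([[1.2, -0.5, 0.3, 0.6]])
print("fire occurrence probability:", round(model.predict_proba(cell)[0, 1], 2))
```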

  7. Credible occurrence probabilities for extreme geophysical events: earthquakes, volcanic eruptions, magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.

    2012-01-01

    Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events $k$ with sizes that exceed chosen thresholds during specific durations of time $\tau$. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, $k/\tau$, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: $(1/\tau)[(\sqrt{k} - z/2)^{2},(\sqrt{k} + z/2)^{2}]$, where $z$ is a parameter that specifies the width, $z=1$ ($z=2$) corresponding to $1\sigma$, 68.3% ($2\sigma$, 95.4%). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
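
    The interval quoted above is simple enough to implement directly; the event count and observation span below are illustrative, not values from the paper.

```python
# Direct implementation of the interval quoted above: with k events observed
# in tau years, the rate estimate is k/tau and the (approximate) two-sided
# interval is (1/tau) * [(sqrt(k) - z/2)**2, (sqrt(k) + z/2)**2].
import math

def rate_interval(k, tau, z=1.0):
    lo = (math.sqrt(k) - z / 2.0) ** 2 / tau
    hi = (math.sqrt(k) + z / 2.0) ** 2 / tau
    return k / tau, max(lo, 0.0), hi

k, tau = 4, 150.0                             # e.g. 4 extreme events in 150 yr (assumed)
rate, lo, hi = rate_interval(k, tau, z=2.0)   # ~95% interval
p10 = 1.0 - math.exp(-rate * 10.0)            # 10-yr occurrence probability
print(f"rate = {rate:.4f}/yr  (95% interval {lo:.4f}-{hi:.4f})")
print(f"10-yr occurrence probability ~ {p10:.2f}")
```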

  8. Analysis of Foreshock Sequences in California and Implications for Earthquake Triggering

    NASA Astrophysics Data System (ADS)

    Chen, Xiaowei; Shearer, Peter M.

    2016-01-01

    We analyze foreshock activity in California and compare observations with simulated catalogs based on a branching aftershock-triggering model. We first examine foreshock occurrence patterns for isolated M ≥ 5 earthquakes in southern California from 1981 to 2011 and in northern California from 1984 to 2009. Among the 64 M ≥ 5 mainshocks, excluding 3 swarms and 3 doubles, 53% of the remaining events are preceded by at least one foreshock within 30 days and 5 km. Foreshock occurrence appears correlated with mainshock faulting type and depth. Foreshock area is correlated with the magnitude of the largest foreshock and the number of foreshocks; however, it is not correlated with mainshock magnitude. We then examine the occurrence pattern of all seismicity clusters without a minimum magnitude requirement, and the possibility that they are "foreshocks" of larger mainshocks. Only about 30% of the small clusters lead to a larger cluster. About 66% of the larger clusters have foreshock activity, and the spatial distribution pattern is similar to M ≥ 5 mainshocks, with lower occurrence rates in the Transverse Ranges and central California and higher occurrence rates in the Eastern California Shear Zone and the Bay Area. These results suggest that foreshock occurrence is largely controlled by the regional tectonic stress field and fault zone properties. In special cases, foreshock occurrence may be useful for short-term forecasting; however, foreshock properties are not reliably predictive of the magnitude of the eventual "mainshock". Comparison with simulated catalogs suggests that the "swarmy" features and foreshock occurrence rate in the observed catalogs are not well reproduced from common statistical models of earthquake triggering.
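
    The first counting step described above can be sketched as follows; the catalog arrays here are synthetic placeholders rather than the regional catalogs, and a real analysis would also apply the paper's isolation criterion.

```python
# Sketch of the counting step: the fraction of M >= 5 mainshocks preceded by
# at least one smaller event within 30 days and 5 km (synthetic toy catalog).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.sort(rng.uniform(0, 30 * 365.25, n))       # event times (days)
x = rng.uniform(0, 400, n)                        # km (flat-earth toy geometry)
y = rng.uniform(0, 400, n)
m = 2.0 + rng.exponential(1.0 / np.log(10), n)    # Gutenberg-Richter, b = 1

def has_foreshock(i, window_days=30.0, radius_km=5.0):
    dt = t[i] - t
    dr = np.hypot(x[i] - x, y[i] - y)
    earlier = (dt > 0) & (dt <= window_days) & (dr <= radius_km) & (m < m[i])
    return earlier.any()

mains = np.where(m >= 5.0)[0]
frac = np.mean([has_foreshock(i) for i in mains]) if len(mains) else float("nan")
print(f"{len(mains)} M>=5 events; fraction with foreshocks = {frac:.2f}")
```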

  9. Timing and slip for prehistoric earthquakes on the Superstition Mountain Fault, Imperial Valley, southern California

    NASA Astrophysics Data System (ADS)

    Gurrola, Larry D.; Rockwell, Thomas K.

    1996-03-01

    Trenches excavated across the Superstition Mountain fault in the Imperial Valley, California, have exposed evidence for four prehistoric earthquakes preserved in displaced lacustrine stratigraphy associated with ancient Lake Cahuilla. The presence of shoreline peat accumulations along with abundant detrital charcoal allows for high-precision age determination of some stratigraphic units, thereby providing constraints on the timing of three of the paleoearthquakes. These three events occurred within a 480- to 820-year interval during the past 1200 years. The most recent earthquake (event 1) occurred during a fluvial phase of deposition between A.D. 1440-1637, immediately prior to the inundation of the Cahuilla basin at about A.D. 1480 and 1660. A channel margin was offset 2.2 +0.4/-0.15 m in this rupture, suggesting an earthquake with a magnitude ≥7. The penultimate event (event 2) also occurred during fluvial deposition after A.D. 1280 but before another lakestand at A.D. 1440-1640. Lateral slip could not be resolved for event 2. However, based on juxtaposition of dissimilar units and the amount of deformation produced by this event, it is presumed that this was also a large earthquake. The timing of event 3 is constrained to have occurred between about A.D. 820 and 1280. This event is represented by several fractures and small displacements that rupture up to a distinct stratigraphic level or event horizon. Slip was not resolved for this event. Finally, the timing of event 4 is very poorly constrained to between 4670 B.C. and A.D. 964. Additional events may well have occurred during this long interval. Notably, the past three earthquakes occurred within a period of less than 820 years, and it has been over 350 years since the last earthquake.
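
    The ">= 7" inference from a 2.2 m offset can be roughly checked against an empirical magnitude-displacement regression. The coefficients below are the commonly quoted all-slip-type values for maximum displacement from Wells and Coppersmith (1994) and should be treated as approximate; this check is not part of the original study.

```python
# Rough check of the ">= 7" inference from the 2.2 m offset, using the
# empirical regression M = 6.69 + 0.74*log10(MD) (maximum displacement,
# all slip types; approximate Wells and Coppersmith 1994 coefficients).
import math

def magnitude_from_max_displacement(md_m: float) -> float:
    return 6.69 + 0.74 * math.log10(md_m)

for md in (2.2 - 0.15, 2.2, 2.2 + 0.4):   # offset and its quoted bounds (m)
    print(f"MD = {md:.2f} m  ->  M ~ {magnitude_from_max_displacement(md):.2f}")
```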

  10. Premonitory patterns of seismicity months before a large earthquake: five case histories in Southern California.

    PubMed

    Keilis-Borok, V I; Shebalin, P N; Zaliapin, I V

    2002-12-24

    This article explores the problem of short-term earthquake prediction based on spatio-temporal variations of seismicity. Previous approaches to this problem have used precursory seismicity patterns that precede large earthquakes with "intermediate" lead times of years. Examples include increases of earthquake correlation range and increases of seismic activity. Here, we look for a renormalization of these patterns that would reduce the predictive lead time from years to months. We demonstrate a combination of renormalized patterns that preceded within 1-7 months five large (M ≥ 6.4) strike-slip earthquakes in southeastern California since 1960. An algorithm for short-term prediction is formulated. The algorithm is self-adapting to the level of seismicity: it can be transferred without readaptation from earthquake to earthquake and from area to area. Exhaustive retrospective tests show that the algorithm is stable to variations of its adjustable elements. This finding encourages further tests in other regions. The final test, as always, should be advance prediction. The suggested algorithm has a simple qualitative interpretation in terms of deformations around a soon-to-break fault: the blocks surrounding that fault began to move as a whole. A more general interpretation comes from the phenomenon of self-similarity since our premonitory patterns retain their predictive power after renormalization to smaller spatial and temporal scales. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently. It seems worthwhile to explore similar renormalizations for other premonitory seismicity patterns. PMID:12482945

  11. Broadband records of earthquakes in deep gold mines and a comparison with results from SAFOD, California

    USGS Publications Warehouse

    McGarr, A.; Boettcher, M.; Fletcher, Joe B.; Sell, R.; Johnston, M.J.S.; Durrheim, R.; Spottiswoode, S.; Milev, A.

    2009-01-01

    For one week during September 2007, we deployed a temporary network of field recorders and accelerometers at four sites within two deep, seismically active mines. The ground-motion data, recorded at 200 samples/sec, are well suited to determining source and ground-motion parameters for the mining-induced earthquakes within and adjacent to our network. Four earthquakes with magnitudes close to 2 were recorded with high signal/noise at all four sites. Analysis of seismic moments and peak velocities, in conjunction with the results of laboratory stick-slip friction experiments, was used to estimate source processes that are key to understanding source physics and to assessing underground seismic hazard. The maximum displacements on the rupture surfaces can be estimated from the parameter Rv, where v is the peak ground velocity at a given recording site, and R is the hypocentral distance. For each earthquake, the maximum slip and seismic moment can be combined with results from laboratory friction experiments to estimate the maximum slip rate within the rupture zone. Analysis of the four M 2 earthquakes recorded during our deployment and one of special interest recorded by the in-mine seismic network in 2004 revealed maximum slips ranging from 4 to 27 mm and maximum slip rates from 1.1 to 6.3 m/sec. Applying the same analyses to an M 2.1 earthquake within a cluster of repeating earthquakes near the San Andreas Fault Observatory at Depth site, California, yielded similar results for maximum slip and slip rate, 14 mm and 4.0 m/sec.

  13. Slip rates and interseismic locking depths of southern California faults inferred from viscoelastic earthquake cycle models

    NASA Astrophysics Data System (ADS)

    Chuang, R. Y.; Johnson, K. M.

    2009-12-01

    Estimates of fault slip rates and interseismic fault locking in southern California are important for seismic hazard assessments. However, some model estimates of slip rates from geodetic data are lower than geologic rates on the Garlock Fault and the Mojave and San Bernardino segments of the San Andreas Fault, and locking depth, which varies along strike, is poorly resolved. We designed several classes of lithosphere block models to examine the effect of model assumptions on slip rate and locking depth estimates. We show that a viscoelastic earthquake cycle model constrained by GPS data predicts slip rates that are entirely consistent with geologic slip rate estimates on all major faults in southern California. We found that the discrepancy between the geological and geodetic rates depends on the model used to fit GPS data. Our viscoelastic earthquake cycle model consists of fault-bounded blocks in an elastic crust overlying a viscoelastic lower crust and uppermost mantle. Interseismic locking of faults and associated deformation is modeled with steady back-slip on faults and imposed periodic earthquakes. Based on comparisons of an elastic block model and our viscoelastic cycle model, we conclude that elastic block models tend to underpredict slip rates on the Mojave and Carrizo segments of the San Andreas Fault and the Garlock Fault because these faults are mid to late in the earthquake cycle and current strain rates across these faults are lower than average due to viscous relaxation of the lower crust. We conclude that elastic block models overpredict the composite slip rate across the southern Mojave ECSZ because these faults are in the early phase of the composite earthquake cycle and current deformation rates across this region are higher than average because of accelerated viscous flow in the lower crust. We consider the influence of model assumptions on locking depth estimates by simultaneously estimating the distribution of interseismic fault locking depths and fault slip rates. In one model, we assume deformation is steady with time, and faults are either locked or creeping at constant resistive shear stress. In inversions with this model, we solve for the distribution of locked and creeping patches on faults. In another class of lithosphere block models, we impose periodic earthquakes and consider time-variable viscous flow of the asthenosphere. In this model we assume that faults are locked above some depth during the interseismic period and creep below this depth. Preliminary results show that the steady model favors deep locking depths, while the earthquake cycle model prefers moderate locking depths for the San Andreas fault system.
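
    For a single long strike-slip fault, the elastic end-member of the interseismic models discussed above reduces to the classic screw-dislocation profile of Savage and Burford (1973), v(x) = (s/pi)*arctan(x/D). The sketch below fits slip rate s and locking depth D to synthetic "GPS" velocities; the trade-off between the two parameters it exposes is exactly the resolution issue the abstract describes. All numbers are illustrative.

```python
# Classic elastic interseismic profile for a long strike-slip fault
# (Savage and Burford, 1973): v(x) = (s/pi) * arctan(x/D).
# Slip rate s and locking depth D are recovered from synthetic velocities.
import numpy as np
from scipy.optimize import curve_fit

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

rng = np.random.default_rng(3)
x = np.linspace(-100, 100, 41)                          # station distances (km)
v_obs = interseismic_velocity(x, 34.0, 15.0) + rng.normal(0, 1.0, x.size)

popt, pcov = curve_fit(interseismic_velocity, x, v_obs, p0=(20.0, 10.0))
s_fit, d_fit = popt
s_err, d_err = np.sqrt(np.diag(pcov))
print(f"slip rate = {s_fit:.1f} +/- {s_err:.1f} mm/yr, "
      f"locking depth = {d_fit:.1f} +/- {d_err:.1f} km")
```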

  14. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  15. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    NASA Astrophysics Data System (ADS)

    Yu, E.; Bhaskaran, A.; Chen, S.; Chowdhury, F. R.; Meisenhelter, S.; Hutton, K.; Given, D.; Hauksson, E.; Clayton, R. W.

    2010-12-01

    Currently the SCEDC archives continuous and triggered data from nearly 5000 data channels from 425 SCSN-recorded stations, processing and archiving an average of 12,000 earthquakes each year. The SCEDC provides public access to these earthquake parametric and waveform data through its website www.data.scec.org and through client applications such as STP and DHI. This poster will describe the most significant developments at the SCEDC in the past year. Updated hardware: • The SCEDC has more than doubled its waveform file storage capacity by migrating to 2 TB disks. New data holdings: • Waveform data: Beginning Jan 1, 2010, the SCEDC began continuously archiving all high-sample-rate strong-motion channels. All seismic channels recorded by SCSN are now continuously archived and available at SCEDC. • Portable data from the El Mayor-Cucapah 7.2 sequence: Seismic waveforms from portable stations installed by researchers (contributed by Elizabeth Cochran, Jamie Steidl, and Octavio Lazaro-Mancilla) have been added to the archive and are accessible through STP either as continuous data or associated with events in the SCEDC earthquake catalog. These additional data will help SCSN analysts and researchers improve event locations from the sequence. • Real time GPS solutions from the El Mayor-Cucapah 7.2 event: Three-component 1 Hz seismograms of California Real Time Network (CRTN) GPS stations, from the April 4, 2010, magnitude 7.2 El Mayor-Cucapah earthquake are available in SAC format at the SCEDC. These time series were created by Brendan Crowell, Yehuda Bock, the project PI, and Mindy Squibb at SOPAC using data from the CRTN. The El Mayor-Cucapah earthquake demonstrated definitively the power of real-time high-rate GPS data: they measure dynamic displacements directly, they do not clip, and they are also able to detect the permanent (coseismic) surface deformation. • Triggered data from the Quake Catcher Network (QCN) and Community Seismic Network (CSN): The SCEDC in cooperation with QCN and CSN is exploring ways to archive and distribute data from high-density, low-cost networks. As a starting point, the SCEDC will store a dataset from QCN and CSN and distribute it through a separate STP client. New archival methods: • The SCEDC is exploring the feasibility of archiving and distributing waveform data using cloud computing such as Google Apps. A month of continuous data from the SCEDC archive will be stored in Google Apps and a client developed to access it in a manner similar to STP. XML formats: • The SCEDC is now distributing earthquake parameter data through web services in QuakeML format. • The SCEDC in collaboration with the Northern California Earthquake Data Center (NCEDC) and USGS Golden has reviewed and revised the StationXML format to produce version 2.0. The new version includes rules for extending the schema, use of named complex types, and greater consistency in naming conventions. Based on this work, we plan to develop readers and writers of the StationXML format.

  16. The structure of the crust and distribution of earthquakes in southern California

    NASA Astrophysics Data System (ADS)

    Nazareth, Julie Jeannine

    2002-09-01

    The lithologically and tectonically complex crust of southern California and the current broad deformation zone accommodating the relative motion between the Pacific and North American plates result in significant variations in style, depth distribution, and rate of earthquakes, and thus also in the seismic hazard across southern California. Although the thickness of the seismogenic crust is an important parameter in seismic hazard analysis, it has never been determined systematically for southern California. Seismogenic thickness can be predicted by the depth distribution of the moment release of regional seismicity. The seismogenic thickness of southern California is highly variable, ranging from less than 10 km in the Salton Trough to greater than 25 km at the southwestern edge of the San Joaquin Valley. On average, the seismogenic thickness of southern California is 15.0 km. Seismogenic thickness along the major strike-slip systems of southern California can vary significantly along strike. Fault segmentation based upon surface features does not correspond to the variation in seismogenic thickness and thus the potential down-dip width of the fault. A model of the broad scale features of the crust and upper mantle structure of the borderland-continent transition zone adjacent to Los Angeles constrains the crustal thickness and the location and width of the transition zone. The data require the Moho to deepen significantly to the north, dramatically increasing the crustal thickness over a relatively short distance of 20--25 km. The Moho is coherent and laterally continuous beneath the Inner California Borderland and transition zone. The Inner Borderland seems to be modified and thickened oceanic crust, with the oceanic upper mantle intact beneath it. The static stress change triggering model has some validity and can be useful in explaining apparently triggered seismicity within one fault length of a large mainshock. However, because its applicability varies between different sequences, its general application to seismic hazard evaluation requires more refinement and the inclusion of parameters such as tectonic regime, regional stress state, and fault strength.
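
    One simple way to turn a depth distribution of moment release into a "seismogenic thickness" is to take the depth above which some large percentile of cumulative seismic moment is released. The sketch below uses a synthetic catalog and an assumed 99.9% cutoff; neither is the definition or data used in the study.

```python
# Seismogenic thickness as the depth above which a chosen percentile of the
# cumulative seismic moment is released (synthetic catalog, assumed cutoff).
import numpy as np

rng = np.random.default_rng(4)
depth_km = np.clip(rng.normal(10.0, 4.0, 20000), 0.0, 35.0)      # toy hypocenters
magnitude = 2.0 + rng.exponential(1.0 / np.log(10), 20000)       # b = 1 catalog
moment = 10.0 ** (1.5 * magnitude + 9.1)                         # N*m

order = np.argsort(depth_km)
cum_moment = np.cumsum(moment[order]) / moment.sum()
seismogenic_thickness = depth_km[order][np.searchsorted(cum_moment, 0.999)]
print(f"depth of 99.9% cumulative moment release: {seismogenic_thickness:.1f} km")
```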

  17. Wastewater disposal and earthquake swarm activity at the southern end of the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Goebel, T. H. W.; Hosseini, S. M.; Cappa, F.; Hauksson, E.; Ampuero, J. P.; Aminzadeh, F.; Saleeby, J. B.

    2016-02-01

    Fracture and fault zones can channel fluid flow and transmit injection-induced pore pressure changes over large distances (>km), at which seismicity is rarely suspected to be human induced. We use seismicity analysis and hydrogeological models to examine the role of seismically active faults in inducing earthquakes. We analyze a potentially injection-induced earthquake swarm with three events above M4 near the White Wolf fault (WWF). The swarm deviates from classic mainshock-aftershock behavior, exhibiting an uncharacteristically low Gutenberg-Richter b value of 0.6 and systematic migration patterns. Some smaller events occurred southeast of the WWF in an area of several disposal wells, one of which became active just 5 months before the main swarm activity. Hydrogeological modeling revealed that wastewater disposal likely contributed to seismicity via localized pressure increase along a seismically active fault. Our results suggest that induced seismicity may remain undetected in California without detailed analysis of local geologic setting, seismicity, and fluid diffusion.
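
    The b value quoted above is conventionally estimated with the Aki/Utsu maximum-likelihood formula; the sketch below applies it to a synthetic magnitude list rather than the swarm's catalog, and the binning correction is an assumption about 0.1-unit magnitude bins.

```python
# Maximum-likelihood b-value (Aki 1965 / Utsu 1966, with a binning correction),
# the statistic behind the "uncharacteristically low b value of 0.6" remark.
import numpy as np

def b_value(magnitudes, m_complete, dm=0.1):
    m = np.asarray(magnitudes)
    m = m[m >= m_complete]
    return np.log10(np.e) / (m.mean() - (m_complete - dm / 2.0))

rng = np.random.default_rng(5)
mags = np.round(1.5 + rng.exponential(1.0 / (0.6 * np.log(10)), 400), 1)  # b ~ 0.6
print(f"estimated b = {b_value(mags, m_complete=1.5):.2f}")
```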

  18. Cruise report for A1-00-SC southern California earthquake hazards project, part A

    USGS Publications Warehouse

    Gutmacher, Christina E.; Normark, William R.; Ross, Stephanie L.; Edwards, Brian D.; Sliter, Ray; Hart, Patrick; Cooper, Becky; Childs, Jon; Reid, Jane A.

    2000-01-01

    A three-week cruise to obtain high-resolution boomer and multichannel seismic-reflection profiles supported two project activities of the USGS Coastal and Marine Geology (CMG) Program: (1) evaluating the earthquake and related geologic hazards posed by faults in the near offshore area of southern California and (2) determining the pathways through which sea-water is intruding into aquifers of Los Angeles County in the area of the Long Beach and Los Angeles harbors. The 2000 cruise, A1-00-SC, is the third major data-collection effort in support of the first objective (Normark et al., 1999a, b); one more cruise is planned for 2002. This report deals primarily with the shipboard operations related to the earthquake-hazard activity. The sea-water intrusion survey is confined to shallow water, and the techniques used are somewhat different from those of the hazards survey (see Edwards et al., in preparation).

  19. Stress/strain changes and triggered seismicity following the Mw 7.3 Landers, California earthquake

    NASA Astrophysics Data System (ADS)

    Gomberg, Joan

    1996-01-01

    Calculations of dynamic stresses and strains, constrained by broadband seismograms, are used to investigate their role in generating the remotely triggered seismicity that followed the June 28, 1992 Mw7.3 Landers, California earthquake. I compare straingrams and dynamic Coulomb failure functions calculated for the Landers earthquake at sites that did experience triggered seismicity with those at sites that did not. Bounds on triggering thresholds are obtained from analysis of dynamic strain spectra calculated for the Landers and Mw6.1 Joshua Tree, California earthquakes at various sites, combined with results of static strain investigations by others. I interpret three principal results of this study with those of a companion study by Gomberg and Davis [this issue]. First, the dynamic elastic stress changes themselves cannot explain the spatial distribution of triggered seismicity, particularly the lack of triggered activity along the San Andreas fault system. In addition to the requirement to exceed a Coulomb failure stress level, this result implies the need to invoke and satisfy the requirements of appropriate slip instability theory. Second, results of this study are consistent with the existence of frequency- or rate-dependent stress/strain triggering thresholds, inferred from the companion study and interpreted in terms of earthquake initiation involving a competition of processes, one promoting failure and the other inhibiting it. Such competition is also part of relevant instability theories. Third, the triggering threshold must vary from site to site, suggesting that the potential for triggering strongly depends on site characteristics and response. The lack of triggering along the San Andreas fault system may be correlated with the advanced maturity of its fault gouge zone; the strains from the Landers earthquake were either insufficient to exceed its larger critical slip distance or some other critical failure parameter; or the faults failed stably as aseismic creep events. Variations in the triggering threshold at sites of triggered seismicity may be attributed to variations in gouge zone development and properties. Finally, these interpretations provide ready explanations for the time delays between the Landers earthquake and the triggered events.

  20. Web Services and Data Enhancements at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Lombard, P. N.; Allen, R. M.

    2013-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC offers the following web services that are compliant with the International Federation of Digital Seismograph Networks (FDSN) web services specifications: (1) fdsn-dataselect: time series data delivered in MiniSEED format, (2) fdsn-station: station and channel metadata and time series availability delivered in StationXML format, (3) fdsn-event: earthquake event information delivered in QuakeML format. In addition, the NCEDC offers the following IRIS-compatible web services: (1) sacpz: provide channel gains, poles, and zeros in SAC format, (2) resp: provide channel response information in RESP format, (3) dataless: provide station and channel metadata in Dataless SEED format. The NCEDC is also developing a web service to deliver time series from pre-assembled event waveform gathers. The NCEDC has waveform gathers for ~750,000 northern and central California events from 1984 to the present, many of which were created by the USGS NCSN prior to the establishment of the joint NCSS (Northern California Seismic System). We are currently adding waveforms to these older event gathers with time series from the UCB networks and other networks with waveforms archived at the NCEDC, and ensuring that each channel in the event gathers has the highest-quality waveform available from the archive.
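
    Because the FDSN-style services listed above are plain HTTP endpoints, they can also be queried without a seismology library. The base URL below is an assumption about the deployment rather than a documented path, and the query parameters are examples only; the standard FDSN service routes are station/1/query and event/1/query.

```python
# Hedged sketch of raw FDSN web-service queries; the service root is an
# assumption, and the network/station/event parameters are examples only.
import requests

BASE = "https://service.ncedc.org/fdsnws"   # assumed service root

# fdsn-station: channel-level metadata as StationXML
station_url = f"{BASE}/station/1/query"
station_params = {"network": "BK", "station": "CMB", "level": "channel"}

# fdsn-event: earthquake parameters as QuakeML
event_url = f"{BASE}/event/1/query"
event_params = {"starttime": "2014-08-24", "endtime": "2014-08-25", "minmagnitude": 5}

for url, params in ((station_url, station_params), (event_url, event_params)):
    response = requests.get(url, params=params, timeout=30)
    print(url, response.status_code, len(response.content), "bytes")
```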

  1. Open Access to Decades of NCSN Waveforms at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D.; Klein, F.; Zuzlewski, S.; Jensen, E. G.; Oppenheimer, D.; Gee, L.; Romanowicz, B.

    2003-12-01

    The USGS in Menlo Park has operated the Northern California Seismic Network (NCSN) since 1967 and has generated digital seismograms since 1984. Since its inception, the NCSN has recorded 2900 distinct channels at over 500 distinct sites. Although originally used only for earthquake location and coda magnitude, these seismograms are now of interest to seismologists for studying earth structure, precision relocations through cross correlation timing, and analysis of strong motion records. Until recently, the NCSN waveform data were available only through research accounts and special request methods due to incomplete instrument responses. Over the past 2 years, the USGS has assembled the necessary descriptions for both historic and current NCSN instrumentation. The NCEDC and USGS jointly developed a procedure to assemble the hardware attributes and instrument responses for the NCSN data channels using a combination of a simple spreadsheet that defines the attributes of each data channel, and a limited number of attribute files for classes of sensors and shared digitizers. These files are used by programs developed by the NCEDC to populate the NCEDC hardware tracking database tables and then to generate both the simple response and the full SEED instrument response database tables. As a result, the NCSN waveform data can now be distributed in SEED format with any of the NCEDC standard waveform request methods. The NCEDC provides access to waveform data through Web forms, email requests, and programming interfaces. The SeismiQuery Web interface provides information about data holdings. NetDC allows users to retrieve inventory information, instrument responses, and waveforms in SEED format. STP provides both a Web and programming interface to retrieve data in SEED or other user-friendly formats. Through the California Integrated Seismic Network, we are working with the SCEDC to provide unified access to California earthquake data. The NCEDC is a joint project of the UC Berkeley Seismological Laboratory and the USGS Menlo Park that provides a long-term archive and distribution center for geophysical data for northern California.

  2. GPS Time Series Analysis of Southern California Associated with the 2010 M7.2 El Mayor/Cucapah Earthquake

    NASA Technical Reports Server (NTRS)

    Granat, Robert; Donnellan, Andrea

    2011-01-01

    The magnitude 7.2 El Mayor/Cucapah earthquake that occurred in Mexico on April 4, 2010, was well instrumented with continuous GPS stations in California. Large offsets were observed at the GPS stations as a result of deformation from the earthquake, providing information about the co-seismic fault slip as well as fault slip from large aftershocks. Information can also be obtained from the position time series at each station.
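    One common way to estimate such an offset from a single station's position time series is to remove the secular trend and difference short windows before and after the event. The sketch below uses synthetic daily positions and hypothetical window lengths; it is a generic illustration, not the processing used in the study.

```python
import numpy as np

# Synthetic daily east-component positions (mm) with a coseismic step at t_eq.
rng = np.random.default_rng(0)
t = np.arange(0, 200)                 # days
t_eq = 120                            # event day (hypothetical)
true_offset = 35.0                    # mm
pos = 0.05 * t + true_offset * (t >= t_eq) + rng.normal(0.0, 2.0, t.size)

# Fit the secular rate to the 60 days before the event, remove it, then
# difference 10-day means immediately before and after the event.
pre = t < t_eq
trend = np.polyfit(t[pre][-60:], pos[pre][-60:], 1)
detrended = pos - np.polyval(trend, t)
post_win = (t >= t_eq) & (t < t_eq + 10)
pre_win = (t >= t_eq - 10) & pre
offset = detrended[post_win].mean() - detrended[pre_win].mean()
print(f"estimated coseismic offset: {offset:.1f} mm")
```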

  3. A Strong Stress Shadow Effect from the 1992 M=7.3 Landers, California, Earthquake

    NASA Astrophysics Data System (ADS)

    Toda, S.; Stein, R. S.; Beroza, G. C.

    2010-12-01

    Both dynamic and static stress have the potential to trigger earthquakes within several rupture dimensions of a mainshock. It has, however, proven extraordinarily difficult to disentangle their contribution to aftershocks and subsequent mainshocks. There are nevertheless ways to discriminate between the two: Only dynamic stress can trigger quakes in the far field (tens of rupture dimensions from the source), and only static stress has an identified physical mechanism to produce stress shadows, areas where the stress is calculated to have dropped. The seismicity rate would be expected to drop in the stress shadows, and so they provide a key falsification test of the static stress hypothesis, and thus have been subject to intensive investigation and debate. Here we show that where the April 1992 Mw=6.1 Joshua Tree, California, aftershock zone was subjected to a static stress increase from the June 1992 Mw=7.3 Landers earthquake, the rate of aftershocks jumped, and where the aftershock zone was subjected to a static stress decrease on the likely earthquake nodal planes, seismicity abruptly shut down. This stress shadow interaction was first noted by Zanzerkia (Stanford Univ. Ph.D. thesis, 2003), and was subsequently investigated by Marsan and Nalbant (PAGEOPH, 2005); we have deepened these analyses by carrying out a systematic search of the source, receiver, and focal mechanism diversity, and a time and space study of catalog completeness. The sudden arrest of seismicity in the Joshua Tree aftershock zone upon the Landers earthquake demonstrates that static stress is a requisite—but not necessarily exclusive—element of earthquake triggering.

  4. Repeating Earthquake and Nonvolcanic Tremor Observations of Aseismic Deep Fault Transients in Central California.

    NASA Astrophysics Data System (ADS)

    Nadeau, R. M.; Traer, M.; Guilhem, A.

    2005-12-01

    Seismic indicators of fault zone deformation can complement geodetic measurements by providing information on aseismic transient deformation: 1) from deep within the fault zone, 2) on a regional scale, 3) with intermediate temporal resolution (weeks to months) and 4) that spans over 2 decades (1984 to early 2005), including pre-GPS and InSAR coverage. Along the San Andreas Fault (SAF) in central California, two types of seismic indicators are proving to be particularly useful for providing information on deep fault zone deformation. The first, characteristically repeating microearthquakes, provide long-term coverage (decades) on the evolution of aseismic fault slip rates at seismogenic depths along a large (~175 km) stretch of the SAF between the rupture zones of the ~M8 1906 San Francisco and 1857 Fort Tejon earthquakes. In Cascadia and Japan the second type of seismic indicator, nonvolcanic tremors, have shown a remarkable correlation between their activity rates and GPS and tiltmeter measurements of transient deformation in the deep (sub-seismogenic) fault zone. This correlation suggests that tremor rate changes and deep transient deformation are intimately related and that deformation associated with the tremor activity may be stressing the seismogenic zone in both areas. Along the SAF, nonvolcanic tremors have only recently been discovered (i.e., in the Parkfield-Cholame area), and knowledge of their full spatial extent is still relatively limited. Nonetheless the observed temporal correlation between earthquake and tremor activity in this area is consistent with a model in which sub-seismogenic deformation and seismogenic zone stress changes are closely related. We present observations of deep aseismic transient deformation associated with the 28 September 2004, M6 Parkfield earthquake from both repeating earthquake and nonvolcanic tremor data. Also presented are updated deep fault slip rate estimates from repeating earthquakes in the San Juan Bautista area with an assessment of their significance to previously reported quasi-periodic slip rate pulses and small to moderate magnitude (> M3.5) earthquake occurrence in the area.

  5. Marine geology and earthquake hazards of the San Pedro Shelf region, southern California

    USGS Publications Warehouse

    Fisher, Michael A.; Normark, William R.; Langenheim, V.E.; Calvert, Andrew J.; Sliter, Ray

    2004-01-01

    High-resolution seismic-reflection data have been combined with a variety of other geophysical and geological data to interpret the offshore structure and earthquake hazards of the San Pedro Shelf, near Los Angeles, California. Prominent structures investigated include the Wilmington Graben, the Palos Verdes Fault Zone, various faults below the western part of the shelf and slope, and the deep-water San Pedro Basin. The structure of the Palos Verdes Fault Zone changes markedly southeastward across the San Pedro Shelf and slope. Under the northern part of the shelf, this fault zone includes several strands, but the main strand dips west and is probably an oblique-slip fault. Under the slope, this fault zone consists of several fault strands having normal separation, most of which dip moderately east. To the southeast near Lasuen Knoll, the Palos Verdes Fault Zone locally is a low-angle fault that dips east, but elsewhere near this knoll the fault appears to dip steeply. Fresh sea-floor scarps near Lasuen Knoll indicate recent fault movement. The observed regional structural variation along the Palos Verdes Fault Zone is explained as the result of changes in strike and fault geometry along a master strike-slip fault at depth. The shallow summit and possible wavecut terraces on Lasuen Knoll indicate subaerial exposure during the last sea-level lowstand. Modeling of aeromagnetic data indicates the presence of a large magnetic body under the western part of the San Pedro Shelf and upper slope. This is interpreted to be a thick body of basalt of Miocene(?) age. Reflective sedimentary rocks overlying the basalt are tightly folded, whereas folds in sedimentary rocks east of the basalt have longer wavelengths. This difference might mean that the basalt was more competent during folding than the encasing sedimentary rocks. West of the Palos Verdes Fault Zone, other northwest-striking faults deform the outer shelf and slope. Evidence for recent movement along these faults is equivocal, because age dates on deformed or offset sediment are lacking.

  6. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  7. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  8. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

    A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
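    As an illustration of the hybrid strategy (not the authors' parameterization or misfit), the sketch below wraps Nelder-Mead (downhill simplex) local searches inside a simulated-annealing acceptance loop on a toy three-parameter misfit function; the toy misfit, cooling schedule, and all numerical settings are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def misfit(m):
    """Toy multimodal misfit standing in for a waveform misfit function."""
    return np.sum((m - np.array([0.7, 1.3, 0.4]))**2) + 0.3 * np.sin(5.0 * m).sum()**2

rng = np.random.default_rng(1)
m = rng.uniform(0.0, 2.0, 3)          # starting model (e.g., slip parameters)
best_m, best_f = m.copy(), misfit(m)
T = 1.0                               # annealing "temperature"

for _ in range(200):
    # Local downhill-simplex polish from a randomly perturbed model
    trial = m + rng.normal(0.0, 0.3 * T, m.size)
    res = minimize(misfit, trial, method="Nelder-Mead",
                   options={"maxiter": 50, "xatol": 1e-3, "fatol": 1e-3})
    # Metropolis rule: always accept improvements, sometimes accept uphill moves
    d = res.fun - misfit(m)
    if d < 0 or rng.random() < np.exp(-d / T):
        m = res.x
    if res.fun < best_f:
        best_m, best_f = res.x.copy(), res.fun
    T *= 0.98                         # geometric cooling schedule

print("best model:", np.round(best_m, 3), " misfit:", round(best_f, 4))
```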

  9. Surface Deformation in Paso Robles, California, Associated with the December 22, 2003 San Simeon Earthquake from RADARSAT-1 Interferometry

    NASA Astrophysics Data System (ADS)

    Wicks, C.

    2006-12-01

    On December 22, 2003, a magnitude (Mw) 6.5 earthquake struck the central California coast in a sparsely populated area NE of San Simeon, California (Figure 1). In the city of Paso Robles (population ~28,000), about 39 km ESE of the epicenter, the two deaths caused by the earthquake occurred in the collapse of a building. The city was also the locus of the maximum building damage sustained during the earthquake. To gain insight into the cause of the damage focused on Paso Robles, I use Canadian Space Agency RADARSAT-1 images, made available by NASA through the Alaska Satellite Facility, to study earthquake-related surface deformation in the area. RADARSAT interferograms reveal two areas of apparent subsidence that are related to the earthquake, but separate from the main co-seismic deformation signal. One area in Templeton, California, ~8 km south of Paso Robles, coincides with the highest measurement of peak ground acceleration, where the measured subsidence is most likely the result of an earthquake-induced compaction event. The other area of subsidence is concentrated in the southern half of the city of Paso Robles. The area of subsidence in Paso Robles is bounded on the NE by a steep NW-trending gradient that corresponds with (and parallels) the trend of four new hot springs that formed immediately after the San Simeon earthquake (Wang et al., Geophys. Res. Lett., 2004). The steep deformation gradient also corresponds to the area of maximum damage and the location where the two earthquake-related deaths occurred. The volume of the displaced surface area corresponds well with the amount of fluid produced by a hot spring that has been flowing continuously since it began flowing immediately after the San Simeon earthquake.

  10. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    NASA Astrophysics Data System (ADS)

    Chen, S. E.; Yu, E.; Bhaskaran, A.; Chowdhury, F. R.; Meisenhelter, S.; Hutton, K.; Given, D.; Hauksson, E.; Clayton, R. W.

    2011-12-01

    Currently, the SCEDC archives continuous and triggered data from nearly 8400 data channels from 425 SCSN recorded stations, processing and archiving an average of 6.4 TB of continuous waveforms and 12,000 earthquakes each year. The SCEDC provides public access to these earthquake parametric and waveform data through its website www.data.scec.org and through client applications such as STP and DHI. This poster will describe the most significant developments at the SCEDC during 2011. New website design: * The SCEDC has revamped its website. The changes make it easier for users to search the archive, discover updates and new content. These changes also improve our ability to manage and update the site. New data holdings: * Post processing on the El Mayor-Cucapah 7.2 sequence continues. To date there have been 11847 events reviewed. Updates are available in the earthquake catalog immediately. * A double difference catalog (Hauksson et al., 2011) spanning 1981 to 6/30/11 will be available for download at www.data.scec.org and available via STP. * A focal mechanism catalog determined by Yang et al. (2011) is available for distribution at www.data.scec.org. * Waveforms from Southern California NetQuake stations are now being stored in the SCEDC archive and available via STP as event-associated waveforms. Amplitudes from these stations are also being stored in the archive and used by ShakeMap. * As part of a NASA/AIST project in collaboration with JPL and SIO, the SCEDC will receive real-time 1 sps streams of GPS displacement solutions from the California Real Time Network (http://sopac.ucsd.edu/projects/realtime; Genrich and Bock, 2006, J. Geophys. Res.). These channels will be archived at the SCEDC as miniSEED waveforms, which then can be distributed to the user community via applications such as STP. Improvements in the user tool STP: * STP SAC output now includes picks from the SCSN. New archival methods: * The SCEDC is exploring the feasibility of archiving and distributing waveform data using cloud computing such as Google Apps. A month of continuous data from the SCEDC archive will be stored in Google Apps and a client developed to access it in a manner similar to STP. The data is stored in miniSEED format with gzip compression. Time gaps between time series were padded with null values, which substantially increases search efficiency by making the records uniform in length.

  11. Caltech/USGS Southern California Seismic Network (SCSN): Infrastructure upgrade to support Earthquake Early Warning (EEW)

    NASA Astrophysics Data System (ADS)

    Bhadha, R. J.; Hauksson, E.; Boese, M.; Felizardo, C.; Thomas, V. I.; Yu, E.; Given, D. D.; Heaton, T. H.; Hudnut, K. W.

    2013-12-01

    The SCSN is the modern digital ground motion seismic network in Southern California and performs the following tasks: 1) Operates remote seismic stations and the central data processing systems in Pasadena; 2) Generates and reports real-time products including location, magnitude, ShakeMap, aftershock probabilities and others; 3) Responds to FEMA, CalOES, media, and public inquiries about earthquakes; 4) Manages the production, archival, and distribution of waveforms, phase picks, and other data at the SCEDC; 5) Contributes to development and implementation of the demonstration EEW system called CISN ShakeAlert. Initially, the ShakeAlert project was funded through the US Geological Survey (USGS), and in early 2012 the Gordon and Betty Moore Foundation provided three years of new funding for EEW research and development for the US west coast. Recently, we have also received some Urban Areas Security Initiative (UASI) funding to enhance the EEW capabilities for the local UASI region by making our system overall faster, more reliable and redundant than the existing system. The additional and upgraded stations will be capable of decreasing latency and ensuring data delivery by using more reliable and redundant telemetry pathways. Overall, this will enhance the reliability of the earthquake early warnings by providing denser station coverage and more resilient data centers than before. * Seismic Datalogger upgrade: replaces existing dataloggers with modern equipment capable of sending one-second uncompressed packets and utilizing redundant Ethernet telemetry. * GPS upgrade: replaces the existing GPS receivers and antennas, especially at "zipper array" sites near the major faults, with receivers that perform on-board precise point positioning to calculate position and velocity in real time and stream continuous data for use in EEW calculations. * New co-located seismic/GPS stations: increases station density and reduces early warning delays that are incurred by travel time of the seismic waves to the nearest station and will increase the reliability of the early warning with multiple measurements from more than one reporting station. * New server hardware: will allow for separate software development, testing/integration of algorithms, and production systems capable of testing with current data as well as playback of historical data. Also, the new systems will be used to develop and test new EEW algorithms like slip detection (GPSlip) and Finite-Fault Rupture Detection (FinDer). * Standardization and Security: the new systems will allow us to standardize on hardware installation and configuration procedures. It will also enable us to implement the latest computer and network security measures to secure the data and internal processing from malicious threats. * System architecture: the new hardware will allow us to port existing EEW algorithms from Solaris to Linux. The new equipment will also allow us to experiment with different system architecture configurations like redundant servers with fail-over capabilities for the production EEW system. When installed, the new and upgraded seismic dataloggers and GPS stations, as well as the new server hardware, will greatly improve the EEW capabilities of the SCSN and of the CISN ShakeAlert system in general, providing more resilience, robustness, and redundancy in the system.

  12. Do earthquakes exhibit self-organized criticality?

    PubMed

    Yang, Xiaosong; Du, Shuming; Ma, Jin

    2004-06-01

    If earthquakes are phenomena of self-organized criticality (SOC), statistical characteristics of the earthquake time series should be invariant after the sequence of events in an earthquake catalog is randomly rearranged. In this Letter we argue that earthquakes are unlikely to be phenomena of SOC because our analysis of the Southern California Earthquake Catalog shows that the first-return-time probability P_M(T) is apparently changed after the time series is rearranged. This suggests that the SOC theory should not be used as an argument against earthquake prediction efforts. PMID:15245263
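    The test can be prototyped by comparing the distribution of first-return (inter-event) times above a magnitude threshold for the original catalog with that of a randomly rearranged catalog. The sketch below uses a synthetic catalog and shuffles magnitudes relative to occurrence times as a simple rearrangement; it illustrates the idea rather than the authors' exact statistic.

```python
import numpy as np

def first_return_times(times, mags, M):
    """Inter-event times between successive events with magnitude >= M."""
    t = np.sort(times[mags >= M])
    return np.diff(t)

def density(x, bins):
    h, _ = np.histogram(x, bins=bins, density=True)
    return h

rng = np.random.default_rng(0)
# Synthetic catalog: background plus one cluster, with G-R-like magnitudes
times = np.sort(np.concatenate([rng.uniform(0, 3650, 3000),
                                rng.normal(1800, 5, 2000)]))
mags = 2.0 + rng.exponential(0.45, times.size)

T_obs = first_return_times(times, mags, M=3.0)
T_shuf = first_return_times(times, rng.permutation(mags), M=3.0)  # rearranged catalog

bins = np.logspace(-2, 3, 30)
p_obs, p_shuf = density(T_obs, bins), density(T_shuf, bins)
print("max |P_M(T) difference| over bins:", float(np.max(np.abs(p_obs - p_shuf))))
```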

  13. Basin Waves on a Seafloor Recording of the 1990 Upland, California, Earthquake: Implications for Ground Motions from a Larger Earthquake

    USGS Publications Warehouse

    Boore, D.M.

    1999-01-01

    The velocity and displacement time series from a recording on the seafloor at 74 km from the 1990 Upland earthquake (M = 5.6) are dominated by late-arriving waves with periods of 6 to 7 sec. These waves are probably surface waves traveling across the Los Angeles basin. Response spectra for the recording are in agreement with predictions from empirical regression equations and theoretical models for periods less than about 1 sec but are significantly larger than those predictions for longer periods. The longer-period spectral amplitudes are controlled by the late-arriving waves, which are not included in the theoretical models and are underrepresented in the data used in the empirical analyses. When the motions are scaled to larger magnitude, the results are in general agreement with simulations of wave propagation in the Los Angeles basin by Graves (1998).

  14. Aftershock Probabilities on Southern California Faults from a Million-Year RSQSim Catalog

    NASA Astrophysics Data System (ADS)

    Milner, K.; Jordan, T. H.; Richards-Dinger, K. B.; Dieterich, J. H.

    2012-12-01

    It is well known that the short-term rate of large aftershocks following a large earthquake on the Southern San Andreas Fault (SSAF) is high relative to the long-term rate (Reasenberg and Jones 1989), but the spatial distribution of the hazard increase is more challenging to quantify. We use a one-million-year synthetic earthquake catalog generated by the RSQSim physics-based simulator (Dieterich and Richards-Dinger 2010) to explore the spatiotemporal distribution of large aftershocks in the Southern California fault system. An interesting example is the Mojave section of the SSAF, where, in the first week following large (M7+) events, the average rate of equally large earthquakes increases by two orders of magnitude, and almost 3 orders of magnitude for M7.5+ events. The rate gains conform to magnitude-frequency distributions that are characteristic rather than Gutenberg-Richter: the rates of M7.5+ aftershocks are significantly higher than M7.0+ aftershocks. We quantify the spatial distribution of the hazard increase in the first year following initial events by analyzing participation rate gains on neighboring faults. For initial events on the Mojave section, the rate gains are highest on neighboring sections (suggesting an "unzipping" of the SSAF). In particular, the participation rate of the Coachella section in magnitude 7+ events increases tenfold, from ~10^-2.3 to ~10^-1.3. Similarly, the M7+ participation rate of the Carrizo section increases from ~10^-2.3 to ~10^-1.5, a rate gain of 6, and the Western Garlock's from ~10^-2.9 to ~10^-2.0, a rate gain of 8. The rate gain on the San Jacinto fault is much smaller, about 1.5. Elastic rebound explains the decrease by a factor of 5-10 in the M7+ participation rate on the rupture surface. This methodology is highly dependent on the fault geometries involved, and a variety of patterns are observed for initial events on different fault sections. We illustrate how post-event rate gains can be carried through to hazard calculations, producing short-term hazard and risk estimates.
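    The rate gains quoted above follow directly from the log10 participation rates given in the abstract; a minimal check is:

```python
# Participation rates quoted in the abstract as log10(events/yr):
# (long-term, first year after a Mojave-section M7+ event)
log10_rates = {
    "Coachella section, M7+": (-2.3, -1.3),
    "Carrizo section, M7+": (-2.3, -1.5),
    "Western Garlock, M7+": (-2.9, -2.0),
}

for name, (before, after) in log10_rates.items():
    gain = 10.0**after / 10.0**before
    print(f"{name}: rate gain ~ {gain:.1f}x")
```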

  15. Introducing ShakeMap to potential users in Puerto Rico using scenarios of damaging historical and probable earthquakes

    NASA Astrophysics Data System (ADS)

    Huerfano, V. A.; Cua, G.; von Hillebrandt, C.; Saffar, A.

    2007-12-01

    The island of Puerto Rico has a long history of damaging earthquakes. Major earthquakes from off-shore sources have affected Puerto Rico in 1520, 1615, 1670, 1751, 1787, 1867, and 1918 (Mueller et al., 2003; PRSN Catalogue). Recent trenching has also yielded evidence of possible M7.0 events inland (Prentice, 2000). The high seismic hazard, large population, high tsunami potential and relatively poor construction practice can result in a potentially devastating combination. Efficient emergency response in the event of a large earthquake will be crucial to minimizing the loss of life and disruption of lifeline systems in Puerto Rico. The ShakeMap system (Wald et al., 2004) developed by the USGS to rapidly display and disseminate information about the geographical distribution of ground shaking (and hence potential damage) following a large earthquake has proven to be a vital tool for post-earthquake emergency response efforts, and is being adopted/emulated in various seismically active regions worldwide. Implementing a robust ShakeMap system is among the top priorities of the Puerto Rico Seismic Network. However, the ultimate effectiveness of ShakeMap in post-earthquake response depends not only on its rapid availability, but also on the effective use of the information it provides. We developed ShakeMap scenarios of a suite of damaging historical and probable earthquakes that severely impact San Juan, Ponce, and Mayagüez, the 3 largest cities in Puerto Rico. Earthquake source parameters were obtained from McCann and Mercado (1998) and Huérfano (2004). For historical earthquakes that generated tsunamis, tsunami inundation maps were generated using the TIME method (Shuto, 1991). The ShakeMap ground shaking maps were presented to local and regional governmental and emergency response agencies at the 2007 Annual conference of the Puerto Rico Emergency Management and Disaster Administration in San Juan, PR, and at numerous other emergency management talks and training sessions. Economic losses are estimated using the ShakeMap scenario ground motions (Saffar, 2007). The calibration tasks necessary in generating these scenarios (developing Vs30 maps, attenuation relationships) complement the on-going efforts of the Puerto Rico Seismic Network to generate ShakeMaps in real time.

  16. Spatial-temporal variation of low-frequency earthquake bursts near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Wu, Chunquan; Guyer, Robert; Shelly, David; Trugman, Daniel; Frank, William; Gomberg, Joan; Johnson, Paul

    2015-08-01

    Tectonic tremor (TT) and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments globally in the last decade. The spatial-temporal behaviour of LFEs provides insight into deep fault zone processes. In this study, we examine recurrence times from a 12-yr catalogue of 88 LFE families with ~730,000 LFEs in the vicinity of the Parkfield section of the San Andreas Fault (SAF) in central California. We apply an automatic burst detection algorithm to the LFE recurrence times to identify the clustering behaviour of LFEs (LFE bursts) in each family. We find that the burst behaviours in the northern and southern LFE groups differ. Generally, the northern group has longer burst duration but fewer LFEs per burst, while the southern group has shorter burst duration but more LFEs per burst. The southern group LFE bursts are generally more correlated than the northern group, suggesting more coherent deep fault slip and relatively simpler deep fault structure beneath the locked section of the SAF. We also find that the 2004 Parkfield earthquake clearly increased the number of LFEs per burst and average burst duration for both the northern and the southern groups, with a relatively larger effect on the northern group. This could be due to the weakness of the northern part of the fault, or the northwesterly rupture direction of the Parkfield earthquake.
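    A simple way to flag LFE bursts in a recurrence-time series is to group events whose inter-event times fall below a gap threshold. The sketch below is a simplified stand-in for the automatic burst detection algorithm used in the study; the gap threshold, minimum burst size, and synthetic occurrence times are assumptions.

```python
import numpy as np

def detect_bursts(event_times, max_gap, min_events=3):
    """Return runs of events whose consecutive recurrence times are all <= max_gap
    and that contain at least min_events events (a simplified burst definition)."""
    t = np.sort(np.asarray(event_times, dtype=float))
    gaps = np.diff(t)
    bursts, start = [], 0
    for i, g in enumerate(gaps):
        if g > max_gap:
            if i + 1 - start >= min_events:
                bursts.append(t[start:i + 1])
            start = i + 1
    if t.size - start >= min_events:
        bursts.append(t[start:])
    return bursts

# Synthetic LFE occurrence times (days): scattered events plus two tight clusters
rng = np.random.default_rng(2)
times = np.concatenate([rng.uniform(0, 100, 20),
                        10 + rng.exponential(0.02, 30).cumsum(),
                        60 + rng.exponential(0.05, 15).cumsum()])
for b in detect_bursts(times, max_gap=0.5):
    print(f"burst: {b.size} LFEs, duration {b[-1] - b[0]:.2f} d")
```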

  17. Cross-fault triggering in the November 1987 Superstition Hills earthquake sequence, southern California

    SciTech Connect

    Hudnut, K.W.; Pacheco, J. (Columbia University, New York, NY); Seeber, L.

    1989-02-01

    Two large strike-slip ruptures 11.4 hours apart occurred on intersecting, nearly orthogonal, vertical faults during the November 1987 Superstition Hills earthquake sequence in southern California. This sequence is the latest in a northwestward progression of earthquakes (1979, 1981, and 1987) rupturing a set of parallel left-lateral cross-faults that trend northeast between the Brawley seismic zone and Superstition Hills fault, a northwest-trending main strand of the San Jacinto fault zone. The first large event (Ms = 6.2) in the 1987 sequence ruptured the Elmore Ranch fault, a cross-fault that strikes northeasterly between the Brawley seismic zone and the Superstition Hills main fault. The second event (Ms = 6.6) initiated its rupture at the intersection of the cross-fault and main fault and propagated towards the southeast along the main fault. The following hypotheses are advanced: (1) slip on the cross-fault locally decreased normal stress on the main fault, and triggered the main fault rupture after a delay; and (2) the delay was caused by fluid diffusion. It is inferred that the observed northwestward progression of ruptures on cross-faults may continue. The next cross-fault expected to rupture intersects both the San Andreas fault and the San Jacinto fault zone. The authors hypothesize that rupture of this cross-fault may trigger rupture on either of these main faults by a mechanism similar to that which occurred in the Superstition Hills earthquake sequence.

  18. A physical model for earthquakes. I - Fluctuations and interactions. II - Application to southern California

    NASA Technical Reports Server (NTRS)

    Rundle, John B.

    1988-01-01

    The idea that earthquakes represent a fluctuation about the long-term motion of plates is expressed mathematically through the fluctuation hypothesis, under which all physical quantities that pertain to the occurrence of earthquakes are required to depend on the difference between the present state of slip on the fault and its long-term average. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state, and that the latter stages are stable for long intervals of time. For long enough faults, the dynamics are evidently chaotic. The methods developed are then used to construct a detailed model for earthquake dynamics in southern California. The result is a set of slip-time histories for all the major faults, which are similar to data obtained by geological trenching studies. Although there is an element of periodicity to the events, the patterns shift, change and evolve with time. Time scales for pattern evolution seem to be of the order of a thousand years for average recurrence intervals of about a hundred years.

  19. The 1979 Homestead Valley earthquake sequence, California: control of aftershocks and postseismic deformation.

    USGS Publications Warehouse

    Stein, R.S.; Lisowski, M.

    1983-01-01

    The coseismic slip and geometry of the March 15, 1979, Homestead Valley, California, earthquake sequence are well constrained by precise horizontal and vertical geodetic observations and by data from a dense local seismic network. These observations indicate 0.52 ± 0.10 m of right-lateral slip and 0.17 ± 0.04 m of reverse slip on a buried vertical 6-km-long and 5-km-deep fault and yield a mean static stress drop of 7.2 ± 1.3 MPa. The largest shock had Ms = 5.6. Observations of the ground rupture revealed up to 0.1 m of right-lateral slip on two mapped faults that are subparallel to the modeled seismic slip plane. In the 1.9 years since the earthquakes, geodetic network displacements indicate that an additional 60 ± 10 mm of postseismic creep took place. The rate of postseismic shear strain (0.53 ± 0.13 μrad/yr) measured within a 30 × 30 km network centered on the principal events was anomalously high compared to its preearthquake value and the postseismic rate in the adjacent network. This transient cannot be explained by postseismic slip on the seismic fault but rather indicates that broadside release of strain followed the earthquake sequence. -Authors

  20. Earthquake source mechanisms and transform fault tectonics in the Gulf of California

    NASA Technical Reports Server (NTRS)

    Goff, John A.; Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The source parameters of 19 large earthquakes in the Gulf of California were determined from inversions of long-period P and SH waveforms. The goal was to understand the recent slip history of this dominantly transform boundary between the Pacific and North American plates as well as the effect on earthquake characteristics of the transition from young oceanic to continental lithosphere. For the better recorded transform events, the fault strike is resolved to ±4° at 90 percent confidence. The slip vectors thus provide important constraints on the direction of relative plate motion. Most centroid depths are poorly resolved because of tradeoffs between depth and source time function. On the basis of waveform modeling, historical seismicity, and other factors, it is appropriate to divide the Gulf into three distinct zones. The difference in seismic character among the three zones is likely the result of differing levels of maturity of the processes of rifting, generation of oceanic crust, and formation of stable oceanic transform faults. The mechanism of an earthquake on the Tres Marias Escarpment is characterized by thrust faulting and likely indicates the direction of relative motion between the Rivera and North American plates. This mechanism requires revision in plate velocity models which predict strike slip motion at this location.

  1. In Search for Thermal Precursors to Earthquakes in California Using MODIS Land Surface Temperature Data

    NASA Astrophysics Data System (ADS)

    Adams, D. A.; Eneva, M.

    2007-12-01

    We test claims that earthquakes are preceded by thermal anomalies by analyzing daily nighttime land surface temperatures (LSTs) derived from data collected by the MODIS (Moderate Resolution Imaging Spectroradiometer) instruments mounted on the Terra and Aqua satellites. Terra precedes Aqua by ~3 hours, so the LST difference between the two satellites provides an estimate of nighttime cooling/warming rates. The MODIS LST data used cover the period between 2000 and 2006 and have ~1 km spatial resolution. They cover a 10° x 10° tile including most of California, parts of neighboring Nevada and Arizona, and northern Mexico. Our focus is on quantifying various factors influencing the background variability of LSTs and estimating the uniqueness and statistical significance of any apparent LST anomalies. For this purpose, the LSTs and their Aqua-Terra differences are used to calculate parameters similar to the Robust Estimator of Thermal Infrared Anomalies (RETIRA) index, first described by Tramutoli (1998) and subsequently reported to show anomalously high values preceding a number of earthquakes (e.g., Tramutoli et al., 2005; Corrado et al., 2005; Genzano et al., 2007). We develop the RETIRA concept further in order to account better for meteorological and other effects on the LSTs. In particular, we quantify cloud edge effects and the effects of topography. The RETIRA index is computed for Terra LSTs, Aqua LSTs, and Aqua-Terra LST differences. We examine the relationship between M>4.5 earthquakes and anomalous RETIRA by generating movies of RETIRA time evolution. We also compare and test for statistical significance of the differences among four types of combined time periods - pre-seismic, during seismic clusters, post-seismic and seismically quiet periods. Although some statistically significant differences are established, they are not unique to the pre-seismic periods, and anomalies appear too ubiquitous during all types of periods to be useful for earthquake prediction. We will be further testing smaller areas around "hot spots" (Holliday and Rundle, 2005) and past earthquakes in California.
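    A RETIRA-style index normalizes each pixel's current value (LST or Aqua-Terra LST difference) by the mean and standard deviation of a stack of reference scenes. The sketch below is a simplified version on synthetic data; the operational definition also removes a scene-average term, and the reference-scene selection here is an assumption.

```python
import numpy as np

def retira_like_index(scene, ref_stack):
    """(scene - pixelwise mean of reference scenes) / pixelwise std of reference scenes."""
    mu = np.nanmean(ref_stack, axis=0)
    sigma = np.nanstd(ref_stack, axis=0)
    return (scene - mu) / sigma

rng = np.random.default_rng(3)
ref = 290.0 + rng.normal(0.0, 2.0, (42, 50, 50))   # synthetic reference nights
night = 290.0 + rng.normal(0.0, 2.0, (50, 50))     # night being tested
night[20:25, 20:25] += 6.0                         # implanted warm patch
idx = retira_like_index(night, ref)
print("pixels with index > 3:", int((idx > 3).sum()))
```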

  2. Turbidity anomaly and probability of slope failure following the 2011 Great Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Noguchi, T.; Tanikawa, W.; Hirose, T.; Lin, W.; Kawagucci, S.; Yoshida, Y.; Honda, M. C.; Takai, K.; Kitazato, H.; Okamura, K.

    2011-12-01

    Turbidity anomalies at the seafloor are often observed immediately after earthquakes (Thunnell et al., 1999; Mikada et al., 2006). Such deep-sea turbidity anomalies are thought to be the result of seismically induced landslides on trench slopes. Turbidity distribution was observed using a turbidity meter (Seapoint Sensors Inc.) at the mainshock area of the 2011 off the Pacific coast Tohoku earthquake (Mw 9.0) one month after the event. Turbidity anomalies, in which the turbidity increased with depth, were observed near the seafloor at all four sites. The thickness of the anomalous zones increased with water depth; the thickness at station B, the deepest measurement site, was about 1300 m above the seafloor, and the average particle concentration (equivalent to turbidity) in the zone was 1.5 mg/L. We analyzed the mineral composition and grain size distribution of the suspended particles collected one month after the earthquake and of a shallow sediment core collected before the earthquake at the mainshock area. The grain size of the suspended particles ranged from 1 to 300 μm, and XRD analysis confirmed the presence of chlorite, illite, quartz, and albite in the particles. These characteristics are similar to the subsurface sediment material. Earlier studies (Prior, 1984) have introduced a mathematical model for analysis of submarine slope stability that includes the effects of vertical and horizontal seismic accelerations caused by an earthquake. We analyzed slope instability on the basis of their model using the physical properties (density and shear strength) of the shallow sediment core materials and the acceleration of the 2011 off the Pacific coast Tohoku earthquake. Our results show that a submarine landslide can be induced by a very large ground acceleration, as high as 3 m/s², even if the sediment layer on the sliding surface is not very thick. We interpret the high turbidity observed one month after the Tohoku earthquake as the result of thin submarine landsliding. References: Mikada, H., K. Mitsuzawa, H. Matsumoto, T. Watanabe, S. Morita, R. Otsuka, H. Sugioka, T. Baba, E. Araki, K. Suyehiro (2006), New discoveries in dynamics of an M8 earthquake-phenomena and their implications from the 2003 Tokachi-oki earthquake using a long term monitoring cabled observatory, Tectonophysics, 426, 95-105. Prior, D. B. (1984), Methods of stability analysis, in Slope Instability, edited by Brunsden, D. and D. B. Prior, pp. 419-455, Wiley, New York. Thunnell, R., E. Tappa, R. Varela, M. Llano, Y. Astor, F. Muller-Karger, and R. Bohrer (1999), Increased marine sediment suspension and fluxes following an earthquake, Nature, 398, 233-236.
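    To illustrate how a horizontal seismic acceleration enters a slope-stability estimate, the sketch below evaluates a textbook pseudo-static infinite-slope factor of safety. This is not the Prior (1984) formulation used in the study; the strength, unit weight, depth, and slope values are illustrative assumptions, and pore pressure and vertical shaking are ignored.

```python
import numpy as np

def pseudostatic_fs(c_kpa, phi_deg, gamma_kn_m3, z_m, beta_deg, kh):
    """Pseudo-static factor of safety for an infinite slope.

    c_kpa: cohesion (kPa); phi_deg: friction angle; gamma_kn_m3: unit weight
    (kN/m^3, use a buoyant value for submerged sediment); z_m: vertical depth to
    the slide plane; beta_deg: slope angle; kh: horizontal seismic coefficient
    (peak acceleration / g). Textbook approximation only.
    """
    b, phi = np.radians(beta_deg), np.radians(phi_deg)
    sigma_n = gamma_kn_m3 * z_m * (np.cos(b)**2 - kh * np.sin(b) * np.cos(b))
    tau_driving = gamma_kn_m3 * z_m * (np.sin(b) * np.cos(b) + kh * np.cos(b)**2)
    return (c_kpa + sigma_n * np.tan(phi)) / tau_driving

# Example: thin, weak trench-slope sediment shaken at ~3 m/s^2 (kh ~ 0.3)
print(round(pseudostatic_fs(c_kpa=2.0, phi_deg=20.0, gamma_kn_m3=8.0,
                            z_m=2.0, beta_deg=8.0, kh=0.3), 2))
```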

  3. High precision earthquake locations reveal seismogenic structure beneath Mammoth Mountain, California

    USGS Publications Warehouse

    Prejean, S.; Stork, A.; Ellsworth, W.; Hill, D.; Julian, B.

    2003-01-01

    In 1989, an unusual earthquake swarm occurred beneath Mammoth Mountain that was probably associated with magmatic intrusion. To improve our understanding of this swarm, we relocated Mammoth Mountain earthquakes using a double difference algorithm. Relocated hypocenters reveal that most earthquakes occurred on two structures, a near-vertical plane at 7-9 km depth that has been interpreted as an intruding dike, and a circular ring-like structure at ~5.5 km depth, above the northern end of the inferred dike. Earthquakes on this newly discovered ring structure form a conical section that dips outward away from the aseismic interior. Fault-plane solutions indicate that in 1989 the seismicity ring was slipping as a ring-normal fault as the center of the mountain rose with respect to the surrounding crust. Seismicity migrated around the ring, away from the underlying dike at a rate of ~0.4 km/month, suggesting that fluid movement triggered seismicity on the ring fault. Copyright 2003 by the American Geophysical Union.
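    For background, the double difference method referenced above minimizes residuals of differenced travel times between nearby event pairs; its standard form (Waldhauser and Ellsworth, 2000) is shown below as context rather than quoted from the abstract.

```latex
% Double-difference residual for event pair (i, j) observed at station k, linearized
% in the perturbations \Delta m of the hypocentral parameters of each event:
dr_k^{ij} \;=\; \left(t_k^{i} - t_k^{j}\right)^{\mathrm{obs}}
           \;-\; \left(t_k^{i} - t_k^{j}\right)^{\mathrm{cal}}
           \;\approx\; \frac{\partial t_k^{i}}{\partial \mathbf{m}^{i}}\,\Delta\mathbf{m}^{i}
           \;-\; \frac{\partial t_k^{j}}{\partial \mathbf{m}^{j}}\,\Delta\mathbf{m}^{j}
```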

  4. Holocene paleoseismicity, temporal clustering, and probabilities of future large (M > 7) earthquakes on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    McCalpin, J.P.; Nishenko, S.P.

    1996-01-01

    The chronology of M>7 paleoearthquakes on the central five segments of the Wasatch fault zone (WFZ) is one of the best dated in the world and contains 16 earthquakes in the past 5600 years with an average repeat time of 350 years. Repeat times for individual segments vary by a factor of 2, and range from about 1200 to 2600 years. Four of the central five segments ruptured between ~620±30 and 1230±60 calendar years B.P. The remaining segment (Brigham City segment) has not ruptured in the past 2120±100 years. Comparison of the WFZ space-time diagram of paleoearthquakes with synthetic paleoseismic histories indicates that the observed temporal clusters and gaps have about an equal probability (depending on model assumptions) of reflecting random coincidence as opposed to intersegment contagion. Regional seismicity suggests that for exposure times of 50 and 100 years, the probability for an earthquake of M>7 anywhere within the Wasatch Front region, based on a Poisson model, is 0.16 and 0.30, respectively. A fault-specific WFZ model predicts 50 and 100 year probabilities for a M>7 earthquake on the WFZ itself, based on a Poisson model, as 0.13 and 0.25, respectively. In contrast, segment-specific earthquake probabilities that assume quasi-periodic recurrence behavior on the Weber, Provo, and Nephi segments are less (0.01-0.07 in 100 years) than the regional or fault-specific estimates (0.25-0.30 in 100 years), due to the short elapsed times compared to average recurrence intervals on those segments. The Brigham City and Salt Lake City segments, however, have time-dependent probabilities that approach or exceed the regional and fault-specific probabilities. For the Salt Lake City segment, these elevated probabilities are due to the elapsed time being approximately equal to the average late Holocene recurrence time. For the Brigham City segment, the elapsed time is significantly longer than the segment-specific late Holocene recurrence time.
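    The Poisson-model values quoted above can be reproduced from the stated recurrence statistics; the sketch below uses the fault-specific rate implied by 16 events in ~5600 yr (the regional case works the same way once its annual rate is specified).

```python
import math

def poisson_prob(rate_per_yr, window_yr):
    """Probability of at least one event in the exposure window for a Poisson process."""
    return 1.0 - math.exp(-rate_per_yr * window_yr)

rate_wfz = 16 / 5600.0            # ~1 event per 350 yr on the central WFZ
for window in (50, 100):
    print(f"WFZ, {window}-yr exposure: P = {poisson_prob(rate_wfz, window):.2f}")
# Prints ~0.13 and ~0.25, matching the fault-specific values in the abstract.
```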

  5. Geodetic constraints on frictional properties and earthquake hazard in the Imperial Valley, Southern California

    NASA Astrophysics Data System (ADS)

    Lindsey, Eric O.; Fialko, Yuri

    2016-02-01

    We analyze a suite of geodetic observations across the Imperial Fault in southern California that span all parts of the earthquake cycle. Coseismic and postseismic surface slips due to the 1979 M 6.6 Imperial Valley earthquake were recorded with trilateration and alignment surveys by Harsh (1982) and Crook et al. (1982), and interseismic deformation is measured using a combination of multiple interferometric synthetic aperture radar (InSAR)-viewing geometries and continuous and survey-mode GPS. In particular, we combine more than 100 survey-mode GPS velocities with InSAR data from Envisat descending tracks 84 and 356 and ascending tracks 77 and 306 (149 total acquisitions), processed using a persistent scatterers method. The result is a dense map of interseismic velocities across the Imperial Fault and surrounding areas that allows us to evaluate the rate of interseismic loading and along-strike variations in surface creep. We compare available geodetic data to models of the earthquake cycle with rate- and state-dependent friction and find that a complete record of the earthquake cycle is required to constrain key fault properties including the rate-dependence parameter (a - b) as a function of depth, the extent of shallow creep, and the recurrence interval of large events. We find that the data are inconsistent with a high (>30 mm/yr) slip rate on the Imperial Fault and investigate the possibility that an extension of the San Jacinto-Superstition Hills Fault system through the town of El Centro may accommodate a significant portion of the slip previously attributed to the Imperial Fault. Models including this additional fault are in better agreement with the available observations, suggesting that the long-term slip rate of the Imperial Fault is lower than previously suggested and that there may be a significant unmapped hazard in the western Imperial Valley.
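    For reference, the rate- and state-dependent friction framework invoked above is conventionally written in the Dieterich-Ruina (ageing-law) form below; this is the standard textbook expression rather than a formula taken from the paper, with the steady-state velocity dependence set by the parameter (a - b) that the study constrains with depth.

```latex
% Rate- and state-dependent friction (ageing form of the state evolution law):
\tau \;=\; \sigma \left[ \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c} \right],
\qquad
\frac{d\theta}{dt} \;=\; 1 - \frac{V\theta}{D_c},
\qquad
\frac{d\mu_{ss}}{d\ln V} \;=\; a - b .
```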

  6. Earthquake geology of the northern San Andreas Fault near Point Arena, California

    SciTech Connect

    Prentice, C.S.

    1989-01-01

    Excavations into a Holocene alluvial fan provided exposures of a record of prehistoric earthquakes near Point Arena, California. At least five earthquakes were recognized in the section. All of these occurred since the deposition of a unit that is approximately 2000 years old. Radiocarbon dating allows constraints to be placed on the dates of these earthquakes. A buried Holocene (2356-2709 years old) channel has been offset a maximum of 64 ± 2 meters. This implies a maximum slip rate of 25.5 ± 2.5 mm/yr. These data suggest that the average recurrence interval for great earthquakes on this segment of the San Andreas fault is long - between about 200 and 400 years. Offset marine terrace risers near Point Arena and an offset landslide near Fort Ross provide estimates of the average slip rate since Late Pleistocene time. Near Fort Ross, an offset landslide implies a slip rate of less than 39 mm/yr. Correlation and age estimates of two marine terrace risers across the San Andreas fault near Point Arena suggest slip rates of about 18-19 mm/yr since Late Pleistocene time. Tentative correlation of the Pliocene Ohlson Ranch Formation in northwestern Sonoma County with deposits 50 km to the northwest near Point Arena provides piercing points to use in calculation of a Pliocene slip rate for the northern San Andreas fault. A fission-track age of 3.3 ± 0.8 Ma was determined for zircons separated from a tuff collected from the Ohlson Ranch Formation. The geomorphology of the region, especially of the two major river drainages, supports the proposed 50 km Pliocene offset. This implies a Pliocene slip rate of at least 12-20 mm/yr. These rates for different time periods imply that much of the Pacific-North American plate motion must be accommodated on other structures at this latitude.

  7. Source processes of industrially-induced earthquakes at the Geysers geothermal area, California

    USGS Publications Warehouse

    Ross, A.; Foulger, G.R.; Julian, B.R.

    1999-01-01

    Microearthquake activity at The Geysers geothermal area, California, mirrors the steam production rate, suggesting that the earthquakes are industrially induced. A 15-station network of digital, three-component seismic stations was operated for one month in 1991, and 3,900 earthquakes were recorded. Highly-accurate moment tensors were derived for 30 of the best recorded earthquakes by tracing rays through tomographically derived 3-D VP and VP/VS structures, and inverting P- and S-wave polarities and amplitude ratios. The orientations of the P- and T-axes are very scattered, suggesting that there is no strong, systematic deviatoric stress field in the reservoir, which could explain why the earthquakes are not large. Most of the events had significant non-double-couple (non-DC) components in their source mechanisms with volumetric components up to ~30% of the total moment. Explosive and implosive sources were observed in approximately equal numbers, and must be caused by cavity creation (or expansion) and collapse. It is likely that there is a causal relationship between these processes and fluid reinjection and steam withdrawal. Compensated linear vector dipole (CLVD) components were up to 100% of the deviatoric component. Combinations of opening cracks and shear faults cannot explain all the observations, and rapid fluid flow may also be involved. The pattern of non-DC failure at The Geysers contrasts with that of the Hengill-Grensdalur area in Iceland, a largely unexploited water-dominated field in an extensional stress regime. These differences are poorly understood but may be linked to the contrasting regional stress regimes and the industrial exploitation at The Geysers.

  8. Satellite IR Thermal Measurements Prior to the September 2004 Earthquakes in Central California

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Logan, T.; Taylor, Patrick

    2004-01-01

    We present and discuss observed variations in thermal transients and radiation fields prior to the earthquakes of September 18 near Bodie (M5.5) and September 28,2004 near Parkfield(M6.0) in California. Previous analysis of earthquake events have indicated the presence of a thermal anomaly, where temperatures increased or did not return to its usual nighttime value. The procedures used in our work is to analyze weather satellite data taken at night and to record the general condition where the ground cools after sunset. Two days before the Bodie earthquake lower temperature radiation was observed by the NOAA/AVHRR satellite. This occurred when the entire region was relatively cloud-free. IR land surface nighttime temperature from the MODIS instrument rose to +4 C in a 100 km radius around the Bodie epicenter. The thermal transient field recorded by MODIS in the vicinity of Parkfield, also with a cloud free environment, was around +l C and it is significantly smaller than the Parkfield epicenter, however, for that period showed a steady increase 4 days prior to the earthquake and a significant drop of the night before the quake. Geosynchronous weather satellite thermal IR measurements taken every half hour from sunset to dawn, were also recorded for 10 days prior to the Parkfield event and 5 days after as well as the day of the quake. To establish a baseline we also obtained GOES data for the same Julian sets were then used to systematically observe and record any thermal anomaly prior to the events that deviated from the baseline. Our recent results support the hypothesis of a possible relationship between an thermodynamic processes produced by increasing tectonic stress in the Earth's crust and a subsequent electro-chemical interaction between this crust and the atmosphere/ionosphere.

  9. Source processes of industrially-induced earthquakes at The Geysers geothermal area, California

    SciTech Connect

    Ross, A.; Foulger, G.R.; Julian, B.R.

    1999-12-01

    Microearthquake activity at The Geysers geothermal area, California, mirrors the steam production rate, suggesting that the earthquakes are industrially induced. A 15-station network of digital, three-component seismic stations was operated for one month in 1991, and 3,900 earthquakes were recorded. Highly-accurate moment tensors were derived for 30 of the best recorded earthquakes by tracing rays through tomographically derived 3-D Vp and Vp/Vs structures, and inverting P- and S-wave polarities and amplitude ratios. The orientations of the P- and T-axes are very scattered, suggesting that there is no strong, systematic deviatoric stress field in the reservoir, which could explain why the earthquakes are not large. Most of the events had significant non-double-couple (non-DC) components in their source mechanisms with volumetric components up to ~30% of the total moment. Explosive and implosive sources were observed in approximately equal numbers, and must be caused by cavity creation (or expansion) and collapse. It is likely that there is a causal relationship between these processes and fluid reinjection and steam withdrawal. Compensated linear vector dipole (CLVD) components were up to 100% of the deviatoric component. Combinations of opening cracks and shear faults cannot explain all the observations, and rapid fluid flow may also be involved. The pattern of non-DC failure at The Geysers contrasts with that of the Hengill-Grensdalur area in Iceland, a largely unexploited water-dominated field in an extensional stress regime. These differences are poorly understood but may be linked to the contrasting regional stress regimes and the industrial exploitation at The Geysers.

  10. A new method to identify earthquake swarms applied to seismicity near the San Jacinto Fault, California

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Shearer, Peter M.

    2016-02-01

    Understanding earthquake clustering in space and time is important but also challenging because of complexities in earthquake patterns and the large and diverse nature of earthquake catalogs. Swarms are of particular interest because they likely result from physical changes in the crust, such as slow slip or fluid flow. Both swarms and clusters resulting from aftershock sequences can span a wide range of spatial and temporal scales. Here we test and implement a new method to identify seismicity clusters of varying sizes and discriminate them from randomly occurring background seismicity. Our method searches for the closest neighboring earthquakes in space and time and compares the number of neighbors to the background events in larger space/time windows. Applying our method to California's San Jacinto Fault Zone (SJFZ), we find a total of 89 swarm-like groups. These groups range in size from 0.14 to 7.23 km and last from 15 minutes to 22 days. The most striking spatial pattern is the larger fraction of swarms at the northern and southern ends of the SJFZ than its central segment, which may be related to more normal-faulting events at the two ends. In order to explore possible driving mechanisms, we study the spatial migration of events in swarms containing at least 20 events by fitting with both linear and diffusion migration models. Our results suggest that SJFZ swarms are better explained by fluid flow because their estimated linear migration velocities are far smaller than those of typical creep events while large values of best-fitting hydraulic diffusivity are found.
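    The core of the method, comparing each event's count of close space-time neighbors against the count expected from the surrounding background, can be prototyped in a few lines. The window sizes, the uniform-background rescaling, and the synthetic catalog below are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def cluster_scores(times, x_km, y_km, r_km=2.0, t_day=3.0, R_km=5.0, T_day=30.0):
    """Ratio of each event's near-neighbor count to the count expected if events
    in a larger space-time window were spread uniformly (simplified sketch)."""
    scores = np.zeros(times.size)
    for i in range(times.size):
        dt = np.abs(times - times[i])
        dr = np.hypot(x_km - x_km[i], y_km - y_km[i])
        near = np.sum((dr <= r_km) & (dt <= t_day)) - 1
        far = np.sum((dr <= R_km) & (dt <= T_day)) - 1
        expected = max(far, 1) * (r_km / R_km) ** 2 * (t_day / T_day)
        scores[i] = near / expected
    return scores

rng = np.random.default_rng(4)
# Synthetic catalog: uniform background plus one compact swarm
t = np.concatenate([rng.uniform(0, 365, 300), 200 + rng.uniform(0, 2, 40)])
x = np.concatenate([rng.uniform(0, 40, 300), 10 + rng.normal(0, 0.3, 40)])
y = np.concatenate([rng.uniform(0, 40, 300), 25 + rng.normal(0, 0.3, 40)])
s = cluster_scores(t, x, y)
print("events flagged as clustered (score > 5):", int((s > 5).sum()))
```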

  11. Crustal velocities near Coalinga, California, modeled from a combined earthquake/explosion refraction profile

    USGS Publications Warehouse

    Macgregor-Scott, N.; Walter, A.

    1988-01-01

    Crustal velocity structure for the region near Coalinga, California, has been derived from both earthquake and explosion seismic phase data recorded along a NW-SE seismic-refraction profile on the western flank of the Great Valley east of the Diablo Range. Comparison of the two data sets reveals P-wave phases in common which can be correlated with changes in the velocity structure below the earthquake hypocenters. In addition, the earthquake records reveal secondary phases at station ranges of less than 20 km that could be the result of S- to P-wave conversions at velocity interfaces above the earthquake hypocenters. Two-dimensional ray-trace modeling of the P-wave travel times resulted in a P-wave velocity model for the western flank of the Great Valley comprised of: 1) a 7- to 9-km thick section of sedimentary strata with velocities similar to those found elsewhere in the Great Valley (1.6 to 5.2 km/s); 2) a middle crust extending to about 14 km depth with velocities comparable to those reported for the Franciscan assemblage in the Diablo Range (5.6 to 5.9 km/s); and 3) a 13- to 14-km thick lower crust with velocities similar to those reported beneath the Diablo Range and the Great Valley (6.5 to 7.30 km/s). This lower crust may have been derived from subducted oceanic crust that was thickened by accretionary underplating or crustal shortening. -Authors

  12. A new method to identify earthquake swarms applied to seismicity near the San Jacinto Fault, California

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Shearer, Peter M.

    2016-05-01

    Understanding earthquake clustering in space and time is important but also challenging because of complexities in earthquake patterns and the large and diverse nature of earthquake catalogues. Swarms are of particular interest because they likely result from physical changes in the crust, such as slow slip or fluid flow. Both swarms and clusters resulting from aftershock sequences can span a wide range of spatial and temporal scales. Here we test and implement a new method to identify seismicity clusters of varying sizes and discriminate them from randomly occurring background seismicity. Our method searches for the closest neighbouring earthquakes in space and time and compares the number of neighbours to the background events in larger space/time windows. Applying our method to California's San Jacinto Fault Zone (SJFZ), we find a total of 89 swarm-like groups. These groups range in size from 0.14 to 7.23 km and last from 15 min to 22 d. The most striking spatial pattern is the larger fraction of swarms at the northern and southern ends of the SJFZ than its central segment, which may be related to more normal-faulting events at the two ends. In order to explore possible driving mechanisms, we study the spatial migration of events in swarms containing at least 20 events by fitting with both linear and diffusion migration models. Our results suggest that SJFZ swarms are better explained by fluid flow because their estimated linear migration velocities are far smaller than those of typical creep events while large values of best-fitting hydraulic diffusivity are found.
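
    The linear versus diffusion migration comparison described in this study can be reduced to two one-parameter least-squares fits: a constant migration velocity r = v t and a diffusive front r = sqrt(4 pi D t). The sketch below is a minimal illustration under those assumed functional forms, not the authors' fitting code.

```python
import numpy as np

def fit_swarm_migration(t_days, r_km):
    """Fit swarm event distances r (km) versus elapsed time t (days) since the
    first event with a linear front r = v*t and a diffusive front
    r = sqrt(4*pi*D*t). Returns (v, D) from least squares plus the RMS misfit
    of each model. Units: v in km/day, D in km^2/day
    (1 km^2/day = 1e6 m^2 / 86400 s, about 11.6 m^2/s)."""
    t = np.asarray(t_days, float)
    r = np.asarray(r_km, float)
    # linear model: minimize ||r - v t||  ->  v = sum(r t) / sum(t^2)
    v = np.dot(r, t) / np.dot(t, t)
    rms_lin = np.sqrt(np.mean((r - v * t) ** 2))
    # diffusion model: r^2 = 4*pi*D*t  ->  D = sum(r^2 t) / (4*pi*sum(t^2))
    D = np.dot(r ** 2, t) / (4.0 * np.pi * np.dot(t, t))
    rms_dif = np.sqrt(np.mean((r - np.sqrt(4.0 * np.pi * D * t)) ** 2))
    return v, D, rms_lin, rms_dif
```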

  13. Rates and patterns of surface deformation from laser scanning following the South Napa earthquake, California

    USGS Publications Warehouse

    DeLong, Stephen B.; Lienkaemper, James J.; Pickering, Alexandra J; Avdievitch, Nikita N.

    2015-01-01

    The A.D. 2014 M6.0 South Napa earthquake, despite its moderate magnitude, caused significant damage to the Napa Valley in northern California (USA). Surface rupture occurred along several mapped and unmapped faults. Field observations following the earthquake indicated that the magnitude of postseismic surface slip was likely to approach or exceed the maximum coseismic surface slip and as such presented ongoing hazard to infrastructure. Using a laser scanner, we monitored postseismic deformation in three dimensions through time along 0.5 km of the main surface rupture. A key component of this study is the demonstration of proper alignment of repeat surveys using point cloud–based methods that minimize error imposed by both local survey errors and global navigation satellite system georeferencing errors. Using solid modeling of natural and cultural features, we quantify dextral postseismic displacement at several hundred points near the main fault trace. We also quantify total dextral displacement of initially straight cultural features. Total dextral displacement from both coseismic displacement and the first 2.5 d of postseismic displacement ranges from 0.22 to 0.29 m. This range increased to 0.33–0.42 m at 59 d post-earthquake. Furthermore, we estimate up to 0.15 m of vertical deformation during the first 2.5 d post-earthquake, which then increased by ∼0.02 m at 59 d post-earthquake. This vertical deformation is not expressed as a distinct step or scarp at the fault trace but rather as a broad up-to-the-west zone of increasing elevation change spanning the fault trace over several tens of meters, challenging common notions about fault scarp development in strike-slip systems. Integrating these analyses provides three-dimensional mapping of surface deformation and identifies spatial variability in slip along the main fault trace that we attribute to distributed slip via subtle block rotation. These results indicate the benefits of laser scanner surveys along active faults and demonstrate that fine-scale variability in fault slip has been missed by traditional earthquake response methods.
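
    Proper alignment of repeat surveys is the step the authors emphasize. As a hedged sketch of one ingredient of point cloud-based alignment (not their workflow, which also handles correspondence search and outlier rejection), the code below computes the best-fit rigid rotation and translation between matched points in stable, off-fault areas using the SVD-based Kabsch solution.

```python
import numpy as np

def rigid_align(source, target):
    """Best-fit rotation R and translation t mapping `source` onto `target`
    (N x 3 arrays of matched points in stable, off-fault areas), via the
    SVD-based Kabsch solution. Minimal sketch; real survey alignment also
    handles outliers and correspondence search (e.g. ICP)."""
    src = np.asarray(source, float)
    tgt = np.asarray(target, float)
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    H = src_c.T @ tgt_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# applying the alignment to a full scan before differencing surveys:
# aligned_scan = scan @ R.T + t
```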

  14. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on the Modified Mercalli Intensity Scale), and discusses earthquake prediction and control. (JR)

  15. Earthquakes

    MedlinePlus

    ... National Science Foundation National Institute of Standards and Technology Publications If you require more information about any of these topics, the following resources may be helpful. America’s PrepareAthon! How to Prepare for Earthquakes Earthquake Preparedness: ...

  16. Preliminary Analysis of Remote Triggered Seismicity in Northern Baja California Generated by the 2011, Tohoku-Oki, Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Wong-Ortega, V.; Castro, R. R.; Gonzalez-Huizar, H.; Velasco, A. A.

    2013-05-01

    We analyze possible variations of seismicity in northern Baja California due to the passage of seismic waves from the 2011, M9.0, Tohoku-Oki, Japan earthquake. The northwestern area of Baja California is characterized by a mountain range composed of crystalline rocks. These Peninsular Ranges of Baja California exhibit high microseismic activity and moderate-size earthquakes. In the eastern region of Baja California, shearing between the Pacific and the North American plates takes place, and the Imperial and Cerro Prieto faults generate most of the seismicity. The seismicity in these regions is monitored by the seismic network RESNOM operated by the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE). This network consists of 13 three-component seismic stations. We use the seismic catalog of RESNOM to search for changes in local seismic rates that occurred after the passing of surface waves generated by the Tohoku-Oki, Japan earthquake. When we compare one month of seismicity before and after the M9.0 earthquake, the preliminary analysis shows an absence of triggered seismicity in the northern Peninsular Ranges and an increase of seismicity south of the Mexicali valley, where the Imperial fault jumps southwest and the Cerro Prieto fault continues.
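
    A before/after rate comparison like this is often summarized with a simple statistic. As an illustration only (not necessarily the test used here), the sketch below computes one common form of the beta statistic for equal-length windows around the mainshock time, under a Poisson assumption for the background rate.

```python
import numpy as np

def beta_statistic(t_events, t_mainshock, window_days=30.0):
    """Beta statistic for a seismicity-rate change across a mainshock time.

    t_events: catalog event times (days); t_mainshock: arrival time of the
    remote mainshock. Counts events in equal-length windows before and after,
    takes the pre-event count as the Poisson expectation, and returns
    beta = (N_after - E) / sqrt(E). |beta| greater than about 2 is often read
    as a significant rate change; this is one common form of the test, used
    here only as an illustration."""
    t = np.asarray(t_events, float)
    n_before = np.sum((t >= t_mainshock - window_days) & (t < t_mainshock))
    n_after = np.sum((t >= t_mainshock) & (t < t_mainshock + window_days))
    expected = max(n_before, 1)   # equal windows -> expectation = n_before
    return (n_after - expected) / np.sqrt(expected)
```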

  17. Westward-derived conglomerates in Moenkopi formation of Southeastern California, and their probable tectonic significance

    SciTech Connect

    Walker, J.D.; Burchfiel, B.C.; Royden, L.H.

    1983-02-01

    The upper part of the Moenkopi Formation in the northern Clark Mountains, southeastern California, contains conglomerate beds whose clasts comprise igneous, metamorphic, and sedimentary rocks. Metamorphic clasts include foliated granite, meta-arkose, and quartzite, probably derived from older Precambrian basement and younger Precambrian clastic rocks. Volcanic clasts are altered plagioclase-bearing rocks, and sedimentary clasts were derived from Paleozoic miogeoclinal rocks. Paleocurrent data indicate that the clasts had a source to the southwest. An age of late Early or early Middle Triassic has been tentatively assigned to these conglomerates. These conglomerates indicate that Late Permian to Early Triassic deformational events in this part of the orogen affected rocks much farther east than has been previously recognized.

  18. Tests of RTG (Real Time GIPSY) for Earthquake Early Warning and Response Applications in Southern California

    NASA Astrophysics Data System (ADS)

    King, N.; Hudnut, K.; Stark, K.; Aspiotes, A.

    2008-12-01

    Recent developments in high-rate real-time GPS technology and processing promise to improve the application of GPS to earthquake early warning and response. Point positioning processing algorithms, which do not require a reference station, are particularly attractive for these applications since any reference station will itself be displaced during a large earthquake. USGS Pasadena is testing one such software package, Real Time GIPSY (RTG), developed and supported by the Jet Propulsion Laboratory (JPL). JPL uses RTG for precise real-time satellite orbit and clock determination, formats the results as corrections to the GPS broadcast orbit, and provides a real-time stream over the Internet. In our tests we use a locally installed copy of RTG to compute real-time positions of GPS stations at a sampling rate of 1 second. In clean sections, the position time series are good, with rms scatter of 2 to 4 cm in the north and east components, and 5 to 10 cm in the vertical. Current work is designed to understand and handle occasional convergence delays and large outliers; many outliers repeat every sidereal day and may be correlated with multipath or with the rising or setting of individual satellites. The test site is in a less-than-ideal setting, and we are experimenting with the software setup and with different sites with fewer sources of multipath and better sky view. USGS Pasadena currently operates about 90 permanent continuously-operating GPS stations, about 20 of which are real-time. With funding from the USGS MultiHazards Demonstration Project, USGS Pasadena is cooperating with the California Integrated Seismic Network to co-locate approximately eight real-time GPS receivers at new seismic stations along the southern San Andreas fault. The Plate Boundary Observatory (PBO) is also converting many of its southern California stations to real-time operation. These real-time data and software such as RTG promise to improve USGS Pasadena's geodetic response to large southern California earthquakes.

  19. G-larmS: An Infrastructure for Geodetic Earthquake Early Warning, applied to Northern California

    NASA Astrophysics Data System (ADS)

    Johanson, I. A.; Grapenthin, R.; Allen, R. M.

    2014-12-01

    Integrating geodetic data into seismic earthquake early warning (EEW) is critical for accurately resolving magnitude and finite fault dimensions in the very largest earthquakes (M>7). We have developed G-larmS, the Geodetic alarm System, as part of our efforts to incorporate geodetic data into EEW for Northern California. G-larmS is an extensible geodetic EEW infrastructure that analyzes positioning time series from real-time GPS processors, such as TrackRT or RTNET. It is currently running in an operational mode at the Berkeley Seismological Laboratory (BSL), where we use TrackRT to produce high-sample-rate displacement time series for 62 GPS stations in the greater San Francisco Bay Area with 3-4 second latency. We employ a fully triangulated network scheme, which provides resiliency against an outage or telemetry loss at any individual station, for a total of 165 basestation-rover pairs. G-larmS is tightly integrated into seismic alarm systems (CISN ShakeAlert, ElarmS), as it uses their P-wave detection alarms to trigger its own processing and sends warning messages back to the ShakeAlert decision module. Once triggered, G-larmS estimates the static offset at each station pair and inputs these into an inversion for fault slip, which is updated once per second. The software architecture and clear interface definitions of this Python implementation enable straightforward extensibility and exchange of specific algorithms that operate in the individual modules. For example, multiple modeling instances can be called in parallel, each applying a different strategy to infer fault and magnitude information (e.g., pre-defined fault planes, full grid search, least squares inversion, etc.). This design enables, for example, quick tests, expansion, and algorithm comparisons. Here, we present the setup and report results of the first months of operation in Northern California. This includes analysis of system latencies, noise, and G-larmS' response to actual events. We also test how differential positions over relatively short baselines (like those produced at the BSL) compare to absolute positions in the case of a very large earthquake. We perform this analysis using data from the 2011 Mw 9.0 Tohoku earthquake, add randomly selected real-time noise, and invert for slip along the subduction zone interface.
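
    The offset-then-invert step described above can be illustrated with a minimal sketch, assuming a set of pre-computed elastic Green's functions for candidate fault patches (which a real system would obtain from its fault models, e.g. via an Okada dislocation code); the window lengths, damping, and function names are placeholder choices, not G-larmS internals.

```python
import numpy as np

def static_offset(series, t, t_trigger, pre=60.0, post=60.0):
    """Static offset for one displacement component at one station: mean
    position in a window after the P-wave trigger minus the mean in a window
    before it. `series` and `t` (seconds) must be matching 1-D arrays."""
    series = np.asarray(series, float)
    t = np.asarray(t, float)
    before = series[(t >= t_trigger - pre) & (t < t_trigger)]
    after = series[(t > t_trigger) & (t <= t_trigger + post)]
    return after.mean() - before.mean()

def invert_slip(G, d, damping=1.0):
    """Damped least squares for slip: minimize ||G s - d||^2 + damping^2 ||s||^2.
    G (n_obs x n_patches) holds Green's functions relating unit slip on each
    patch to the observed offsets d; building G is outside this sketch."""
    n = G.shape[1]
    A = np.vstack([G, damping * np.eye(n)])
    b = np.concatenate([d, np.zeros(n)])
    slip, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slip

# a geodetic magnitude could then follow from the standard relation
# M0 = mu * sum(slip_i * area_i)  and  Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m
```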

  20. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    USGS Publications Warehouse

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J., II; Zeng, Yuehua; Working Group on CA Earthquake Probabilities

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of M6.5–7 earthquake rates and also includes types of multifault ruptures seen in nature. Although UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site-specific investigation. Supporting products may be of general interest, and we list key assumptions and avenues for future model improvements.
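
    The "grand inversion" solves a large, underdetermined, non-negative system by simulated annealing. The toy below is not UCERF3 code; it only illustrates, for a made-up data matrix G and data vector d, why annealing returns one of many acceptable rate models and why repeated runs with different seeds sample a range of solutions.

```python
import numpy as np

def anneal_rates(G, d, n_iter=20000, t0=1.0, seed=0):
    """Toy simulated annealing for a non-negative, underdetermined rate
    inversion: find rupture rates r >= 0 that roughly satisfy G r = d
    (slip-rate and event-rate constraints in the real problem).
    Returns one sampled solution."""
    rng = np.random.default_rng(seed)
    n = G.shape[1]
    r = np.zeros(n)
    energy = lambda m: np.sum((G @ m - d) ** 2)   # data misfit only
    e = energy(r)
    for k in range(n_iter):
        temp = t0 * (1.0 - k / n_iter) + 1e-6     # linear cooling schedule
        trial = r.copy()
        j = rng.integers(n)                       # perturb one rate at a time
        trial[j] = max(0.0, trial[j] + rng.normal(scale=0.1))
        e_trial = energy(trial)
        # accept improvements always, worse states with Boltzmann probability
        if e_trial < e or rng.random() < np.exp((e - e_trial) / temp):
            r, e = trial, e_trial
    return r
```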

  1. A Public Health Issue Related To Collateral Seismic Hazards: The Valley Fever Outbreak Triggered By The 1994 Northridge, California Earthquake

    NASA Astrophysics Data System (ADS)

    Jibson, Randall W.

    Following the 17 January 1994 Northridge, California earthquake (M = 6.7), Ventura County, California, experienced a major outbreak of coccidioidomycosis (CM), commonly known as valley fever, a respiratory disease contracted by inhaling airborne fungal spores. In the 8 weeks following the earthquake (24 January through 15 March), 203 outbreak-associated cases were reported, which is about an order of magnitude more than the expected number of cases, and three of these cases were fatal. Simi Valley, in easternmost Ventura County, had the highest attack rate in the county, and the attack rate decreased westward across the county. The temporal and spatial distribution of CM cases indicates that the outbreak resulted from inhalation of spore-contaminated dust generated by earthquake-triggered landslides. Canyons North East of Simi Valley produced many highly disrupted, dust-generating landslides during the earthquake and its aftershocks. Winds after the earthquake were from the North East, which transported dust into Simi Valley and beyond to communities to the West. The three fatalities from the CM epidemic accounted for 4 percent of the total earthquake-related fatalities.

  2. A public health issue related to collateral seismic hazards: The valley fever outbreak triggered by the 1994 Northridge, California earthquake

    USGS Publications Warehouse

    Jibson, R.W.

    2002-01-01

    Following the 17 January 1994 Northridge, California earthquake (M = 6.7), Ventura County, California, experienced a major outbreak of coccidioidomycosis (CM), commonly known as valley fever, a respiratory disease contracted by inhaling airborne fungal spores. In the 8 weeks following the earthquake (24 January through 15 March), 203 outbreak-associated cases were reported, which is about an order of magnitude more than the expected number of cases, and three of these cases were fatal. Simi Valley, in easternmost Ventura County, had the highest attack rate in the county, and the attack rate decreased westward across the county. The temporal and spatial distribution of CM cases indicates that the outbreak resulted from inhalation of spore-contaminated dust generated by earthquake-triggered landslides. Canyons North East of Simi Valley produced many highly disrupted, dust-generating landslides during the earthquake and its aftershocks. Winds after the earthquake were from the North East, which transported dust into Simi Valley and beyond to communities to the West. The three fatalities from the CM epidemic accounted for 4 percent of the total earthquake-related fatalities.

  3. Data Sets and Data Services at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2014-12-01

    The Northern California Earthquake Data Center (NCEDC) houses a unique and comprehensive data archive and provides real-time services for a variety of seismological and geophysical data sets that encompass northern and central California. We have over 80 terabytes of continuous and event-based time series data from broadband, short-period, strong motion, and strain sensors as well as continuous and campaign GPS data at both standard and high sample rates in both raw and RINEX format. The Northern California Seismic System (NCSS), operated by UC Berkeley and USGS Menlo Park, has recorded over 890,000 events from 1984 to the present, and the NCEDC provides catalog, parametric information, moment tensors and first motion mechanisms, and time series data for these events. We also host and provide event catalogs, parametric information, and event waveforms for DOE enhanced geothermal system monitoring in northern California and Nevada. The NCEDC provides a variety of ways for users to access these data. The most recent development is web services, which provide interactive, command-line, or program-based workflow access to data. Web services use well-established server and client protocols and RESTful software architecture that allow users to easily submit queries and receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC supports all FDSN-defined web services as well as a number of IRIS-defined and NCEDC-defined services. We also continue to support older email-based and browser-based access to data. NCEDC data and web services can be found at http://www.ncedc.org and http://service.ncedc.org.
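
    Since the abstract states that all FDSN-defined web services are supported, a standard fdsnws-event query should work against the service host named above. The snippet below is a hedged example that uses only parameters documented in the FDSN web-service specification; the exact path and its availability are assumptions to verify against the NCEDC documentation.

```python
import urllib.parse
import urllib.request

# Standard FDSN event-service query (fdsnws-event), assuming the NCEDC exposes
# it under the usual path; parameter names follow the FDSN specification.
params = {
    "starttime": "2014-08-24T00:00:00",
    "endtime": "2014-08-25T00:00:00",
    "minmagnitude": "3.0",
    "format": "text",
}
url = "http://service.ncedc.org/fdsnws/event/1/query?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:   # network access required
    print(resp.read().decode())
```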

  4. Strong motion and broadband teleseismic analysis of the 1991 Sierra Madre, California, earthquake

    NASA Astrophysics Data System (ADS)

    Wald, David J.

    1992-07-01

    The source parameters of the 1991 Sierra Madre, California, earthquake are studied by means of records of three-component strong motions and teleseismic waveforms. The rupture process is constrained by treating the data with a finite fault-source inversion using a fault-plane orientation based on the aftershock distribution, first motion mechanism, and the teleseismic body-wave point-source inversion. The teleseismic and strong-motion datasets are found to be consistent with a compact rupture area up-dip from the hypocenter. The seismic moment, potency, and resulting stress drop are determined, and the ground motions predicted by the rupture model are shown to be consistent with the damage patterns and peak ground accelerations. The paper shows that damaging ground motion radiation can be attributed to the compact fault region and that thrust faults of limited dimensions can produce potentially hazardous ground motions.

  5. Sonographs of submarine sediment failure caused by the 1980 earthquake off northern California

    USGS Publications Warehouse

    Field, M.E.; Hall, R.K.

    1982-01-01

    In 1980, a large earthquake caused extensive sediment failure on the shallow continental shelf off the Klamath River in northern California. Side-scan sonography was used to complement detailed geophysical profiling in identifying specific features and resolving modes of failure. The features include a nearly flat failure terrace mantled with sand boils, collapse craters and sediment flows, and bounded on the seaward side by a meandering continuous toe ridge. Seaward of the terrace lies a compression zone delineated by small pressure ridges. Our findings indicate a temporal progression of failure from liquefaction of shallow subsurface sand to lateral spread of intact blocks to sediment collapse and flow. © 1982 A. M. Dowden, Inc.

  6. Data and Visualizations in the Southern California Earthquake Center's Fault Information System

    NASA Astrophysics Data System (ADS)

    Perry, S.

    2003-12-01

    The Southern California Earthquake Center's Fault Information System (FIS) provides a single point of access to fault-related data and models from multiple databases and datasets. The FIS is built of computer code, metadata and Web interfaces based on Web services technology, which enables queries and data interchange irrespective of computer software or platform. Currently we have working prototypes of programmatic and browser-based access. The first generation FIS may be searched and downloaded live, by automated processes, as well as interactively, by humans using a browser. Users get ascii data in plain text or encoded in XML. Via the Earthquake Information Technology (EIT) Interns (Juve and others, this meeting), we are also testing the effectiveness of querying multiple databases using a fault database ontology. For more than a decade, the California Geological Survey (CGS), SCEC, and the U. S. Geological Survey (USGS) have put considerable, shared resources into compiling and assessing published fault data, then providing the data on the Web. Several databases now exist, with different formats, datasets, purposes, and users, in various stages of completion. When fault databases were first envisioned, the full power of today's internet was not yet recognized, and the databases became the Web equivalents of review papers, where one could read an overview summary of a fault, then copy and paste pertinent data. Today, numerous researchers also require rapid queries and downloads of data. Consequently, the first components of the FIS are MySQL databases that deliver numeric values from earlier, text-based databases. Another essential service provided by the FIS is visualizations of fault representations such as those in SCEC's Community Fault Model. The long-term goal is to provide a standardized, open-source, platform-independent visualization technique. Currently, the FIS makes available fault model viewing software for users with access to Matlab or Java3D. The latter is the interactive LA3D software of the SCEC EIT intern team, which will be demonstrated at this session.

  7. Frequency-magnitude statistics and spatial correlation dimensions of earthquakes at Long Valley caldera, California

    NASA Astrophysics Data System (ADS)

    Barton, D. J.; Foulger, G. R.; Henderson, J. R.; Julian, B. R.

    1999-08-01

    Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.

  8. Frequency-magnitude statistics and spatial correlation dimensions of earthquakes at Long Valley caldera, California

    USGS Publications Warehouse

    Barton, D.J.; Foulger, G.R.; Henderson, J.R.; Julian, B.R.

    1999-01-01

    Intense earthquake swarms at Long Valley caldera in late 1997 and early 1998 occurred on two contrasting structures. The first is defined by the intersection of a north-northwesterly array of faults with the southern margin of the resurgent dome, and is a zone of hydrothermal upwelling. Seismic activity there was characterized by high b-values and relatively low values of D, the spatial fractal dimension of hypocentres. The second structure is the pre-existing South Moat fault, which has generated large-magnitude seismic activity in the past. Seismicity on this structure was characterized by low b-values and relatively high D. These observations are consistent with low-magnitude, clustered earthquakes on the first structure, and higher-magnitude, diffuse earthquakes on the second structure. The first structure is probably an immature fault zone, fractured on a small scale and lacking a well-developed fault plane. The second zone represents a mature fault with an extensive, coherent fault plane.
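
    The two statistics emphasized here, the b-value and the spatial correlation dimension D, have standard estimators. The sketch below shows the Aki/Utsu maximum-likelihood b-value and a pair-counting estimate of D; the completeness magnitude, binning, and scaling range are user-supplied assumptions, and this is not the authors' code.

```python
import numpy as np

def b_value(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= completeness m_c;
    dm is the catalogue's magnitude binning."""
    m = np.asarray(mags, float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def correlation_dimension(xyz_km, r_min=0.5, r_max=5.0, n_r=10):
    """Spatial correlation dimension D from the slope of log C(r) vs log r,
    where C(r) is the fraction of hypocentre pairs separated by less than r.
    r_min should exceed the location error and the (r_min, r_max) scaling
    range must be chosen with care."""
    pts = np.asarray(xyz_km, float)
    n = len(pts)
    dists = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    pair_d = dists[np.triu_indices(n, k=1)]
    radii = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    C = np.array([np.mean(pair_d < r) for r in radii])
    C = np.clip(C, 1e-12, None)               # avoid log10(0) at small radii
    slope, _ = np.polyfit(np.log10(radii), np.log10(C), 1)
    return slope
```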

  9. Probable Post-Traumatic Stress Disorder and Its Predictors in Disaster-Bereaved Survivors: A Longitudinal Study After the Sichuan Earthquake.

    PubMed

    Hu, Xiuying; Cao, Xiaoyi; Wang, Heng; Chen, Qian; Liu, Maoqiong; Yamamoto, Aiko

    2016-04-01

    This study examined the trajectory of probable PTSD prevalence and severity, and analyzed the predictors for PTSD severity in bereaved survivors at 6 months and 18 months after the 2008 Sichuan earthquake. This was a longitudinal study with 226 bereaved survivors sampled at 6 months and 18 months post-earthquake. The instrument used in the study was the revised version of the Impact of Event Scale. The results showed that the prevalence of probable PTSD in bereaved survivors decreased significantly from 38.9% at 6 months to 16.8% at 18 months post-earthquake. Loss of a child, being directly exposed to the death of family members and property loss during the earthquake, and mental health services utilization after the earthquake were significant predictors for PTSD severity at both assessments. These findings can contribute to post-disaster psychological rescue work. The bereaved survivors at high risk for more severe PTSD should be particularly targeted. PMID:26992870

  10. Evidence of shallow fault zone strengthening after the 1992 M7.5 Landers, California, earthquake

    PubMed

    Li; Vidale; Aki; Xu; Burdette

    1998-01-01

    Repeated seismic surveys of the Landers, California, fault zone that ruptured in the magnitude (M) 7.5 earthquake of 1992 reveal an increase in seismic velocity with time. P, S, and fault zone trapped waves were excited by near-surface explosions in two locations in 1994 and 1996, and were recorded on two linear, three-component seismic arrays deployed across the Johnson Valley fault trace. The travel times of P and S waves for identical shot-receiver pairs decreased by 0.5 to 1.5 percent from 1994 to 1996, with the larger changes at stations located within the fault zone. These observations indicate that the shallow Johnson Valley fault is strengthening after the main shock, most likely because of closure of cracks that were opened by the 1992 earthquake. The increase in velocity is consistent with the prevalence of dry over wet cracks and with a reduction in the apparent crack density near the fault zone by approximately 1.0 percent from 1994 to 1996. PMID:9422692

  11. Rupture propagation of the 2004 Parkfield, California, earthquake from observations at the UPSAR

    USGS Publications Warehouse

    Fletcher, Joe B.; Spudich, P.; Baker, L.M.

    2006-01-01

    Using a short-baseline seismic array (U.S. Geological Survey Parkfield Dense Seismograph Array [UPSAR]) about 12 km west of the rupture initiation of the 28 September 2004 M 6.0 Parkfield, California, earthquake, we have observed the movement of the rupture front of this earthquake on the San Andreas fault. The sources of high-frequency arrivals at UPSAR, which we use to identify the rupture front, are mapped onto the San Andreas fault using their apparent velocity and back azimuth. Measurements of apparent velocity and back azimuth are calibrated using aftershocks, which have a compact source and known location. Aftershock back azimuths show considerable lateral refraction, consistent with a high-velocity ridge on the southwest side of the fault. We infer that the initial mainshock rupture velocity was approximately the Rayleigh speed (with respect to the slower side of the fault), and the rupture then slowed to about 0.66β near the town of Parkfield after 2 sec. The last well-correlated pulse, 4 sec after S, is the largest at UPSAR, and its source is near the region of large accelerations recorded by strong-motion accelerographs and close to the northern extent of continuous surface fractures on the southwest fracture zone. Coincidence of sources with preshock and aftershock distributions suggests that fault material properties control rupture behavior. High-frequency sources approximately correlate with the edges of asperities identified as regions of high slip derived from inversion of strong-motion waveforms.
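
    Apparent velocity and back azimuth from a small-aperture array are commonly obtained from a plane-wave fit to relative arrival times. The sketch below shows that standard least-squares fit; it illustrates the general technique, not the UPSAR processing itself.

```python
import numpy as np

def plane_wave_fit(xy_km, t_rel_s):
    """Fit relative arrival times across an array with a plane wave
    t_i = t0 + s . x_i, where s is the horizontal slowness vector (s/km).
    xy_km: (N, 2) station coordinates (east, north) relative to the array
    center; t_rel_s: (N,) picked relative arrival times.
    Returns apparent velocity (km/s) and back azimuth (degrees from north)."""
    xy = np.asarray(xy_km, float)
    t = np.asarray(t_rel_s, float)
    A = np.column_stack([np.ones(len(t)), xy])        # [1, x_east, y_north]
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    sx, sy = coef[1], coef[2]                         # slowness components
    v_app = 1.0 / np.hypot(sx, sy)
    # the slowness vector points along the propagation direction; the back
    # azimuth points back toward the source, i.e. the opposite direction
    baz = np.degrees(np.arctan2(-sx, -sy)) % 360.0
    return v_app, baz
```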

  12. Evidence of shallow fault zone strengthening after the 1992 M7.5 Landers, California, earthquake

    USGS Publications Warehouse

    Li, Y.-G.; Vidale, J.E.; Aki, K.; Xu, Fei; Burdette, T.

    1998-01-01

    Repeated seismic surveys of the Landers, California, fault zone that ruptured in the magnitude (M) 7.5 earthquake of 1992 reveal an increase in seismic velocity with time. P, S, and fault zone trapped waves were excited by near-surface explosions in two locations in 1994 and 1996, and were recorded on two linear, three-component seismic arrays deployed across the Johnson Valley fault trace. The travel times of P and S waves for identical shot-receiver pairs decreased by 0.5 to 1.5 percent from 1994 to 1996, with the larger changes at stations located within the fault zone. These observations indicate that the shallow Johnson Valley fault is strengthening after the main shock, most likely because of closure of cracks that were opened by the 1992 earthquake. The increase in velocity is consistent with the prevalence of dry over wet cracks and with a reduction in the apparent crack density near the fault zone by approximately 1.0 percent from 1994 to 1996.
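
    The interpretation of the repeated-survey travel-time decrease rests on a one-line relation: for a fixed path, the fractional velocity change is minus the fractional travel-time change. A minimal sketch with an invented pick pair:

```python
def velocity_change(t_first_s, t_second_s):
    """Fractional velocity change implied by repeated travel-time picks for an
    identical shot-receiver pair (fixed path): dv/v = -(t2 - t1) / t1.
    A 0.5-1.5 percent travel-time decrease therefore maps to a 0.5-1.5 percent
    velocity increase, as reported for stations inside the fault zone."""
    return -(t_second_s - t_first_s) / t_first_s

# example with invented picks: 2.000 s in 1994, 1.980 s in 1996 -> +1.0 percent
print(f"{velocity_change(2.000, 1.980):.3%}")
```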

  13. A Double-difference Earthquake location algorithm: Method and application to the Northern Hayward Fault, California

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2000-01-01

    We have developed an efficient method to determine high-resolution hypocenter locations over large distances. The location method incorporates ordinary absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. Residuals between observed and theoretical travel-time differences (or double-differences) are minimized for pairs of earthquakes at each station while linking together all observed event-station pairs. A least-squares solution is found by iteratively adjusting the vector difference between hypocentral pairs. The double-difference algorithm minimizes errors due to unmodeled velocity structure without the use of station corrections. Because catalog and cross-correlation data are combined into one system of equations, interevent distances within multiplets are determined to the accuracy of the cross-correlation data, while the relative locations between multiplets and uncorrelated events are simultaneously determined to the accuracy of the absolute travel-time data. Statistical resampling methods are used to estimate data accuracy and location errors. Uncertainties in double-difference locations are improved by more than an order of magnitude compared to catalog locations. The algorithm is tested, and its performance is demonstrated on two clusters of earthquakes located on the northern Hayward fault, California. There it collapses the diffuse catalog locations into sharp images of seismicity and reveals horizontal lineations of hypocenters that define the narrow regions on the fault where stress is released by brittle failure.
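
    To make the link between differential travel times and relative locations concrete, here is a deliberately stripped-down sketch: a single event pair in 2-D with a homogeneous velocity, iteratively adjusting one event's location and origin time so that calculated differential times match the observed ones. The published algorithm couples all pairs and stations in one large sparse system; every name and value below is a placeholder.

```python
import numpy as np

def dd_relocate_pair(stations, ev1, ev2, dt_obs, v=6.0, n_iter=5):
    """Minimal 2-D double-difference sketch. Event 1 is held fixed; the
    location and origin time of event 2 are adjusted so that calculated
    differential travel times match dt_obs[k] = t1_obs - t2_obs at station k.
    A homogeneous velocity v (km/s) stands in for the real velocity model.
    ev1, ev2: dicts with keys 'x', 'y' (km) and 't0' (origin time, s)."""
    sta = np.asarray(stations, float)                 # (K, 2) station coords
    x1, y1, t01 = ev1["x"], ev1["y"], ev1["t0"]
    x2, y2, t02 = ev2["x"], ev2["y"], ev2["t0"]
    dt_obs = np.asarray(dt_obs, float)
    for _ in range(n_iter):
        d1 = np.hypot(sta[:, 0] - x1, sta[:, 1] - y1)
        d2 = np.hypot(sta[:, 0] - x2, sta[:, 1] - y2)
        dt_calc = (t01 + d1 / v) - (t02 + d2 / v)
        r = dt_obs - dt_calc                          # double differences
        # partial derivatives of dt_calc with respect to (x2, y2, t02)
        A = np.column_stack([-(x2 - sta[:, 0]) / (v * d2),
                             -(y2 - sta[:, 1]) / (v * d2),
                             -np.ones(len(sta))])
        dm, *_ = np.linalg.lstsq(A, r, rcond=None)
        x2, y2, t02 = x2 + dm[0], y2 + dm[1], t02 + dm[2]
    return {"x": x2, "y": y2, "t0": t02}
```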

  14. Overview of the Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) Time-Independent Model

    NASA Astrophysics Data System (ADS)

    Field, E. H.; Arrowsmith, R.; Biasi, G. P.; Bird, P.; Dawson, T. E.; Felzer, K. R.; Jackson, D. D.; Johnson, K. M.; Jordan, T. H.; Madugo, C. M.; Michael, A. J.; Milner, K. R.; Page, M. T.; Parsons, T.; Powers, P.; Shaw, B. E.; Thatcher, W. R.; Weldon, R. J.; Zeng, Y.

    2013-12-01

    We present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), where the primary achievements have been to relax fault segmentation and include multi-fault ruptures, both limitations of UCERF2. The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level 'grand inversion' that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (e.g., magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded due to lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (e.g., constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 over-prediction of M6.5-7 earthquake rates, and also includes types of multi-fault ruptures seen in nature. While UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site-specific investigation. Supporting products may be of general interest, and we list key assumptions and avenues for future model improvements.

  15. Comparison of four moderate-size earthquakes in southern California using seismology and InSAR

    USGS Publications Warehouse

    Mellors, R.J.; Magistrale, H.; Earle, P.; Cogbill, A.H.

    2004-01-01

    Source parameters determined from interferometric synthetic aperture radar (InSAR) measurements and from seismic data are compared from four moderate-size (less than M 6) earthquakes in southern California. The goal is to verify approximate detection capabilities of InSAR, assess differences in the results, and test how the two results can be reconciled. First, we calculated the expected surface deformation from all earthquakes greater than magnitude 4 in areas with available InSAR data (347 events). A search for deformation from the events in the interferograms yielded four possible events with magnitudes less than 6. The search for deformation was based on a visual inspection as well as cross-correlation in two dimensions between the measured signal and the expected signal. A grid-search algorithm was then used to estimate focal mechanism and depth from the InSAR data. The results were compared with locations and focal mechanisms from published catalogs. An independent relocation using seismic data was also performed. The seismic locations fell within the area of the expected rupture zone for the three events that show clear surface deformation. Therefore, the technique shows the capability to resolve locations with high accuracy and is applicable worldwide. The depths determined by InSAR agree with well-constrained seismic locations determined in a 3D velocity model. Depth control for well-imaged shallow events using InSAR data is good, and better than the seismic constraints in some cases. A major difficulty for InSAR analysis is the poor temporal coverage of InSAR data, which may make it impossible to distinguish deformation due to different earthquakes at the same location.

  16. Winnetka deformation zone: Surface expression of coactive slip on a blind fault during the Northridge earthquake sequence, California. Evidence that coactive faulting occurred in the Canoga Park, Winnetka, and Northridge areas during the 17 January 1994, Northridge, California earthquake

    SciTech Connect

    Cruikshank, K.M.; Johnson, A.M.; Fleming, R.W.; Jones, R.L.

    1996-12-31

    Measurements of normalized length changes of streets over an area of 9 km² in San Fernando Valley of Los Angeles, California, define a distinctive strain pattern that may well reflect blind faulting during the 1994 Northridge earthquake. Strain magnitudes are about 3 × 10⁻⁴, locally 10⁻³. They define a deformation zone trending diagonally from near Canoga Park in the southwest, through Winnetka, to near Northridge in the northeast. The deformation zone is about 4.5 km long and 1 km wide. The northwestern two-thirds of the zone is a belt of extension of streets, and the southeastern one-third is a belt of shortening of streets. On the northwest and southeast sides of the deformation zone the magnitude of the strains is too small to measure, less than 10⁻⁴. Complete states of strain measured in the northeastern half of the deformation zone show that the directions of principal strains are parallel and normal to the walls of the zone, so the zone is not a strike-slip zone. The magnitudes of strains measured in the northeastern part of the Winnetka area were large enough to fracture concrete and soils, and the area of larger strains correlates with the area of greater damage to such roads and sidewalks. All parts of the pattern suggest a blind fault at depth, most likely a reverse fault dipping northwest but possibly a normal fault dipping southeast. The magnitudes of the strains in the Winnetka area are consistent with the strains produced at the ground surface by a blind fault plane extending to depth on the order of 2 km and a net slip on the order of 1 m, within a distance of about 100 to 500 m of the ground surface. The pattern of damage in the San Fernando Valley suggests a fault segment much longer than the 4.5 km defined by survey data in the Winnetka area. The blind fault segment may extend several kilometers in both directions beyond the Winnetka area. This study of the Winnetka area further supports observations that a large earthquake sequence can include rupture along both a main fault and nearby faults with quite different senses of slip. Faults near the main fault that approach the ground surface or cut the surface in an area have the potential of moving coactively in a major earthquake. Movement on such faults is associated with significant damage during an earthquake. The fault that produced the main Northridge shock and the faults that moved coactively in the Northridge area probably are parts of a large structure. Such interrelationships may be key to understanding earthquakes and damage caused by tectonism.
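
    The street-length measurements give normal strains (dL/L) along known azimuths, from which the 2-D strain tensor and the principal strains follow by least squares. The sketch below shows that standard rosette-style reduction; it is illustrative, not the authors' computation.

```python
import numpy as np

def strain_from_lines(azimuth_deg, extension):
    """Recover the 2-D infinitesimal strain tensor from normal strains
    (fractional length changes, e.g. measured along streets) in three or more
    non-parallel directions. azimuth_deg: line azimuths clockwise from north;
    extension: corresponding dL/L values. Solves, in a least-squares sense,
    e(a) = eEE*sin(a)^2 + eNN*cos(a)^2 + 2*eEN*sin(a)*cos(a)."""
    a = np.radians(np.asarray(azimuth_deg, float))
    e = np.asarray(extension, float)
    G = np.column_stack([np.sin(a) ** 2, np.cos(a) ** 2,
                         2.0 * np.sin(a) * np.cos(a)])
    (eEE, eNN, eEN), *_ = np.linalg.lstsq(G, e, rcond=None)
    E = np.array([[eEE, eEN], [eEN, eNN]])      # east-north strain tensor
    vals, vecs = np.linalg.eigh(E)              # principal strains and axes
    return E, vals, vecs
```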

  17. Monitoring of the stress state variations of the Southern California for the purpose of earthquake prediction

    NASA Astrophysics Data System (ADS)

    Gokhberg, M.; Garagash, I.; Bondur, V.; Steblov, G. M.

    2014-12-01

    A three-dimensional geomechanical model of Southern California was developed, including mountain relief, fault tectonics, and characteristic internal features such as the top of the consolidated crust and the Moho surface. The initial stress state of the model is governed by gravitational forces and by horizontal tectonic motions estimated from GPS observations. The analysis shows that three-dimensional geomechanical models allow monitoring of changes in the stress state during the seismic process in order to constrain where seismic activity is likely to increase. This investigation demonstrates one possible approach to monitoring upcoming seismicity over periods of days to weeks to months. Continuous analysis of the stress state was carried out during 2009-2014. Each new earthquake with M ~ 1 and above from the USGS catalog was treated as a new defect in the Earth's crust, with a finite size, that redistributes the stress state. The overall calculation technique was based on a single crustal damage function, recalculated every half month. As a result, every half month we identified, in the upper and partly the middle crustal layers, the locations of maximum values of the stress-state parameters: elastic energy density, shear stress, and the proximity of the crustal layers to their strength limit. All these parameters exhibit similar spatial and temporal distributions. The observations show that all four of the strongest events (M ~ 5.5-7.2) that occurred in Southern California during the analyzed period were preceded by anomalies in these parameters, with lead times of weeks to months, within 10-50 km of the upcoming earthquake. After each event the stress-state source disappeared. An accompanying figure shows the migration of maxima of the gradients of the stress-state variations (parameter D) near the epicenter of the 4 April 2010 M=7.2 earthquake during 1 January to 1 May 2010, with the major faults indicated; the accompanying table samples these values at two-week intervals before ("-") and after ("+") the event.

  18. Artefacts of earthquake location errors and short-term incompleteness on seismicity clusters in southern California

    NASA Astrophysics Data System (ADS)

    Zaliapin, Ilya; Ben-Zion, Yehuda

    2015-09-01

    We document and quantify the effects of two types of catalogue uncertainties, earthquake location errors and short-term incompleteness, on the results of statistical cluster analyses of seismicity in southern California. In the main part of the study we analyse 117 076 events with m ≥ 2 in southern California during 1981-2013 from the waveform-relocated catalogue of Hauksson et al. We present statistical evidence for three artefacts caused by the absolute and relative location errors: (1) increased distance between offspring and parents; (2) underestimated clustering, quantified by the number of offspring per event, the total number of clustered events, and some other statistics; and (3) overestimated background rates. We also find that short-term incompleteness leads to (4) apparent magnitude dependence and temporal fluctuations of b-values. The reported artefacts are robustly observed in three additional catalogues of southern California: the relocated catalogue of Richards-Dinger & Shearer during 1975-1998, and the two subcatalogues (1961-1981 and 1981-2013) of the Advanced National Seismic System catalogue. This implies that the reported artefacts are not specific to a particular (re)location method. The comparative quality of the four examined catalogues is reflected in the magnitude of the artefacts. The location errors in the examined catalogues mostly affect events with m < 3.5, while for larger magnitudes the location error effects are negligible. This is explained by comparing the location errors and rupture lengths of events and their parents. Finally, our analysis suggests that selected aggregated cluster statistics (e.g. proportion of singles) are less prone to location artefacts than individual statistics (e.g. the distance to parent or parent-offspring assignment). The results can inform a range of studies focused on small-magnitude seismicity patterns in the presence of catalogue uncertainties.

  19. Survey of strong motion earthquake effects on thermal power plants in California with emphasis on piping systems. Volume 2, Appendices

    SciTech Connect

    Stevenson, J.D.

    1995-11-01

    Volume 2 of the "Survey of Strong Motion Earthquake Effects on Thermal Power Plants in California with Emphasis on Piping Systems" contains appendices which detail the design and seismic response of several power plants subjected to strong motion earthquakes. The particular plants considered include the Ormond Beach, Long Beach and Seal Beach, Burbank, El Centro, Glendale, Humboldt Bay, Kern Valley, Pasadena and Valley power plants. Included are a typical power plant piping specification and photographs of typical piping and support installations for the plants surveyed. Detailed piping support spacing data are also included.

  20. Crustal refraction profile of the Long Valley caldera, California, from the January 1983 Mammoth Lakes earthquake swarm

    USGS Publications Warehouse

    Luetgert, James H.; Mooney, Walter D.

    1985-01-01

    Seismic-refraction profiles recorded north of Mammoth Lakes, California, using earthquake sources from the January 1983 swarm complement earlier explosion refraction profiles and provide velocity information from deeper in the crust in the area of the Long Valley caldera. Eight earthquakes from a depth range of 4. 9 to 8. 0 km confirm the observation of basement rocks with seismic velocities ranging from 5. 8 to 6. 4 km/sec extending at least to depths of 20 km. The data provide further evidence for the existence of a partial melt zone beneath Long Valley caldera and constrain its geometry. Refs.

  1. Fault structure and mechanics of the Hayward Fault, California from double-difference earthquake locations

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2002-01-01

    The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects with the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters only a few tens of meters wide, bounded by areas almost absent of seismic activity. During the interval from 1984 to 1998, when digital waveforms are available, we find that fewer than 6.5% of the earthquakes can be classified as repeating earthquakes, events that rupture the same fault patch more than one time. These most commonly are located in the shallow creeping part of the fault, or within the streaks at greater depth. The slow repeat rate of 2-3 times within the 15-year observation period for events with magnitudes around M = 1.5 is indicative of a low slip rate or a high stress drop. The absence of microearthquakes over large, contiguous areas of the northern Hayward Fault plane in the depth interval from ∼5 to 10 km and the concentrations of seismicity at these depths suggest that the aseismic regions are either locked or retarded and are storing strain energy for release in future large-magnitude earthquakes.
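
    Repeating earthquakes are typically identified by waveform similarity at common stations. As a hedged illustration of that screening step (thresholds and preprocessing vary between studies, and this is not the authors' code), the sketch below computes the maximum normalized cross-correlation between two traces.

```python
import numpy as np

def max_normalized_cc(w1, w2):
    """Maximum normalized cross-correlation between two waveforms recorded at
    the same station and sampling rate. Traces are demeaned and all lags are
    searched; values near 1 indicate nearly identical waveforms."""
    a = np.asarray(w1, float) - np.mean(w1)
    b = np.asarray(w2, float) - np.mean(w2)
    cc = np.correlate(a, b, mode="full")
    return cc.max() / (np.linalg.norm(a) * np.linalg.norm(b))

# candidate repeater if the similarity exceeds a chosen threshold, e.g.:
# is_repeater = max_normalized_cc(trace_a, trace_b) > 0.95
```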

  2. Probability of introduction of exotic strains of bluetongue virus into the US and into California through importation of infected cattle.

    PubMed

    Hoar, Bruce R; Carpenter, Tim E; Singer, Randall S; Gardner, Ian A

    2004-12-15

    Strategies designed to minimize the probability of bluetongue virus (BTV) introduction to new areas should be based on a quantitative assessment of the probability of actually establishing the virus once it is introduced. The risk of introducing a new strain of bluetongue virus into a region depends on the number of viremic animals that enter and the competency of local vectors to transmit the virus. We used Monte Carlo simulation to model the probability of introducing BTV into California, USA, and the US through importation of cattle. Records of cattle and calf imports into California and the US were obtained, as was seroprevalence information from the exporting countries. A simulation model was constructed to evaluate the probability of importing either a viremic PCR-negative animal after a 14-day quarantine, a c-ELISA BTV-antibody-negative animal after a 28-day quarantine, or an untested viremic animal after a 100-day quarantine into California and into the US. We found that for animals imported to the US, the simulated (best to worst scenarios) median percentage that tested positive for BTV antibody ranged from 5.4 to 7.2%, while for the subset imported to California, the simulated median percentage that tested positive for BTV antibody ranged from 20.9 to 78.9%. Using PCR, for animals imported to the US these values were 71.8-85.3%, and for those imported to California, the simulated median percentage that tested positive ranged from 74.3 to 92.4%. The probability that an imported animal was BTV-viremic is very low regardless of the scenario selected (median probability=0.0%). The probability of introducing an exotic strain of BTV into California or the US by importing infected cattle was remote, and the current Office International des Epizooties (OIE) recommendation of either a final PCR test performed 14 days after entry into quarantine, a c-ELISA performed 28 days after entry into quarantine, or a 100-day quarantine with no testing requirement was adequate to protect cattle in the US and California from an exotic strain of BTV. PMID:15579336
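
    The structure of an import-risk Monte Carlo simulation of this kind can be sketched in a few lines. The sketch below is a generic illustration, not the study's model: each imported animal is viremic with some probability and, if viremic, is caught by testing with some sensitivity; all parameter values shown are placeholders.

```python
import numpy as np

def p_viremic_entry(n_animals, p_viremic, test_sensitivity,
                    n_sims=20_000, seed=0):
    """Monte Carlo sketch of an import-risk calculation: returns the simulated
    probability that at least one undetected viremic animal enters, given the
    number of imports, the per-animal probability of viremia, and the
    sensitivity of the testing protocol. All inputs are illustrative
    placeholders, not values from the study."""
    rng = np.random.default_rng(seed)
    viremic = rng.random((n_sims, n_animals)) < p_viremic
    missed = viremic & (rng.random((n_sims, n_animals)) >= test_sensitivity)
    return missed.any(axis=1).mean()

# e.g. 1,000 imports, 0.1 percent viremic, a 95-percent-sensitive protocol:
print(p_viremic_entry(1000, 0.001, 0.95))
```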

  3. Timing of large earthquakes since A.D. 800 on the Mission Creek strand of the San Andreas fault zone at Thousand Palms Oasis, near Palm Springs, California

    USGS Publications Warehouse

    Fumal, T.E.; Rymer, M.J.; Seitz, G.G.

    2002-01-01

    Paleoseismic investigations across the Mission Creek strand of the San Andreas fault at Thousand Palms Oasis indicate that four and probably five surface-rupturing earthquakes occurred during the past 1200 years. Calendar age estimates for these earthquakes are based on a chronological model that incorporates radiocarbon dates from 18 in situ burn layers and stratigraphic ordering constraints. These five earthquakes occurred in about A.D. 825 (770-890) (mean, 95% range), A.D. 982 (840-1150), A.D. 1231 (1170-1290), A.D. 1502 (1450-1555), and after a date in the range of A.D. 1520-1680. The most recent surface-rupturing earthquake at Thousand Palms is likely the same as the A.D. 1676 ± 35 event at Indio reported by Sieh and Williams (1990). Each of the past five earthquakes recorded on the San Andreas fault in the Coachella Valley strongly overlaps in time with an event at the Wrightwood paleoseismic site, about 120 km northwest of Thousand Palms Oasis. Correlation of events between these two sites suggests that at least the southernmost 200 km of the San Andreas fault zone may have ruptured in each earthquake. The average repeat time for surface-rupturing earthquakes on the San Andreas fault in the Coachella Valley is 215 ± 25 years, whereas the elapsed time since the most recent event is 326 ± 35 years. This suggests the southernmost San Andreas fault zone likely is very near failure. The Thousand Palms Oasis site is underlain by a series of six channels cut and filled since about A.D. 800 that cross the fault at high angles. A channel margin about 900 years old is offset right laterally 2.0 ± 0.5 m, indicating a slip rate of 4 ± 2 mm/yr. This slip rate is low relative to geodetic and other geologic slip rate estimates (26 ± 2 mm/yr and about 23-35 mm/yr, respectively) on the southernmost San Andreas fault zone, possibly because (1) the site is located in a small step-over in the fault trace and so the rate is not representative of the Mission Creek fault, (2) slip is partitioned northward from the San Andreas fault and into the eastern California shear zone, and/or (3) slip is partitioned onto the Banning strand of the San Andreas fault zone.

  4. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minutes) generation of maps of instrumental ground-motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by the spatial interpolation of the measured ground motions with geologically based frequency and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are now made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will be made available with dedicated communications for emergency response agencies and critical users.
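
    As a simplified illustration of the mapping step (the operational system combines observations with ground-motion predictions and frequency- and amplitude-dependent site terms), the sketch below interpolates station peak ground accelerations onto a grid with inverse-distance weighting and applies a per-cell site amplification factor; all names, parameters, and the interpolation scheme are placeholder assumptions.

```python
import numpy as np

def interpolate_pga(sta_xy, sta_pga, grid_xy, site_amp, power=2.0):
    """Interpolate observed peak ground acceleration onto map grid points by
    inverse-distance weighting, then apply a per-cell site amplification
    factor (e.g. from a geology-based amplification map). IDW is a stand-in
    for the actual ShakeMap interpolation.

    sta_xy: (n_sta, 2) station coordinates (km); sta_pga: (n_sta,) values;
    grid_xy: (n_grid, 2) grid coordinates; site_amp: (n_grid,) factors."""
    sta_xy = np.asarray(sta_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    pga = np.asarray(sta_pga, float)
    d = np.hypot(grid_xy[:, None, 0] - sta_xy[None, :, 0],
                 grid_xy[:, None, 1] - sta_xy[None, :, 1])
    w = 1.0 / np.maximum(d, 0.1) ** power     # clamp avoids division by zero
    reference = (w * pga).sum(axis=1) / w.sum(axis=1)
    return reference * np.asarray(site_amp, float)
```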

  5. Probability of detecting perchlorate under natural conditions in deep groundwater in California and the Southwestern United States

    USGS Publications Warehouse

    Fram, Miranda S.; Belitz, Kenneth

    2011-01-01

    We use data from 1626 groundwater samples collected in California, primarily from public drinking water supply wells, to investigate the distribution of perchlorate in deep groundwater under natural conditions. The wells were sampled for the California Groundwater Ambient Monitoring and Assessment Priority Basin Project. We develop a logistic regression model for predicting probabilities of detecting perchlorate at concentrations greater than multiple threshold concentrations as a function of climate (represented by an aridity index) and potential anthropogenic contributions of perchlorate (quantified as an anthropogenic score, AS). AS is a composite categorical variable including terms for nitrate, pesticides, and volatile organic compounds. Incorporating water-quality parameters in AS permits identification of perturbation of natural occurrence patterns by flushing of natural perchlorate salts from unsaturated zones by irrigation recharge as well as addition of perchlorate from industrial and agricultural sources. The data and model results indicate low concentrations (0.1-0.5 μg/L) of perchlorate occur under natural conditions in groundwater across a wide range of climates, beyond the arid to semiarid climates in which they mostly have been previously reported. The probability of detecting perchlorate at concentrations greater than 0.1 μg/L under natural conditions ranges from 50-70% in semiarid to arid regions of California and the Southwestern United States to 5-15% in the wettest regions sampled (the Northern California coast). The probability of concentrations above 1 μg/L under natural conditions is low (generally <3%).
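
    A minimal sketch of the logistic-regression form described above is given below; the coefficients are invented for illustration and are not the fitted values or exact covariate definitions of the published model.

      # Illustrative only: the coefficients below are invented, not the fitted
      # values from the study.
      import numpy as np

      def p_detect(aridity_index, anthro_score, b0=-1.0, b_ai=-2.0, b_as=0.8):
          """Logistic model for the probability of detecting perchlorate above a
          threshold, as a function of an aridity index (higher = wetter) and a
          categorical anthropogenic score."""
          z = b0 + b_ai * aridity_index + b_as * anthro_score
          return 1.0 / (1.0 + np.exp(-z))

      # wetter climate (higher aridity index) -> lower natural detection probability
      print(p_detect(aridity_index=0.2, anthro_score=0))   # arid, no anthropogenic signal
      print(p_detect(aridity_index=1.5, anthro_score=0))   # humid, no anthropogenic signal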

  6. Stability and uncertainty of finite-fault slip inversions: Application to the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.

    2007-01-01

    The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.
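
    The linear least-squares branch of the problem can be sketched as a damped system d = Gm with a smoothing constraint appended; everything below (Green's functions, data, and damping weight) is synthetic and stands in for the waveform-based matrices used in the study.

      # Minimal damped linear least-squares sketch of a subfault parameterization.
      import numpy as np

      rng = np.random.default_rng(0)
      n_data, n_subfaults = 120, 20
      G = rng.normal(size=(n_data, n_subfaults))          # Green's functions (synthetic)
      m_true = np.clip(rng.normal(1.0, 0.5, n_subfaults), 0, None)
      d = G @ m_true + 0.05 * rng.normal(size=n_data)     # "observed" data

      # Append first-difference smoothing rows, weighted by lam.
      lam = 1.0
      L = np.eye(n_subfaults) - np.eye(n_subfaults, k=1)
      A = np.vstack([G, lam * L[:-1]])
      b = np.concatenate([d, np.zeros(n_subfaults - 1)])
      m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(np.round(m_est, 2))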

  7. Dynamic deformations and the M6.7, Northridge, California earthquake

    USGS Publications Warehouse

    Gomberg, J.

    1997-01-01

    A method of estimating the complete time-varying dynamic deformation field from commonly available three-component single-station seismic data has been developed and applied to study the relationship between dynamic deformation and ground failures and structural damage using observations from the 1994 Northridge, California earthquake. Estimates from throughout the epicentral region indicate that the horizontal strains exceed the vertical ones by more than a factor of two. The largest strains (exceeding ~100 μstrain) correlate with regions of greatest ground failure. There is a poor correlation between structural damage and peak strain amplitudes. The smallest strains, ~35 μstrain, are estimated in regions of no damage or ground failure. Estimates in the two regions with the most severe and well-mapped permanent deformation, Potrero Canyon and the Granada-Mission Hills regions, exhibit the largest strains; peak horizontal strain estimates in these regions equal ~139 and ~229 μstrain, respectively. Of note, the dynamic principal strain axes have strikes consistent with the permanent failure features, suggesting that, while gravity, sub-surface materials, and hydrologic conditions undoubtedly played fundamental roles in determining where and what types of failures occurred, the dynamic deformation field may have been favorably sized and oriented to initiate failure processes. These results support other studies that conclude that the permanent deformation resulted from ground shaking, rather than from static strains associated with primary or secondary faulting. They also suggest that such an analysis, either using data or theoretical calculations, may enable observations of paleo-ground failure to be used as quantitative constraints on the size and geometry of previous earthquakes. © 1997 Elsevier Science Limited.
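
    A common first-order check on such strain amplitudes, which is not the single-station method of the paper itself, is the plane-wave approximation strain ≈ particle velocity / phase velocity; the numbers below are illustrative.

      # Plane-wave rule of thumb (an assumption for illustration, not the paper's method):
      # dynamic strain ~ particle velocity / phase velocity.
      pgv = 0.3                  # m/s, illustrative peak ground velocity
      phase_velocity = 3000.0    # m/s, assumed shallow shear/surface-wave speed
      strain = pgv / phase_velocity
      print(f"{strain * 1e6:.0f} microstrain")   # ~100 microstrain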

  8. Detailed observations of California foreshock sequences: Implications for the earthquake initiation process

    USGS Publications Warehouse

    Dodge, D.A.; Beroza, G.C.; Ellsworth, W.L.

    1996-01-01

    We find that foreshocks provide clear evidence for an extended nucleation process before some earthquakes. In this study, we examine in detail the evolution of six California foreshock sequences, the 1986 Mount Lewis (ML = 5.5), the 1986 Chalfant (ML = 6.4), the 1986 Stone Canyon (ML = 4.7), the 1990 Upland (ML = 5.2), the 1992 Joshua Tree (MW = 6.1), and the 1992 Landers (MW = 7.3) sequence. Typically, uncertainties in hypocentral parameters are too large to establish the geometry of foreshock sequences and hence to understand their evolution. However, the similarity of location and focal mechanisms for the events in these sequences leads to similar foreshock waveforms that we cross correlate to obtain extremely accurate relative locations. We use these results to identify small-scale fault zone structures that could influence nucleation and to determine the stress evolution leading up to the mainshock. In general, these foreshock sequences are not compatible with a cascading failure nucleation model in which the foreshocks all occur on a single fault plane and trigger the mainshock by static stress transfer. Instead, the foreshocks seem to concentrate near structural discontinuities in the fault and may themselves be a product of an aseismic nucleation process. Fault zone heterogeneity may also be important in controlling the number of foreshocks, i.e., the stronger the heterogeneity, the greater the number of foreshocks. The size of the nucleation region, as measured by the extent of the foreshock sequence, appears to scale with mainshock moment in the same manner as determined independently by measurements of the seismic nucleation phase. We also find evidence for slip localization as predicted by some models of earthquake nucleation. Copyright 1996 by the American Geophysical Union.
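
    The basic measurement behind such relative relocations is the differential arrival time obtained by cross-correlating similar waveforms; the toy example below, with a synthetic wavelet and an assumed 0.07 s delay, shows the idea.

      # Toy sketch: measuring a differential arrival time between two similar
      # waveforms by cross-correlation, the basic ingredient of relative relocation.
      import numpy as np
      from scipy.signal import correlate

      dt = 0.01                                  # s, sample interval
      t = np.arange(0, 2, dt)
      wavelet = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * (t - 0.5))
      shift_samples = 7
      trace_a = wavelet
      trace_b = np.roll(wavelet, shift_samples)  # same waveform, delayed by 0.07 s

      cc = correlate(trace_b, trace_a, mode="full")
      lag = np.argmax(cc) - (len(trace_a) - 1)
      print("differential time:", lag * dt, "s")  # recovers ~0.07 s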

  9. Detailed observations of California foreshock sequences: Implications for the earthquake initiation process

    NASA Astrophysics Data System (ADS)

    Dodge, Douglas A.; Beroza, Gregory C.; Ellsworth, W. L.

    1996-10-01

    We find that foreshocks provide clear evidence for an extended nucleation process before some earthquakes. In this study, we examine in detail the evolution of six California foreshock sequences, the 1986 Mount Lewis (ML = 5.5), the 1986 Chalfant (ML = 6.4), the 1986 Stone Canyon (ML = 4.7), the 1990 Upland (ML = 5.2), the 1992 Joshua Tree (MW = 6.1), and the 1992 Landers (MW = 7.3) sequence. Typically, uncertainties in hypocentral parameters are too large to establish the geometry of foreshock sequences and hence to understand their evolution. However, the similarity of location and focal mechanisms for the events in these sequences leads to similar foreshock waveforms that we cross correlate to obtain extremely accurate relative locations. We use these results to identify small-scale fault zone structures that could influence nucleation and to determine the stress evolution leading up to the mainshock. In general, these foreshock sequences are not compatible with a cascading failure nucleation model in which the foreshocks all occur on a single fault plane and trigger the mainshock by static stress transfer. Instead, the foreshocks seem to concentrate near structural discontinuities in the fault and may themselves be a product of an aseismic nucleation process. Fault zone heterogeneity may also be important in controlling the number of foreshocks, i.e., the stronger the heterogeneity, the greater the number of foreshocks. The size of the nucleation region, as measured by the extent of the foreshock sequence, appears to scale with mainshock moment in the same manner as determined independently by measurements of the seismic nucleation phase. We also find evidence for slip localization as predicted by some models of earthquake nucleation.

  10. Coseismic and postseismic vertical movements associated with the 1940 M7.1 Imperial Valley, California, earthquake

    NASA Technical Reports Server (NTRS)

    Reilinger, R.

    1984-01-01

    Leveling surveys conducted along two routes that cross the Imperial fault in southern California indicate spatially coherent elevation changes attributable to coseismic and postseismic effects of the 1940 M7.1 Imperial Valley earthquake. The 1931-1941 elevation changes are consistent with theoretical models of vertical deformation of an elastic half space for a finite length strike-slip fault, using fault parameters that are consistent with the observed surface offsets following the 1940 earthquake. The elevation changes suggest an earthquake scenario consisting of a large coseismic slip in the southern half of the fault, which transferred stress to the northern part as well as to the Brawley fault to the northeast.

  11. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Small Business Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of small businesses in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each business establishment size category to each Instrumental Intensity level. The analysis concerns the direct effect of the earthquake on small businesses. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by business establishment size.
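
    The exposure and sensitivity measures can be illustrated with a small, entirely hypothetical ZIP-code-level table; the counts below are invented and the intensity assignments are arbitrary.

      # Hypothetical ZIP-code-level table.  Exposure is the absolute count of
      # employees in each Instrumental Intensity bin; sensitivity is the percentage
      # of each establishment-size class falling in that bin.
      import pandas as pd

      df = pd.DataFrame({
          "zip": ["90001", "90002", "92201", "93301"],
          "size_class": ["small", "small", "small", "large"],
          "employees": [1200, 800, 400, 5000],
          "intensity": ["IX", "VII", "X", "VI"],
      })

      exposure = df.groupby("intensity")["employees"].sum()
      sensitivity = df.pivot_table(index="size_class", columns="intensity",
                                   values="employees", aggfunc="sum", fill_value=0)
      sensitivity = sensitivity.div(sensitivity.sum(axis=1), axis=0) * 100  # percent
      print(exposure)
      print(sensitivity.round(1))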

  12. Full Moment Tensor Variations and Isotropic Characteristics of Earthquakes in the Gulf of California Transform Fault System

    NASA Astrophysics Data System (ADS)

    Ortega, Roberto; Quintanar, Luis; Rivera, Luis

    2014-10-01

    The full moment tensor is a mathematical expression of six independent variables; however, on a routine basis, it is common practice to reduce them to five by assuming that the isotropic component is zero. This constraint is valid in most tectonic regimes where slip occurs entirely at the fault surface (e.g. subduction zones); however, we found that full moment tensors are best represented in transform fault systems. Here we present a method to analyze the source complexity of earthquakes of different sizes using a simple formulation that relates the elastic constants obtained from independent studies to the angle between the slip vector and the fault normal vector; this angle is obtained from the full moment tensors. This angle, the proportion of volume change, and the constant-volume (shear) component are numerical indicators of the complexity of the source; earthquakes are more complex as the angle deviates from the value expected for pure shear slip, or as T and k deviate from zero. These parameters are obtained from the eigensolution of the full moment tensor. We analyzed earthquakes in the Gulf of California that exhibit a clear isotropic component and we observed that the constant-volume parameter T is independent of scalar moment, suggesting that big and small earthquakes are equally complex. In addition, simple models of a single fault are not sufficient to describe physically all the combinations of these parameters in a source-type plot. We also found that the principal strike direction of the Transform Fault System in the Gulf of California follows the first-order approximation of the normal surface of the full moment tensor solution, whereas for deviatoric moment tensors the principal direction does not coincide with the strike of the Transform Fault System. Our observations that small and large earthquakes are equally complex are in agreement with recent studies of strike-slip earthquakes.
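
    A sketch of the kind of eigensolution-based decomposition involved is shown below; the moment tensor is arbitrary, and sign and normalization conventions for source-type parameters such as T and k vary between papers, so the expressions here should be read as one common convention rather than the authors' definitions.

      # Sketch of an isotropic/deviatoric decomposition of a full moment tensor.
      import numpy as np

      M = np.array([[1.2, 0.3, 0.0],
                    [0.3, -0.8, 0.1],
                    [0.0, 0.1, 0.2]])          # arbitrary symmetric moment tensor

      iso = np.trace(M) / 3.0
      M_dev = M - iso * np.eye(3)
      e = np.linalg.eigvalsh(M_dev)            # deviatoric eigenvalues, ascending
      e_small = e[np.argmin(np.abs(e))]        # smallest-magnitude deviatoric eigenvalue
      T = 2.0 * e_small / np.max(np.abs(e))    # 0 for a pure double couple, +/-1 for CLVD
      k = iso / (np.abs(iso) + np.max(np.abs(e)))  # isotropic proportion, between -1 and 1
      print(T, k)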

  13. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  14. Disaster Response and Decision Support in Partnership with the California Earthquake Clearinghouse

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Rosinski, A.; Vaughan, D.; Morentz, J.

    2014-12-01

    Getting the right information to the right people at the right time is critical during a natural disaster. E-DECIDER (Emergency Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response) is a NASA decision support system designed to produce remote sensing and geophysical modeling data products that are relevant to the emergency preparedness and response communities and serve as a gateway to enable the delivery of NASA decision support products to these communities. The E-DECIDER decision support system has several tools, services, and products that have been used to support end-user exercises in partnership with the California Earthquake Clearinghouse since 2012, including near real-time deformation modeling results and on-demand maps of critical infrastructure that may have been potentially exposed to damage by a disaster. E-DECIDER's underlying service architecture allows the system to facilitate delivery of NASA decision support products to the Clearinghouse through XchangeCore Web Service Data Orchestration that allows trusted information exchange among partner agencies. This in turn allows Clearinghouse partners to visualize data products produced by E-DECIDER and other NASA projects through incident command software such as SpotOnResponse or ArcGIS Online.

  15. Cruise report for A1-02-SC southern California CABRILLO project, Earthquake Hazards Task

    USGS Publications Warehouse

    Normark, William R.; Fisher, Michael A.; Gutmacher, Christina E.; Sliter, Ray; Hibbeler, Lori; Feingold, Beth; Reid, Jane A.

    2003-01-01

    A two-week marine geophysical survey obtained sidescan-sonar images and multiple sets of high-resolution seismic-reflection profiles in the southern California offshore area between Point Arguello and Point Dume. The data were obtained to support two project activities of the United States Geological Survey (USGS) Coastal and Marine Geology (CMG) Program: (1) the evaluation of the geologic hazards posed by earthquake faults and landslides in the offshore areas of Santa Barbara Channel and western Santa Monica Basin and (2) the determination of the location of active hydrocarbon seeps in the vicinity of Point Conception as part of a collaborative study with the Minerals Management Service (MMS). The 2002 cruise, A1-02-SC, is the fourth major data-collection effort in support of the first objective (Normark et al., 1999a, b; Gutmacher et al., 2000). A cruise to obtain sediment cores to constrain the timing of deformation interpreted from the geophysical records is planned for the summer of 2003.

  16. Fault tectonics and earthquake hazards in the Peninsular Ranges, Southern California

    NASA Technical Reports Server (NTRS)

    Merifield, P. M.; Lamar, D. L. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. ERTS and Skylab images reveal a number of prominent lineaments in the basement terrane of the Peninsular Ranges, Southern California. The major, well-known, active, northwest trending, right-slip faults are well displayed, but northeast and west to west-northwest trending lineaments are also present. Study of large-scale airphotos followed by field investigations have shown that several of these lineaments represent previously unmapped faults. Pitches of striations on shear surfaces of the northeast and west trending faults indicate oblique-slip movement; data are insufficient to determine the net-slip. These faults are restricted to the pre-Tertiary basement terrane and are truncated by the major northwest trending faults; therefore, they may have formed in response to an earlier stress system. Future work should be directed toward determining whether the northeast and west trending faults are related to the presently active stress system or to an older inactive system, because this question relates to the earthquake risk in the vicinity of these faults.

  17. Remotely triggered microearthquakes and tremor in central California following the 2010 Mw 8.8 Chile earthquake

    USGS Publications Warehouse

    Peng, Zhigang; Hill, David P.; Shelly, David R.; Aiken, Chastity

    2010-01-01

    We examine remotely triggered microearthquakes and tectonic tremor in central California following the 2010 Mw 8.8 Chile earthquake. Several microearthquakes near the Coso Geothermal Field were apparently triggered, with the largest earthquake (Ml 3.5) occurring during the large-amplitude Love surface waves. The Chile mainshock also triggered numerous tremor bursts near the Parkfield-Cholame section of the San Andreas Fault (SAF). The locally triggered tremor bursts are partially masked at lower frequencies by the regionally triggered earthquake signals from Coso, but can be identified by applying high-pass or matched filters. Both triggered tremor along the SAF and the Ml 3.5 earthquake in Coso are consistent with frictional failure at different depths on critically-stressed faults under the Coulomb failure criteria. The triggered tremor, however, appears to be more phase-correlated with the surface waves than the triggered earthquakes, likely reflecting differences in constitutive properties between the brittle, seismogenic crust and the underlying lower crust.
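
    A minimal sketch of the high-pass filtering step used to separate local tremor from longer-period regional earthquake signals is shown below; the 2 Hz corner, sampling rate, and synthetic trace are assumptions for illustration only.

      # Sketch of high-pass filtering to isolate the tremor band from long-period energy.
      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 100.0                      # samples per second (assumed)
      t = np.arange(0, 60, 1 / fs)
      trace = (np.sin(2 * np.pi * 0.05 * t)                              # long-period surface waves
               + 0.1 * np.random.default_rng(1).normal(size=t.size))     # broadband high-frequency energy

      b, a = butter(4, 2.0 / (fs / 2.0), btype="highpass")   # 2 Hz corner (assumption)
      tremor_band = filtfilt(b, a, trace)
      print(tremor_band[:5])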

  18. Changes in the discharge characteristics of thermal springs and fumaroles in the Long Valley Caldera, California, resulting from earthquakes on May 25-27, 1980

    USGS Publications Warehouse

    Sorey, M.L.; Clark, Mark D.

    1981-01-01

    Changes in flow rate and turbidity have been observed and measured in hot springs in the Long Valley caldera, California, following earthquakes with magnitudes up to 6.3 in May 1980. Increases in flow rate of some hot springs occurred within minutes of the earthquakes, followed by more gradual decreases in flow rate to pre-earthquake levels. Spring temperatures and chemistries also show no long-term variations following earthquakes. Transient changes in discharge characteristics of the hot springs appear to result from increases in the permeability of fault conduits transmitting the hot water to the surface. (USGS)

  19. Multifrequential periodogram analysis of earthquake occurrence: An alternative approach to the Schuster spectrum, with two examples in central California

    NASA Astrophysics Data System (ADS)

    Dutilleul, Pierre; Johnson, Christopher W.; Bürgmann, Roland; Wan, Yongge; Shen, Zheng-Kang

    2015-12-01

    Periodic earthquake occurrences may reflect links with semidiurnal to multiyear tides, seasonal hydrological loads, and ~14 month pole tide forcing. The Schuster spectrum is a recent extension of Schuster's traditional test for periodicity analysis in seismology. We present an alternative approach: the multifrequential periodogram analysis (MFPA), performed on time series of monthly earthquake numbers. We explore if seismicity in two central California regions, the Central San Andreas Fault near Parkfield (CSAF-PKD) and the Sierra Nevada-Eastern California Shear Zone (SN-ECSZ), exhibits periodic behavior at periods of 2 months to several years. Original and declustered catalogs spanning up to 26 years were analyzed with both methods. For CSAF-PKD, the MFPA resolves ~1 year periodicities, with additional statistically significant periods of ~6 and ~4 months; for SN-ECSZ, it finds a strong ~14 month periodic component. Unlike the Schuster spectrum, the MFPA has an exact modified statistic at non-Fourier frequencies. Informed by the MFPA period estimates, trigonometric models with periods of 12, 6, and 4 months (Model 1) and 14.24 and 12 months (Model 2) were fitted to time series of earthquake numbers. For CSAF-PKD, Model 1 shows a peak annual earthquake occurrence during August-November and a secondary peak in April. Similar peaks, or troughs, are found in annual and semiannual components of pole tide and tide-induced stress model time series and fault normal-stress reduction from seasonal hydrological unloading. For SN-ECSZ, the dominant ~14 month periodicity prevents regular annual peaking, and Model 2 provides a better fit (Δ adjusted R² = 2.4%). This new MFPA application resolves several periodicities in earthquake catalogs that reveal external periodic forcing.
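
    The trigonometric-model fitting step can be sketched as an ordinary least-squares harmonic regression on a monthly count series; the synthetic counts and the recovered amplitudes below are illustrative only.

      # Harmonic (trigonometric) regression: sine and cosine terms with 12-, 6-,
      # and 4-month periods fitted to a synthetic monthly earthquake-count series.
      import numpy as np

      rng = np.random.default_rng(2)
      months = np.arange(26 * 12)                       # ~26 years of monthly counts
      counts = 10 + 3 * np.cos(2 * np.pi * months / 12) + rng.poisson(2, months.size)

      periods = [12.0, 6.0, 4.0]
      cols = [np.ones_like(months, dtype=float)]
      for p in periods:
          cols += [np.cos(2 * np.pi * months / p), np.sin(2 * np.pi * months / p)]
      X = np.column_stack(cols)
      coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
      amplitudes = [np.hypot(coef[1 + 2 * i], coef[2 + 2 * i]) for i in range(len(periods))]
      print(dict(zip(periods, np.round(amplitudes, 2))))   # ~3 at the 12-month period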

  20. Earthquakes, active faults, and geothermal areas in the Imperial Valley, California.

    PubMed

    Hill, D P; Mowinckel, P; Peake, L G

    1975-06-27

    A dense seismograph network in the Imperial Valley recorded a series of earthquake swarms along the Imperial and Brawley faults and a diffuse pattern of earthquakes along the San Jacinto fault. Two known geothermal areas are closely associated with these earthquake swarms. This seismicity pattern demonstrates that seismic slip is occurring along both the Imperial-Brawley and San Jacinto fault systems. PMID:17772600

  1. Forecasting the evolution of seismicity in southern California: Animations built on earthquake stress transfer

    USGS Publications Warehouse

    Toda, S.; Stein, R.S.; Richards-Dinger, K.; Bozkurt, S.B.

    2005-01-01

    We develop a forecast model to reproduce the distribution of main shocks, aftershocks and surrounding seismicity observed during 1986-2003 in a 300 × 310 km area centered on the 1992 M = 7.3 Landers earthquake. To parse the catalog into frames with equal numbers of aftershocks, we animate seismicity in log time increments that lengthen after each main shock; this reveals aftershock zone migration, expansion, and densification. We implement a rate/state algorithm that incorporates the static stress transferred by each M ≥ 6 shock and then evolves. Coulomb stress changes amplify the background seismicity, so small stress changes produce large changes in seismicity rate in areas of high background seismicity. Similarly, seismicity rate declines in the stress shadows are evident only in areas with previously high seismicity rates. Thus a key constituent of the model is the background seismicity rate, which we smooth from 1981 to 1986 seismicity. The mean correlation coefficient between observed and predicted M ≥ 1.4 shocks (the minimum magnitude of completeness) is 0.52 for 1986-2003 and 0.63 for 1992-2003; a control standard aftershock model yields 0.54 and 0.52 for the same periods. Four M ≥ 6.0 shocks struck during the test period; three are located at sites where the expected seismicity rate falls above the 92nd percentile, and one is located above the 75th percentile. The model thus reproduces much, but certainly not all, of the observed spatial and temporal seismicity, from which we infer that the decaying effect of stress transferred by successive main shocks influences seismicity for decades. Finally, we offer a M ≥ 5 earthquake forecast for 2005-2015, assigning probabilities to 324 10 × 10 km cells.
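
    The seismicity-rate response to a static stress step in this class of rate/state model follows Dieterich's (1994) expression; the sketch below evaluates it with illustrative values for the background rate, A-sigma, aftershock duration, and stress change.

      # Dieterich (1994) rate/state seismicity-rate response to a Coulomb stress step.
      # All numerical values are illustrative.
      import numpy as np

      r_background = 1.0      # reference seismicity rate (events per day)
      a_sigma = 0.05          # MPa, constitutive parameter A times normal stress
      t_a = 10.0 * 365.0      # days, aftershock (relaxation) duration
      d_cff = 0.1             # MPa, Coulomb stress change imparted by the main shock

      t = np.linspace(1, 20 * 365, 200)   # days after the stress step
      rate = r_background / ((np.exp(-d_cff / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)
      print(rate[0], rate[-1])            # elevated early on, decaying back toward r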

  2. Coulomb stress changes imparted by simulated M>7 earthquakes to major fault surfaces in Southern California

    NASA Astrophysics Data System (ADS)

    Rollins, J. C.; Ely, G. P.; Jordan, T. H.

    2011-12-01

    To study static stress interactions between faults in southern California and identify cases where one large earthquake could trigger another, we select fourteen M>7 events simulated by the SCEC/CME CyberShake project and calculate the Coulomb stress changes those events impart to major fault surfaces in the UCERF2 fault model for the region. CyberShake simulates between 6 and 32 slip distributions for each event at a slip sampling resolution of 1 km, and we calculate stress changes on fault surfaces at the same resolution, a level of detail which is unprecedented in studies of stress transfer and which allows us to study the way that variabilities in slip on the source can affect imparted stress changes. We find that earthquakes rupturing the southern San Andreas fault generally decrease Coulomb stress on right-lateral faults in the Los Angeles basin, while M>7 events on the San Jacinto, Elsinore, Newport-Inglewood and Palos Verdes faults generally decrease stress on parallel right-lateral faults but increase Coulomb stress on the Mojave or San Bernardino sections of the San Andreas. Stress interactions between strike-slip and thrust faults and between the San Andreas and Garlock faults depend on the rupture area of the source. Coulomb stress changes imparted by simulated SAF events to locations on the San Jacinto and Garlock faults within ~8 km of the San Andreas appear to be influenced more by the nearby distribution of high and low slip on the San Andreas than by the overall slip distribution across the entire rupture. Using a simplified model, we calculate that an area of no slip surrounded by high slip on a rupture imparts strong Coulomb stress increases ≤7 km to either side of the source fault, possibly explaining the apparent ~8-km range of influence of local slip on the San Andreas. Additionally, we devise a method for evaluating uncertainty values in Coulomb stress changes caused by uncertainties in the strike, dip and rake of the receiver fault. These findings may be useful in understanding stress interactions between faults of different orientations and rakes, stress transfer and variability at short distances from the source fault, and applications of uncertainty values to Coulomb stress changes.
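
    Resolving a stress-change tensor onto a receiver fault is the core calculation here; the generic sketch below, with an arbitrary stress tensor, fault geometry, and friction coefficient, shows the standard Coulomb failure stress change rather than the CyberShake-specific workflow.

      # Generic Coulomb stress change on a receiver fault: dCFS = d_shear + mu' * d_normal.
      # The stress tensor, fault normal, slip direction, and friction are illustrative.
      import numpy as np

      d_sigma = np.array([[ 0.02, -0.05, 0.00],
                          [-0.05,  0.01, 0.00],
                          [ 0.00,  0.00, -0.01]])   # MPa, stress change tensor
      n_hat = np.array([0.0, 1.0, 0.0])             # unit normal to the receiver fault
      s_hat = np.array([1.0, 0.0, 0.0])             # unit slip (rake) direction
      mu_eff = 0.4                                  # effective friction coefficient

      traction = d_sigma @ n_hat
      d_shear = traction @ s_hat                    # shear stress change in the rake direction
      d_normal = traction @ n_hat                   # normal stress change (tension positive)
      d_cfs = d_shear + mu_eff * d_normal
      print(f"Coulomb stress change: {d_cfs:.3f} MPa")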

  3. Micro-earthquake Analysis for Reservoir Properties at the Prati-32 Injection Test, The Geysers, California

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Singh, A. K.

    2014-12-01

    The Prati-32 injection test offers a particular opportunity to test rock physics theories and tomography results as it occurred in a previously undisturbed portion of The Geysers, California. Within the northwest Geysers, there is a high temperature zone (HTZ) directly below the normal temperature reservoir (NTR) at ~2.6 km below ground surface. We demonstrate an analysis of micro-earthquake data with rock physics theory to identify fractures, state of fluids, and permeable zones. We obtain earthquake source properties (hypocenters, magnitudes, stress drops, and moment tensors), 3D isotropic velocity (Vp and Vs) and attenuation (Qp and Qs seismic quality factors), derived elastic moduli (Lambda, bulk and Young's moduli), and Poisson's ratio. After one month of injection, changes in these parameters occur right at the point where injection occurred, which confirms the accuracy of the tomography. Bulk modulus, Poisson's ratio, and Lambda increased; Vs decreased; Qp and Vp increased slightly; and Qs did not change. We interpret these observations to indicate fluid saturation along with fracturing around the well bottom. Fracturing would decrease Vs, while saturation would not affect Vs; saturation, in contrast, would increase Vp even with fracturing. Saturation and fracturing should have competing effects on intrinsic and extrinsic Q: saturation should increase intrinsic Qp but not affect extrinsic Qp. We cannot explain the unchanged Qs unless the effect of increasing intrinsic Qs is offset by a decrease in extrinsic Qs. The increases in Poisson's ratio and Lambda are another indication of saturation. After two months of injection, compared to one month before injection, bulk modulus and Vp have returned to values comparable to pre-injection levels for the volume around the well bottom, and a new anomaly in Vp has moved below the well. Vs remains low, and Lambda and Poisson's ratio remain high compared to before injection; these anomalies have not moved but have increased in size. We interpret these observations to indicate continued saturation with increased fracturing. Only Vp and bulk modulus have changed significantly, and this is due to the increased fracturing offsetting the saturation.
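
    The derived moduli mentioned above follow from standard isotropic elasticity relations; the numerical values below are illustrative, not Geysers tomography results.

      # Standard isotropic relations for moduli derived from Vp, Vs, and density.
      rho = 2600.0              # kg/m^3 (illustrative)
      vp, vs = 4500.0, 2600.0   # m/s (illustrative)

      mu = rho * vs**2                              # shear modulus
      lam = rho * (vp**2 - 2.0 * vs**2)             # Lame's first parameter (Lambda)
      bulk = lam + 2.0 * mu / 3.0                   # bulk modulus
      poisson = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))
      young = 9.0 * bulk * mu / (3.0 * bulk + mu)   # Young's modulus
      print(f"mu={mu:.3e} Pa, lambda={lam:.3e} Pa, K={bulk:.3e} Pa, E={young:.3e} Pa, nu={poisson:.2f}")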

  4. Earthquake-cycle models of the Pacific-North America plate boundary at Point Reyes, California

    NASA Astrophysics Data System (ADS)

    Vaghri, A.; Hearn, E. H.

    2011-12-01

    At Point Reyes, California, about 36 mm/yr of Pacific-North America relative plate motion is accommodated by (from west to east) the San Andreas, Rodgers Creek, Napa and Green Valley faults. We have developed a suite of viscoelastic earthquake cycle models which take into account the timing and recurrence intervals of large earthquakes on these faults, and are calibrated to the current GPS velocity field. We infer a locking depth of about 12 km for all four faults, consistent with previous analyses of local hypocenter depths (e.g., d'Alessio et al, 2005). Low-viscosity viscous shear zones appear to be required for our models to fit the GPS velocities. In order to fit the high surface velocity gradient across this set of faults, the effective viscosity for the lower crust and mantle must exceed 10^20 Pa s. A modest contrast in effective viscosity of the lower crust and upper mantle across the San Andreas Fault, with higher viscosity values (at least 5 x 10^20 Pa s) to the east, is also indicated. In the region between the Rodgers Creek Fault and the Green Valley Fault, GPS data indicate a higher strain rate than our models can explain. Even after shifting the entire Green Valley Fault slip rate (9 mm/yr) westward to the Napa Fault, the misfit is not eliminated. Double-difference hypocenter data (Waldhauser and Schaff, 2008) suggest the presence of another fault zone between the Napa Fault and the Green Valley Fault, and that all three of these faults dip toward the west. This offsets their deep, creeping extensions several km from their surface traces. A preliminary model with a suitably offset, deep Green Valley Fault extension cuts the WRSS misfit to GPS site velocities by over a factor of two. Since non-vertical fault dips are often missed in seismic studies (e.g. Fuis et al., 2008), creeping shear zones at depth may routinely be offset by several kilometers from their surface traces, unless alternate evidence of their position at depth is available (e.g. Shelly et al., 2009). This may lead to incorrect inferences of material asymmetry, or errors in the attribution of slip rates to closely spaced, active faults.

  5. Directional topographic site response at Tarzana observed in aftershocks of the 1994 Northridge, California, earthquake: Implications for mainshock motions

    USGS Publications Warehouse

    Spudich, P.; Hellweg, M.; Lee, W.H.K.

    1996-01-01

    The Northridge earthquake caused 1.78 g acceleration in the east-west direction at a site in Tarzana, California, located about 6 km south of the mainshock epicenter. The accelerograph was located atop a hill about 15-m high, 500-m long, and 130-m wide, striking about N78°E. During the aftershock sequence, a temporary array of 21 three-component geophones was deployed in six radial lines centered on the accelerograph, with an average sensor spacing of 35 m. Station COO was located about 2 m from the accelerograph. We inverted aftershock spectra to obtain average relative site response at each station as a function of direction of ground motion. We identified a 3.2-Hz resonance that is a transverse oscillation of the hill (a directional topographic effect). The top/base amplification ratio at 3.2 Hz is about 4.5 for horizontal ground motions oriented approximately perpendicular to the long axis of the hill and about 2 for motions parallel to the hill. This resonance is seen most strongly within 50 m of COO. Other resonant frequencies were also observed. A strong lateral variation in attenuation, probably associated with a fault, caused substantially lower motion at frequencies above 6 Hz at the east end of the hill. There may be some additional scattered waves associated with the fault zone and seen at both the base and top of the hill, causing particle motions (not spectral ratios) at the top of the hill to be rotated about 20° away from the direction transverse to the hill. The resonant frequency, but not the amplitude, of our observed topographic resonance agrees well with theory, even for such a low hill. Comparisons of our observations with theoretical results indicate that the 3D shape of the hill and its internal structure are important factors affecting its response. The strong transverse resonance of the hill does not account for the large east-west mainshock motions. Assuming linear soil response, mainshock east-west motions at the Tarzana accelerograph were amplified by a factor of about 2 or less compared with sites at the base of the hill. Probable variations in surficial shear-wave velocity do not account for the observed differences among mainshock acceleration observed at Tarzana and at two different sites within 2 km of Tarzana.

  6. Earthquake warning system for Japan Railways’ bullet train; implications for disaster prevention in California

    USGS Publications Warehouse

    Nakamura, Y.; Tucker, B. E.

    1988-01-01

    Today, Japanese society is well aware of the prediction of the Tokai earthquake. It is estimated by the Tokyo municipal government that this predicted earthquake could kill 30,000 people. (This estimate is viewed by many as conservative; other Japanese government agencies have made estimates, but they have not been published.) The reduction in the number of deaths from 120,000 to 30,000 between the Kanto earthquake and the predicted Tokai earthquake is due in large part to the reduction in the proportion of wooden construction (houses).

  7. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.
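
    The division of labor between a global search over nonlinear (geometry) parameters and a linear least-squares solve for slip can be sketched with a one-parameter toy problem; the forward model, bounds, and noise level below are invented, and scipy's dual_annealing merely stands in for the ASA algorithm used in the study.

      # Toy two-stage inversion: global search over one geometry parameter ("dip"),
      # with the linear slip amplitudes solved by least squares inside the objective.
      import numpy as np
      from scipy.optimize import dual_annealing

      x = np.linspace(0, 10, 200)
      def greens(dip):                       # geometry-dependent basis functions (synthetic)
          return np.column_stack([np.exp(-((x - c) / dip) ** 2) for c in (3.0, 7.0)])

      d_obs = greens(1.5) @ np.array([2.0, 1.0]) + 0.01 * np.random.default_rng(3).normal(size=x.size)

      def misfit(params):
          G = greens(params[0])
          slip, *_ = np.linalg.lstsq(G, d_obs, rcond=None)   # linear step
          return np.sum((G @ slip - d_obs) ** 2)

      result = dual_annealing(misfit, bounds=[(0.5, 5.0)])   # nonlinear (global) step
      print(result.x)                                        # recovers a dip near 1.5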

  8. A model of earthquake triggering probabilities and application to dynamic deformations constrained by ground motion observations

    USGS Publications Warehouse

    Gomberg, J.; Felzer, K.

    2008-01-01

    We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of and distance from the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r ≫ D, P(r, D) scales as D^m, where m ~ 2, and decays with distance approximately as r^-n, with n = 2, with a possible change to r^-(n-1) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.

  9. Focal Mechanisms for Deep Crustal Earthquakes in the Central Foothills and Near Yosemite National Park in the Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Ryan, J. C.; Frassetto, A.; Hurd, O.; Zandt, G.; Gilbert, H.; Owens, T.; Jones, C.

    2008-12-01

    Past studies have observed seismicity occurring to depths near 40 km beneath the central Sierra Nevada in eastern California, but the cause of this unusual activity remains largely unknown. We use seismograms from a recent deployment of the Sierra Nevada EarthScope Project (SNEP) broadband array and interspersed USArray TA stations to study this deep crustal earthquake activity. From June of 2005 to May of 2006, we recorded 126 earthquakes in the central western flank of the Sierra Nevada that relocated in the depth range from 1.0 to 47.6 km. These earthquakes have small magnitudes (M < 3), occur at a rate of ~10 per month, and occasionally display repeating waveforms. The majority of the earthquakes fall into two distinct clusters. One cluster of earthquakes forms a diffuse band under the low foothills north of Fresno, with focal depths mostly between 20 and 35 km. The second cluster underlies the higher western slope of the range in a more compact north-south band extending from the southern edge of Yosemite National Park to the San Joaquin River. These events have focal depths from near surface to 30 km, and are located above occasional deep, long-period (LP) events (Pitt et al., SRL, 2002). We use P- and S-wave polarity picks and P/SH amplitude ratios to construct focal mechanisms for 23 of the larger, well-recorded earthquakes, 14 in the Foothills Cluster and 9 in the Yosemite Cluster. The focal mechanisms show dominantly near vertical and subhorizontal nodal planes, although several events do show clear normal or reverse mechanisms. Although there is some scatter, a majority of the mechanisms from the Foothills Cluster have S-to-SW steeply dipping T-axes. The majority of earthquakes in the Yosemite Cluster have P-axes moderately dipping to the SW and T-axes moderately dipping to the NE, similar to focal mechanisms of earthquakes associated with the recent magma intrusion event under Lake Tahoe (von Seggern et al., BSSA, 2008). We suggest that the earthquakes in the Foothills Cluster are occurring in response to the downward pull of an attached piece of dense ultramafic batholith residue and the events in the Yosemite Cluster are related to post-delamination crustal magmatic processes.

  10. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Intraregional Commuter, Worker, and Earnings Flow Analysis

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region (Jones and others, 2008). This report uses selected datasets from the U.S. Census Bureau and the State of California's Employment Development Department to develop preliminary estimates of the number and spatial distribution of commuters who cross the San Andreas Fault and to characterize these commuters by the industries in which they work and their total earnings. The analysis concerns the relative exposure of the region's economy to the effects of the earthquake as described by the location, volume, and earnings of those commuters who work in each of the region's economic sectors. It is anticipated that damage to transportation corridors traversing the fault would lead to at least short-term disruptions in the ability of commuters to travel between their places of residence and work.

  11. Behavior of Repeating Earthquake Sequences in Central California and the Implications for Subsurface Fault Creep

    SciTech Connect

    Templeton, D C; Nadeau, R; Burgmann, R

    2007-07-09

    Repeating earthquakes (REs) are sequences of events that have nearly identical waveforms and are interpreted to represent fault asperities driven to failure by loading from aseismic creep on the surrounding fault surface at depth. We investigate the occurrence of these REs along faults in central California to determine which faults exhibit creep and the spatio-temporal distribution of this creep. At the juncture of the San Andreas and southern Calaveras-Paicines faults, both faults as well as a smaller secondary fault, the Quien Sabe fault, are observed to produce REs over the observation period of March 1984-May 2005. REs in this area reflect a heterogeneous creep distribution along the fault plane with significant variations in time. Cumulative slip over the observation period at individual sequence locations is determined to range from 5.5-58.2 cm on the San Andreas fault, 4.8-14.1 cm on the southern Calaveras-Paicines fault, and 4.9-24.8 cm on the Quien Sabe fault. Creep at depth appears to mimic the behavior of creep seen at the surface, in that evidence of steady slip, triggered slip, and episodic slip phenomena is also observed in the RE sequences. For comparison, we investigate the occurrence of REs west of the San Andreas fault within the southern Coast Range. Events within these RE sequences only occurred minutes to weeks apart from each other and then did not repeat again over the observation period, suggesting that REs in this area are not produced by steady aseismic creep of the surrounding fault surface.

  12. Response of the San Andreas fault to the 1983 Coalinga-Nuñez earthquakes: an application of interaction-based probabilities for Parkfield

    USGS Publications Warehouse

    Toda, Shinji; Stein, Ross S.

    2002-01-01

    The Parkfield-Cholame section of the San Andreas fault, site of an unfulfilled earthquake forecast in 1985, is the best monitored section of the world's most closely watched fault. In 1983, the M = 6.5 Coalinga and M = 6.0 Nuñez events struck 25 km northeast of Parkfield. Seismicity rates climbed for 18 months along the creeping section of the San Andreas north of Parkfield and dropped for 6 years along the locked section to the south. Right-lateral creep also slowed or reversed from Parkfield south. Here we calculate that the Coalinga sequence increased the shear and Coulomb stress on the creeping section, causing the rate of small shocks to rise until the added stress was shed by additional slip. However, the 1983 events decreased the shear and Coulomb stress on the Parkfield segment, causing surface creep and seismicity rates to drop. We use these observations to cast the likelihood of a Parkfield earthquake into an interaction-based probability, which includes both the renewal of stress following the 1966 Parkfield earthquake and the stress transfer from the 1983 Coalinga events. We calculate that the 1983 shocks dropped the 10-year probability of a M ∼ 6 Parkfield earthquake by 22% (from 54 ± 22% to 42 ± 23%) and that the probability did not recover until about 1991, when seismicity and creep resumed. Our analysis may thus explain why the Parkfield earthquake did not strike in the 1980s, but not why it was absent in the 1990s. We calculate a 58 ± 17% probability of a M ∼ 6 Parkfield earthquake during 2001–2011.
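
    One simple way to fold a stress step into a renewal probability, sketched below with a lognormal recurrence model and entirely illustrative numbers (not the values or the distribution used in the study), is to convert the stress change into an equivalent clock advance or delay using the fault stressing rate.

      # Sketch: conditional renewal probability with a stress-step clock change.
      # All numerical values and the lognormal choice are assumptions for illustration.
      import numpy as np
      from scipy.stats import lognorm

      mean_recurrence = 25.0   # years (illustrative)
      cov = 0.5                # aperiodicity (coefficient of variation), illustrative
      sigma = np.sqrt(np.log(1.0 + cov**2))              # lognormal shape parameter
      scale = mean_recurrence / np.sqrt(1.0 + cov**2)    # lognormal median

      def cond_prob(elapsed, horizon=10.0):
          """P(event within `horizon` yr | quiet for `elapsed` yr) under the renewal model."""
          cdf = lambda t: lognorm.cdf(t, s=sigma, scale=scale)
          return (cdf(elapsed + horizon) - cdf(elapsed)) / (1.0 - cdf(elapsed))

      elapsed = 17.0           # years since the last earthquake (illustrative)
      d_cff = -0.1             # MPa, stress change imparted by the nearby earthquake
      stressing_rate = 0.02    # MPa/yr on the receiver fault
      clock_change = d_cff / stressing_rate              # equivalent time shift, years

      print(cond_prob(elapsed))                 # renewal probability alone
      print(cond_prob(elapsed + clock_change))  # probability with the stress step folded in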

  13. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely (with a 31 percent probability) fault in the Bay Area to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services. 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion simulations of a Hayward Fault earthquake, (5) a new USGS Fact Sheet about the earthquake and the Hayward Fault, (6) a virtual tour of the 1868 earthquake, and (7) a new online field trip guide to the Hayward Fault using locations accessible by car and public transit. Finally, the California Geological Survey and many other Alliance members sponsored the Third Conference on Earthquake Hazards in the East Bay at CSU East Bay in Hayward for the three days following the 140th anniversary. The 1868 Alliance hopes to commemorate the anniversary of the 1868 Hayward Earthquake every year to maintain and increase public awareness of this fault, the hazards it and other East Bay Faults pose, and the ongoing need for earthquake preparedness and mitigation.

  14. Virtual California, ETAS, and OpenHazards web services: Responding to earthquakes in the age of Big Data

    NASA Astrophysics Data System (ADS)

    Yoder, M. R.; Schultz, K.; Rundle, J. B.; Glasscoe, M. T.; Donnellan, A.

    2014-12-01

    The response to the 2014 m=6 Napa earthquake showcased data-driven services and technologies that aided first responders and decision makers in quickly assessing damage, estimating aftershock hazard, and efficiently allocating resources where they were most needed. These tools have been developed from fundamental research as part of a broad collaboration between researchers, policy makers, and executive decision makers, facilitated in no small part by the California Earthquake Clearinghouse, and have been practiced and honed during numerous disaster response exercises over the past several years. On 24 August 2014, and in the weeks following the m=6 Napa event, it became evident that these technologies will play an important role in the response to natural (and other) disasters in the 21st century. Given the continued rapid growth of computational capabilities, remote sensing technologies, and data gathering capacities, including by unpiloted aerial vehicles (UAVs), it is reasonable to expect that both the volume and variety of data available during a response scenario will grow significantly in the decades to come. Inevitably, modern Data Science will be critical to effective disaster response in the 21st century. In this work, we discuss the roles that earthquake simulators, statistical seismicity models, and remote sensing technologies played in the 2014 Napa earthquake response. We further discuss "Big Data" technologies and data models that facilitate the transformation of raw data into disseminable information and actionable products, and we outline a framework for the next generation of disaster response data infrastructure.

  15. LLNL-Generated Content for the California Academy of Sciences, Morrison Planetarium Full-Dome Show: Earthquake

    SciTech Connect

    Rodgers, A J; Petersson, N A; Morency, C E; Simmons, N A; Sjogreen, B

    2012-01-23

    The California Academy of Sciences (CAS) Morrison Planetarium is producing a 'full-dome' planetarium show on earthquakes and asked LLNL to produce content for the show. Specifically, the show features numerical ground motion simulations of the M 7.9 1906 San Francisco earthquake and a possible future M 7.05 Hayward fault scenario earthquake. The show also features concepts of plate tectonics and mantle convection using images from LLNL's G3D global seismic tomography. This document describes the data that was provided to the CAS in support of production of the 'Earthquake' show. The CAS is located in Golden Gate Park, San Francisco and hosts over 1.6 million visitors. The Morrison Planetarium, within the CAS, is the largest all digital planetarium in the world. It features a 75-foot diameter spherical section projection screen tilted at a 30-degree angle. Six projectors cover the entire field of view and give a three-dimensional immersive experience. CAS shows strive to use scientifically accurate digital data in their productions. The show, entitled simply 'Earthquake', will debut on 26 May 2012. They are working on graphics and animations based on the same data sets for display on LLNL powerwalls and flat-screens as well as for public release.

  16. In-situ fluid-pressure measurements for earthquake prediction: An example from a deep well at Hi Vista, California

    USGS Publications Warehouse

    Healy, J.H.; Urban, T.C.

    1985-01-01

    Short-term earthquake prediction requires sensitive instruments for measuring the small anomalous changes in stress and strain that precede earthquakes. Instruments installed at or near the surface have proven too noisy for measuring anomalies of the size expected to occur, and it is now recognized that even to have the possibility of a reliable earthquake-prediction system will require instruments installed in drill holes at depths sufficient to reduce the background noise to a level below that of the expected premonitory signals. We are conducting experiments to determine the maximum signal-to-noise improvement that can be obtained in drill holes. In a 592 m well in the Mojave Desert near Hi Vista, California, we measured water-level changes with amplitudes greater than 10 cm, induced by earth tides. By removing the effects of barometric pressure and the stress related to earth tides, we have achieved a sensitivity to volumetric strain rates of 10^-9 to 10^-10 per day. Further improvement may be possible, and it appears that a successful earthquake-prediction capability may be achieved with an array of instruments installed in drill holes at depths of about 1 km, assuming that the premonitory strain signals are, in fact, present. © 1985 Birkhäuser Verlag.
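
    Removing barometric and tidal effects can be sketched as a linear regression of the water-level record on a barometric series and a tidal reference signal, with the residual retained for signal searching; the synthetic series and coefficients below are illustrative only.

      # Sketch: regress water level on barometric pressure and a tidal reference,
      # then keep the residual.  Series and coefficients are synthetic.
      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(0, 30, 1 / 24.0)                    # 30 days, hourly samples
      barometric = 5.0 * np.sin(2 * np.pi * t / 1.0) + rng.normal(0, 0.5, t.size)   # hPa
      tide = np.cos(2 * np.pi * t / 0.5175)             # M2 tidal constituent, ~12.42 h
      water_level = -0.4 * barometric + 8.0 * tide + rng.normal(0, 0.3, t.size)     # cm

      X = np.column_stack([np.ones_like(t), barometric, tide])
      coef, *_ = np.linalg.lstsq(X, water_level, rcond=None)
      residual = water_level - X @ coef
      print(coef, residual.std())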

  17. Earthquake Cycle Deformation at the Ballenas Transform, Gulf of California, Mexico, from InSAR and GPS Measurements

    NASA Astrophysics Data System (ADS)

    Plattner, Christina; Fattahi, Heresh; Malservisi, Rocco; Amelung, Falk; Verdecchia, Alessandro; Dixon, Timothy H.

    2015-05-01

    We study crustal deformation across the Ballenas marine channel, Gulf of California, Mexico using InSAR and campaign GPS data. Interseismic velocities are calculated by time-series analysis spanning five years of data. Displacements from the August 3rd 2009 Mw 6.9 earthquake are calculated by differencing the most recent observations before and after the event. To estimate the offset across the marine channel we calibrate the InSAR velocity and displacement fields using the corresponding GPS data. Unfortunately, the InSAR interseismic velocity field is affected by residual tropospheric delay. We interpret the GPS interseismic and the GPS and InSAR coseismic deformation data using dislocation modeling and compare the fault kinematics during these periods of the earthquake cycle.

  18. Borehole velocity measurements at five sites that recorded the Cape Mendocino, California earthquake of 25 April, 1992

    USGS Publications Warehouse

    Gibbs, James F.; Tinsley, John C., III; Boore, David M.

    2002-01-01

    The U.S. Geological Survey (USGS), as part of an ongoing program to acquire seismic velocity and geologic data at locations that recorded strong-ground motions during earthquakes, has investigated five sites in the Fortuna, California region (Figure 1). We selected drill sites at strong-motion stations that recorded high accelerations (Table 1) from the Cape Mendocino earthquake (M 7.0) of 25 April 1992 (Oppenheimer et al., 1993). The boreholes were drilled to a nominal depth of 95 meters (310 ft) and cased with schedule 80 PVC casing grouted in place at each location. S-wave and P-wave data were acquired at each site using a surface source and a borehole three-component geophone. This report contains the velocity models interpreted from the borehole data and gives reference to locations and peak accelerations at the selected strong-motion stations.

  19. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Labor Market Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that would cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of economic Super Sectors in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each Super Sector to each Instrumental Intensity level. The analysis concerns the direct effect of the scenario earthquake on economic sectors and provides a baseline for the indirect and interactive analysis of an input-output model of the regional economy. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by the North American Industry Classification System. According to the analysis results, nearly 225,000 business establishments, or 44 percent of all establishments, would experience Instrumental Intensities between VII (7) and X (10). This represents more than 4 million employees earning over $45 billion in quarterly payroll. Over 57,000 of these establishments, employing over 1 million employees earning over $10 billion in quarterly payroll, would experience Instrumental Intensities of IX (9) or X (10). Based upon absolute counts and percentages, the Trade, Transportation, and Utilities Super Sector and the Manufacturing Super Sector are estimated to have the greatest exposure and sensitivity, respectively. The Information and the Natural Resources and Mining Super Sectors are estimated to be the least impacted. Areas estimated to experience an Instrumental Intensity of X (10) account for approximately 3 percent of the region's labor market.
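
    The exposure and sensitivity calculations described above amount to counting labor-market totals by Instrumental Intensity bin and normalizing by sector totals. The sketch below illustrates that bookkeeping with a toy table; the column names and numbers are hypothetical and do not reproduce the CA EDD data.

```python
# Toy illustration of exposure (absolute counts per intensity bin) and sensitivity
# (each Super Sector's percentage of its own total per bin) from ZIP-code records.
import pandas as pd

zips = pd.DataFrame({
    "super_sector":   ["Manufacturing", "Manufacturing", "Trade", "Trade", "Information"],
    "intensity":      ["VII", "IX", "VIII", "X", "VI"],   # intensity assigned to each ZIP code
    "establishments": [120, 45, 300, 60, 25],
    "employees":      [4000, 1500, 9000, 2100, 800],
})

# Exposure: absolute counts by Super Sector and intensity level
exposure = zips.groupby(["super_sector", "intensity"])[["establishments", "employees"]].sum()

# Sensitivity: share of each sector's total exposed at each intensity level (percent)
sensitivity = exposure / exposure.groupby(level="super_sector").transform("sum") * 100
print(exposure, sensitivity, sep="\n\n")
```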

  20. Instrumental intensity distribution for the Hector Mine, California, and the Chi-Chi, Taiwan, earthquakes: Comparison of two methods

    USGS Publications Warehouse

    Sokolov, V.; Wald, D.J.

    2002-01-01

    We compare two methods of seismic-intensity estimation from ground-motion records for two recent strong earthquakes: the 1999 (M 7.1) Hector Mine, California, and the 1999 (M 7.6) Chi-Chi, Taiwan. The first technique utilizes the peak ground acceleration (PGA) and velocity (PGV), and it is used for rapid generation of the instrumental intensity map in California. The other method is based on the revised relationships between intensity and Fourier amplitude spectrum (FAS). The results of the two methods are compared with independently observed intensity data and with each other. For the case of the Hector Mine earthquake, the calculated intensities in general agree with the observed values. For the case of the Chi-Chi earthquake, the areas of maximum calculated intensity correspond to the areas of the greatest damage and highest number of fatalities. However, the FAS method produces higher-intensity values than those of the peak amplitude method. The specific features of ground-motion excitation during the large, shallow, thrust earthquake may be considered a reason for the discrepancy. The use of PGA and PGV is simple; however, the use of FAS provides a natural consideration of site amplification by means of generalized or site-specific spectral ratios. Because the calculation of seismic-intensity maps requires rapid processing of data from a large network, it is very practical to generate a "first-order" map from the recorded peak motions. Then, a "second-order" map may be compiled using an amplitude-spectra method on the basis of available records and numerical modeling of the site-dependent spectra for the regions of sparse station spacing.
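
    For the peak-amplitude method, instrumental intensity is obtained directly from regressions on PGA and PGV. The sketch below uses the commonly cited ShakeMap-style coefficients of Wald and others (1999) for PGA in cm/s^2 and PGV in cm/s over roughly intensity V-VIII; treat the coefficients as illustrative assumptions rather than the exact relations used in this comparison.

```python
# Illustrative peak-amplitude-to-intensity conversion (coefficients assumed from the
# Wald et al., 1999 regressions; not necessarily those used in this paper).
import numpy as np

def intensity_from_pga(pga_cms2):
    """Instrumental intensity from peak ground acceleration in cm/s^2."""
    return 3.66 * np.log10(pga_cms2) - 1.66

def intensity_from_pgv(pgv_cms):
    """Instrumental intensity from peak ground velocity in cm/s."""
    return 3.47 * np.log10(pgv_cms) + 2.35

print(intensity_from_pga(200.0))   # roughly 6.8 for ~0.2 g shaking
print(intensity_from_pgv(20.0))    # roughly 6.9 for 20 cm/s
```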

  1. Stress triggering of the 1994 M = 6.7 Northridge, California, earthquake by its predecessors.

    PubMed

    Stein, R S; King, G C; Lin, J

    1994-09-01

    A model of stress transfer implies that earthquakes in 1933 and 1952 increased the Coulomb stress toward failure at the site of the 1971 San Fernando earthquake. The 1971 earthquake in turn raised stress and produced aftershocks at the site of the 1987 Whittier Narrows and 1994 Northridge ruptures. The Northridge main shock raised stress in areas where its aftershocks and surface faulting occurred. Together, the earthquakes with moment magnitude M ≥ 6 near Los Angeles since 1933 have stressed parts of the Oak Ridge, Sierra Madre, Santa Monica Mountains, Elysian Park, and Newport-Inglewood faults by more than 1 bar. Although too small to cause earthquakes, these stress changes can trigger events if the crust is already near failure or advance future earthquake occurrence if it is not. PMID:17833817

  2. The magnitude 6.7 Northridge, California, earthquake of 17 January 1994.

    PubMed

    1994-10-21

    The most costly American earthquake since 1906 struck Los Angeles on 17 January 1994. The magnitude 6.7 Northridge earthquake resulted from more than 3 meters of reverse slip on a 15-kilometer-long south-dipping thrust fault that raised the Santa Susana mountains by as much as 70 centimeters. The fault appears to be truncated by the fault that broke in the 1971 San Fernando earthquake at a depth of 8 kilometers. Of these two events, the Northridge earthquake caused many times more damage, primarily because its causative fault is directly under the city. Many types of structures were damaged, but the fracture of welds in steel-frame buildings was the greatest surprise. The Northridge earthquake emphasizes the hazard posed to Los Angeles by concealed thrust faults and the potential for strong ground shaking in moderate earthquakes. PMID:17816681

  3. Southern California Earthquake Center - SCEC1: Final Report Summary Alternative Earthquake Source Characterization for the Los Angeles Region

    SciTech Connect

    Foxall, B

    2003-02-26

    The objective of my research has been to synthesize current understanding of the tectonics and faults of the Los Angeles Basin and surrounding region to quantify uncertainty in the characterization of earthquake sources used for geologically- and geodetically-based regional earthquake likelihood models. This work has focused on capturing epistemic uncertainty, i.e., uncertainty stemming from ignorance of the true characteristics of the active faults in the region and of the tectonic forces that drive them. In the present context, epistemic uncertainty has two components: first, the uncertainty in source geometrical and occurrence rate parameters deduced from the limited geological, geophysical, and geodetic observations available; and second, uncertainties that result from fundamentally different interpretations of regional tectonic deformation and faulting. Characterization of the large number of active and potentially active faults that need to be included in estimating earthquake occurrence likelihoods for the Los Angeles region requires synthesis and evaluation of large amounts of data and numerous interpretations. This was accomplished primarily through a series of carefully facilitated workshops, smaller meetings involving key researchers, and email groups. The workshops and meetings were made possible by the unique logistical and financial resources available through SCEC, and proved to be extremely effective forums for the exchange and critical debate of data and interpretations that are essential in constructing fully representative source models. The main product of this work is a complete source model that characterizes all known or potentially active faults in the greater Los Angeles region, which includes the continental borderland as far south as San Diego, the Ventura Basin, and the Santa Barbara Channel. The model constitutes a series of maps and representative cross-sections that define alternative fault geometries, a table containing fault geometrical and slip-rate parameters, including full uncertainty distributions, and a set of logic trees that define alternative source characterizations, particularly for sets of fault systems having inter-dependent geometries and kinematics resulting from potential intersection and interaction in the sub-surface. All of these products exist in a form suitable for input to earthquake likelihood and seismic hazard analyses. In addition, moment-balanced Poissonian earthquake rates for the alternative multi-segment characterizations of each fault system have been estimated. Finally, this work has served an important integrative function in that the exchange and debate of data, results, and ideas that it has engendered has helped to focus SCEC research over the past six years on key issues in tectonic deformation and faulting.

  4. Damage and restoration of geodetic infrastructure caused by the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Hodgkinson, Kathleen M.; Stein, Ross S.; Hudnut, Kenneth W.; Satalich, Jay; Richards, John H.

    1996-01-01

    We seek to restore the integrity of the geodetic network in the San Fernando, Simi, and Santa Clarita Valleys and in the northern Los Angeles Basin by remeasurement of the network and identification of BMs which experienced non-tectonic displacements associated with the Northridge earthquake. We then use the observed displacement of BMs in the network to portray or predict the permanent vertical and horizontal deformation associated with the 1994 Northridge earthquake throughout the area, including sites where we lack geodetic measurements. To accomplish this, we find the fault geometry and earthquake slip that are most compatible with the geodetic and independent seismic observations of the earthquake. We then use that fault model to predict the deformation everywhere at the earth's surface, both at locations where geodetic observations exist and also where they are absent. We compare displacements predicted for a large number of numerical models of the earthquake faulting to the coseismic displacements, treating the earthquake fault as a cut or discontinuity embedded in a stiff elastic solid. This comparison is made after non-tectonic deformation has been removed from the measured elevation changes. The fault slip produces strain in the medium and deforms the ground surface. The model compatible with seismic observations that best fits the geodetic data within their uncertainties is selected. The acceptable model fault bisects the mainshock focus, and the earthquake size (magnitude) is compatible with that measured seismically. Our fault model was used to identify geodetic monuments on engineered structures that were anomalously displaced by the earthquake.

  5. Earthquake-induced structures in sediments of Van Norman Lake, San Fernando, California

    USGS Publications Warehouse

    Sims, J.D.

    1973-01-01

    The 9 February 1971 earthquake in the San Fernando Valley damaged the Lower Van Norman Dam severely enough to warrant draining the reservoir. In March 1972 the sediment deposited on the reservoir floor was examined to determine whether the 1971 earthquake had induced sediment deformation and, if so, what types. A zone of deformational structures characterized by small-scale loads and slightly recumbent folds associated with the 1971 earthquake was discovered, in addition to two older zones of load structures. Each of the zones has been tentatively correlated with an historic earthquake.

  6. Current progress in using multiple electromagnetic indicators to determine location, time, and magnitude of earthquakes in California and Peru (Invited)

    NASA Astrophysics Data System (ADS)

    Bleier, T. E.; Dunson, C.; Roth, S.; Heraud, J.; Freund, F. T.; Dahlgren, R.; Bryant, N.; Bambery, R.; Lira, A.

    2010-12-01

    Since ultra-low frequency (ULF) magnetic anomalies were discovered prior to the 1989 Loma Prieta, Ca. M7.0 earthquake, QuakeFinder, a small R&D group based in Palo Alto, California, has systematically monitored ULF magnetic signals in California since 2000 with a network of 3-axis induction magnetometers. The raw magnetometer data were collected at 20-50 samples per second, with no preprocessing, in an attempt to collect an accurate time history of electromagnetic waveforms prior to, during, and after large earthquakes within 30 km of these sensors. Finally, in October 2007, the QuakeFinder team observed a series of strange magnetic pulsations at the Alum Rock, California site, 14 days prior to an M5.4 earthquake. The magnetic signals observed were relatively short, random pulsations, not continuous waveform signals like Pc1 or Pc3 micropulsations. The magnetic pulses have characteristic uni-polar shapes and 0.5 sec to 30 sec durations, much longer than lightning signals. In May 2010, very similar pulses were observed at Tacna, Peru, 13 days prior to an M6.2 earthquake, using a QuakeFinder station jointly operated under collaboration with the Catholic University in Lima, Peru (PUCP). More examples of these pulsations were sought, and a historical review of older California magnetic data revealed that fewer, but similar, pulsations occurred at the Hollister, Ca. site operated by UC Berkeley (e.g., the San Juan Bautista M5.1 earthquake on August 12, 1998). Further analysis of the direction of arrival of the magnetic pulses showed an interesting “azimuth clustering” in both the Alum Rock, Ca. and Tacna, Peru data. The complete time series of the Alum Rock data allowed the team to analyze subsequent changes observed in magnetometer “filter banks” (0.001 Hz to 10 Hz filter bands, similar to those used by Fraser-Smith in 1989), but this time using time-adjusted limits based on time of day, time of year, Kp, and site background noise. These site-customized limits showed similar increases in 30-minute averaged energy excursions, but the 30-minute averages had the disadvantage of reducing the signal-to-noise ratio relative to the individual pulse counting method. Among other electromagnetic monitoring methods, air conductivity instrumentation showed major changes in positive air-borne ions near the Alum Rock and Tacna sites, peaking during the 24 hours prior to the earthquake. GOES (geosynchronous) satellite infrared (IR) data showed that an unusual apparent “night time heating” occurred in an extended area within 40+ km of the Alum Rock site, and this IR signature peaked around the time of the magnetic pulse count peak. The combination of these three indicators (magnetic pulse counts, air conductivity, and IR night time heating) may be a first step toward determining the time (within 1-2 weeks), location (within 20-40 km), and magnitude (within +/- 1 unit of Richter magnitude) of earthquakes greater than M5.4.

  7. Earthquake-by-earthquake fold growth above the Puente Hills blind thrust fault, Los Angeles, California: Implications for fold kinematics and seismic hazard

    USGS Publications Warehouse

    Leon, L.A.; Christofferson, S.A.; Dolan, J.F.; Shaw, J.H.; Pratt, T.L.

    2007-01-01

    Boreholes and high-resolution seismic reflection data collected across the forelimb growth triangle above the central segment of the Puente Hills thrust fault (PHT) beneath Los Angeles, California, provide a detailed record of incremental fold growth during large earthquakes on this major blind thrust fault. These data document fold growth within a discrete kink band that narrows upward from ~460 m at the base of the Quaternary section (200-250 m depth) to a much narrower zone near the surface. Most (82% at 250 m depth) of the folding and uplift occurs within discrete kink bands, thereby enabling us to develop a paleoseismic history of the underlying blind thrust fault. The borehole data reveal that the youngest part of the growth triangle in the uppermost 20 m comprises three stratigraphically discrete growth intervals marked by southward thickening sedimentary strata that are separated by intervals in which sediments do not change thickness across the site. We interpret the intervals of growth as occurring after the formation of now-buried paleofold scarps during three large PHT earthquakes in the past 8 kyr. The intervening intervals of no growth record periods of structural quiescence and deposition at the regional, near-horizontal stream gradient at the study site. Minimum uplift in each of the scarp-forming events, which occurred at 0.2-2.2 ka (event Y), 3.0-6.3 ka (event X), and 6.6-8.1 ka (event W), ranged from ~1.1 to ~1.6 m, indicating minimum thrust displacements of ~2.5 to 4.5 m. Such large displacements are consistent with the occurrence of large-magnitude earthquakes (Mw > 7). Cumulative, minimum uplift in the past three events was 3.3 to 4.7 m, suggesting cumulative thrust displacement of ~7 to 10.5 m. These values yield a minimum Holocene slip rate for the PHT of ~0.9 to 1.6 mm/yr. The borehole and seismic reflection data demonstrate that dip within the kink band is acquired incrementally, such that older strata that have been deformed by more earthquakes dip more steeply than younger strata. Specifically, strata dip 0.4° at 4 m depth, 0.7° at 20 m depth, 8° at 90 m, 16° at 110 m, and 17° at 200 m. Moreover, structural restorations of the borehole data show that the locus of active folding (the anticlinal active axial surface) does not extend to the surface in exactly the same location from earthquake to earthquake. Rather, the axial surfaces migrate from earthquake to earthquake, reflecting a component of fold growth by kink band migration. The incremental acquisition of bed dip in the growth triangle may reflect some combination of fold growth by limb rotation in addition to kink band migration, possibly through a component of trishear or shear fault bend folding. Alternatively, the component of limb rotation may result from curved hinge fault bend folding, and/or the mechanical response of loosely consolidated granular sediments in the shallow subsurface to folding at depth. Copyright 2007 by the American Geophysical Union.

  8. Cross-sections and maps showing double-difference relocated earthquakes from 1984-2000 along the Hayward and Calaveras faults, California

    USGS Publications Warehouse

    Simpson, Robert W.; Graymer, Russell W.; Jachens, Robert C.; Ponce, David A.; Wentworth, Carl M.

    2004-01-01

    We present cross-section and map views of earthquakes that occurred from 1984 to 2000 in the vicinity of the Hayward and Calaveras faults in the San Francisco Bay region, California. These earthquakes came from a catalog of events relocated using the double-difference technique, which provides superior relative locations of nearby events. As a result, structures such as fault surfaces and alignments of events along these surfaces are more sharply defined than in previous catalogs.

  9. A possible explanation for deeper earthquakes under the Sacramento delta, California, in terms of its deep structure and thermal history

    NASA Astrophysics Data System (ADS)

    Mikhailov, V.; Parsons, T.; Simpson, R. W.; Timoshkina, E.; Williams, C.

    2003-04-01

    Hypocentral depths of earthquakes under the Sacramento River Delta region in Northern California extend to nearly 20 km, whereas in the Coast Ranges to the west they are less than 12-15 km. In order to better understand the origin of these deeper earthquakes and the potential earthquake hazard in the vicinity, we have used data from wells in the Sacramento Valley to construct and calibrate a model of tectonic subsidence and thermal evolution of this forearc basin. Our model assumes an oceanic basement with an initial thermal profile dependent on its age, subjected to a refrigeration effect caused by the subducting slab, whose age and subduction rate could change over time. The subsidence obtained for the Sacramento Delta area is close to that expected for a forearc basin underlain by normal oceanic lithosphere 150 Myr in age. The observed subsidence at the eastern and northern borders of the Sacramento Valley appears to be considerably less, corresponding to subsidence caused by the dynamics of the subduction zone alone. Thus, it appears that the lithosphere of the Sacramento Delta, being thinner and having undergone deeper long-term subsidence, differs from the lithosphere of other parts of the Sacramento Valley. Strength diagrams based on the thermal model show that even under very slow deformation the upper part of the Sacramento Delta crystalline crust (at least down to 20-22 km) can fail in brittle fashion, which is in agreement with the observed earthquake occurrence. The rheology of the mantle below the Moho also appears to be brittle. The greater width of the seismogenic zone in this area raises the possibility that, for segments of comparable length, earthquakes of somewhat greater magnitude might occur than in the Coast Ranges to the west.

  10. The Magnitude 6.7 Northridge, California, Earthquake of January 17, 1994

    NASA Technical Reports Server (NTRS)

    Donnellan, A.

    1994-01-01

    The most damaging earthquake in the United States since 1906 struck northern Los Angeles on January 17, 1994. The magnitude 6.7 Northridge earthquake produced a maximum of more than 3 meters of reverse (up-dip) slip on a south-dipping thrust fault rooted under the San Fernando Valley and projecting north under the Santa Susana Mountains.

  11. Earthquake locations and seismic velocity structure in the Los Angeles Area, Southern California

    SciTech Connect

    Piper, K.A.

    1985-01-01

    To determine, to the extent possible, the three-dimensional velocity structure of the Los Angeles basin and to make use of it in relocating earthquakes, a study was undertaken to invert for that velocity structure using existing earthquake data and incorporating a ray-tracing routine that allowed the program to be applied to the larger area of the current study.

  12. The 1906 earthquake at Palo Alto, California; an interview with Birge M. Clark

    USGS Publications Warehouse

    Spall, H.

    1981-01-01

    Mr. Birge M. Clark, an architect in Palo Alto, Calif., was living in Palo Alto at the time of the 1906 earthquake. His father-in-law was Professor S. D. Townley, well known for his 1939 compilation, with Maxwell W. Allen, of earthquakes along the Pacific coast from 1769 to 1928.

  13. Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Preseismic Observations

    USGS Publications Warehouse

    Johnston, Malcolm J. S., (Edited By)

    1993-01-01

    The October 17, 1989, Loma Prieta, Calif., Ms=7.1 earthquake provided the first opportunity in the history of fault monitoring in the United States to gather multidisciplinary preearthquake data in the near field of an M=7 earthquake. The data obtained include observations on seismicity, continuous strain, long-term ground displacement, magnetic field, and hydrology. The papers in this chapter describe these data, their implications for fault-failure mechanisms, the scale of prerupture nucleation, and earthquake prediction in general. Of the 10 papers presented here, about half identify preearthquake anomalies in the data, but some of these results are equivocal. Seismicity in the Loma Prieta region during the 20 years leading up to the earthquake was unremarkable. In retrospect, however, it is apparent that the principal southwest-dipping segment of the subsequent Loma Prieta rupture was virtually aseismic during this period. Two M=5 earthquakes did occur near Lake Elsman near the junction of the Sargent and San Andreas faults within 2.5 and 15 months of, and 10 km to the north of, the Loma Prieta epicenter. Although these earthquakes were not on the subsequent rupture plane of the Loma Prieta earthquake and other M=5 earthquakes occurred in the preceding 25 years, it is now generally accepted that these events were, in some way, foreshocks to the main event.

  14. Estimating Earthquake Hazards in the San Pedro Shelf Region, Southern California

    NASA Astrophysics Data System (ADS)

    Baher, S.; Fuis, G.; Normark, W. R.; Sliter, R.

    2003-12-01

    The San Pedro Shelf (SPS) region of the inner California Borderland offshore southern California poses a significant seismic hazard to the contiguous Los Angeles Area, as a consequence of late Cenozoic compressional reactivation of mid-Cenozoic extensional faults. The extent of the hazard, however, is poorly understood because of the complexity of fault geometries and uncertainties in earthquake locations. The major faults in the region include the Palos Verdes, THUMS Huntington Beach, and Newport-Inglewood fault zones. We report here the analysis and interpretation of wide-angle seismic-reflection and refraction data recorded as part of the Los Angeles Region Seismic Experiment line 1 (LARSE 1), multichannel seismic (MCS) reflection data obtained by the USGS (1998-2000), and industry borehole stratigraphy. The onshore-offshore velocity model, which is based on forward modeling of the refracted P-wave arrival times, is used to depth migrate the LARSE 1 section. Borehole stratigraphy allows correlation of the onshore and offshore velocity models because state regulations prevent collection of deep-penetration acoustic data nearshore (within 3 mi.). Our refraction study is an extension of the tomographic inversion of LARSE 1 data by ten Brink et al. (2000), who found high velocities (>6 km/sec) at ~3.5 km depth from the Catalina Fault (CF) to the SPS. We find these velocities shallower (around 2 km depth) beneath the Catalina Ridge (CR) and SPS, but at a depth of 2.5-3.0 km elsewhere in the study region. This change in velocity structure can provide additional constraints on the tectonic processes of this region. The structural horizons observed in the LARSE 1 reflection data are tied to adjacent MCS lines. We find localized folding and faulting at depth (~2 km) southwest of the CR and on the SPS slope. Quasi-laminar beds, possibly of pelagic origin, follow the contours of earlier folded (wavelength ~1 km) and faulted Cenozoic sedimentary and volcanic rocks. Depth to basement, where observed, is approximately 1.7 km beneath the base of the SPS slope and shallows to approximately 1 km at the top of the SPS. This corresponds to the results obtained by Fisher et al. (in press) and Wright (1991). The pattern of faulting changes from southwest to northeast. West of the CF, faulting is confined to the pelagic and older units. Closely spaced faulting (~0.75 km) is prominent between the CF and Avalon Knoll (AV), while generally more widely spaced faults (~5 km) with localized fracture zones are observed from AV to the SPS. The SPS is dominated by major faults such as the Cabrillo, Palos Verdes, THUMS Huntington Beach, and Newport-Inglewood fault zones. The Cabrillo and Palos Verdes faults are major stratigraphic discontinuities with laminar beds (~30 cm) adjacent to gently folded sediments (wavelength ~1.5 km). There is evidence of recent displacement on the Cabrillo fault.

  15. Ground-water-level monitoring for earthquake prediction; a progress report based on data collected in Southern California, 1976-79

    USGS Publications Warehouse

    Moyle, W.R., Jr.

    1980-01-01

    The U.S. Geological Survey is conducting a research program to determine if groundwater-level measurements can be used for earthquake prediction. Earlier studies suggest that water levels in wells may be responsive to small strains on the order of 10^-8 to 10^-10 (dimensionless). Water-level data being collected in the area of the southern California uplift show response to earthquakes and other natural and manmade effects. The data are presently (1979) being made ready for computer analysis. The completed analysis may indicate the presence of precursory earthquake information. (USGS)

  16. Source Characteristics of Small Earthquakes at the Northwest Geysers Geothermal Field, California

    NASA Astrophysics Data System (ADS)

    Viegas, G.; Hutchings, L. J.

    2010-12-01

    We investigate the source parameters of small earthquakes (<=M4) at the Northwest Geysers, CA, near an injection well, before and during water injection. Our objective is to understand the relation among injection, production, and source mechanisms of earthquakes. To determine the earthquake source parameters, such as fault radius, stress drop, seismic moment, and radiated energy, we use two techniques: the Empirical Green's Function (EGF) method and the NetMoment (NM) method. The EGF method is very efficient at extracting source information of micro-earthquakes in a heavily attenuating medium such as The Geysers, as it empirically corrects for attenuation and site effects. However, the number of earthquakes to which it can be applied is limited by the availability of an EGF earthquake, which acts as the media transfer function. The NM method is a good method to apply when several stations record the same event. It uses a simultaneous inversion for attenuation and source properties for a single event and all stations. NM runs automatically and works well with large datasets such as the one at The Geysers, with more than 19,000 small earthquakes reported in 2006. However, the method potentially carries large uncertainties in the attenuation correction, as there is a trade-off between attenuation and source corner frequency. In this study, we use the source parameter estimates obtained with the EGF approach for a small number of earthquakes to validate and further constrain the source parameters of a larger dataset obtained with the NM method. Our EGF results indicate that earthquakes at the Northwest Geysers have radii comparable to or smaller than, and stress drops (median of 31.4 MPa) comparable to or higher than, those of deeper, larger (>M5.5), naturally occurring tectonic earthquakes (where stress drop ranges from 17 to 55 MPa). We do not observe a systematic change of earthquake source properties with time (before and during injection), but the number of earthquakes analyzed before injection with the EGF method is very limited and not statistically significant. We expect to improve the statistical significance once we add results using the NM method. The source information for the Geysers earthquakes can be used to estimate the total area of the fractured surface of the reservoir in which fluid will flow, has implications for the understanding of the physics of faulting, and can help in the assessment of the seismic potential in areas of ongoing geothermal exploration.
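
    Both methods ultimately reduce to standard source-parameter relations: a corner frequency gives a source radius, and moment plus radius give a stress drop. The sketch below shows that chain for a Brune-type circular source; the shear-wave speed, corner frequency, and magnitude are assumed example values, not results from this study.

```python
# Illustrative Brune-type source-parameter calculation (assumed inputs and constants).
def brune_source(Mw, fc_hz, beta_ms=3500.0, k=0.372):
    """Return (seismic moment [N m], source radius [m], stress drop [MPa])."""
    M0 = 10 ** (1.5 * Mw + 9.1)           # Hanks & Kanamori moment-magnitude relation
    r = k * beta_ms / fc_hz               # Brune (1970) corner-frequency relation, k = 2.34/(2*pi)
    stress_drop_mpa = 7.0 * M0 / (16.0 * r ** 3) / 1.0e6
    return M0, r, stress_drop_mpa

# A hypothetical M2 event with a 50 Hz corner frequency gives roughly 31 MPa,
# comparable in size to the median quoted in the abstract (coincidental here).
print(brune_source(Mw=2.0, fc_hz=50.0))
```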

  17. Liquefaction and other ground failures in Imperial County, California, from the April 4, 2010, El Mayor-Cucapah earthquake

    USGS Publications Warehouse

    McCrink, Timothy P.; Pridmore, Cynthia L.; Tinsley, John C., III; Sickler, Robert R.; Brandenberg, Scott J.; Stewart, Jonathan P.

    2011-01-01

    The Colorado River Delta region of southern Imperial Valley, California, and Mexicali Valley, Baja California, is a tectonically dynamic area characterized by numerous active faults and frequent large seismic events. Significant earthquakes that have been accompanied by surface fault rupture and/or soil liquefaction occurred in this region in 1892 (M7.1), 1915 (M6.3; M7.1), 1930 (M5.7), 1940 (M6.9), 1950 (M5.4), 1957 (M5.2), 1968 (M6.5), 1979 (M6.4), 1980 (M6.1), 1981 (M5.8), and 1987 (M6.2; M6.8). Following this trend, the M7.2 El Mayor-Cucapah earthquake of April 4, 2010, ruptured approximately 120 kilometers along several known faults in Baja California. Liquefaction caused by the M7.2 El Mayor-Cucapah earthquake was widespread throughout the southern Imperial Valley but concentrated in the southwest corner of the valley, southwest of the city centers of Calexico and El Centro where ground motions were highest. Although there are few strong motion recordings in the very western part of the area, the recordings that do exist indicate that ground motions were on the order of 0.3 to 0.6 g where the majority of liquefaction occurrences were found. More distant liquefaction occurrences, at Fites Road southwest of Brawley and along Rosita Canal northwest of Holtville, were triggered where ground motions were about 0.2 g. Damage to roads was associated mainly with liquefaction of sandy river deposits beneath bridge approach fills, and in some cases liquefaction within the fills. Liquefaction damage to canal and drain levees was not always accompanied by vented sand, but the nature of the damage leads the authors to infer that liquefaction was involved in the majority of observed cases. Liquefaction-related damage to several public facilities - Calexico Waste Water Treatment Plant, Fig Lagoon levee system, and Sunbeam Lake Dam in particular - appears to be extensive. The cost to repair these facilities to prevent future liquefaction damage will likely be prohibitive. As such, it is likely that liquefaction will recur at these facilities during the next large earthquake in this area.

  18. Using Logistic Regression to Predict the Probability of Debris Flows in Areas Burned by Wildfires, Southern California, 2003-2006

    USGS Publications Warehouse

    Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.; Michael, John A.; Helsel, Dennis R.

    2008-01-01

    Logistic regression was used to develop statistical models that can be used to predict the probability of debris flows in areas recently burned by wildfires by using data from 14 wildfires that burned in southern California during 2003-2006. Twenty-eight independent variables describing the basin morphology, burn severity, rainfall, and soil properties of 306 drainage basins located within those burned areas were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows soon after the 2003 to 2006 fires were delineated from data in the National Elevation Dataset using a geographic information system; (2) Data describing the basin morphology, burn severity, rainfall, and soil properties were compiled for each basin. These data were then input to a statistics software package for analysis using logistic regression; and (3) Relations between the occurrence or absence of debris flows and the basin morphology, burn severity, rainfall, and soil properties were evaluated, and five multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combinations produced the most effective models, and the multivariate models that best predicted the occurrence of debris flows were identified. Percentage of high burn severity and 3-hour peak rainfall intensity were significant variables in all models. Soil organic matter content and soil clay content were significant variables in all models except Model 5. Soil slope was a significant variable in all models except Model 4. The most suitable model can be selected from these five models on the basis of the availability of independent variables in the particular area of interest and field checking of probability maps. The multivariate logistic regression models can be entered into a geographic information system, and maps showing the probability of debris flows can be constructed in recently burned areas of southern California. This study demonstrates that logistic regression is a valuable tool for developing models that predict the probability of debris flows occurring in recently burned landscapes.
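
    The modeling step itself is ordinary multivariate logistic regression on basin attributes. The sketch below illustrates the workflow on synthetic data; the column names, the synthetic outcome, and the use of scikit-learn are assumptions for illustration and are not the published models or coefficients.

```python
# Illustrative logistic-regression workflow on a synthetic basin table.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 306                                    # number of burned basins in the study
basins = pd.DataFrame({
    "pct_high_burn_severity": rng.uniform(0, 100, n),
    "peak_3hr_rain_mmhr":     rng.uniform(1, 40, n),
    "soil_organic_pct":       rng.uniform(0, 10, n),
    "soil_clay_pct":          rng.uniform(5, 40, n),
    "soil_slope_deg":         rng.uniform(5, 45, n),
})
# Synthetic outcome loosely tied to burn severity and rainfall (illustration only)
logit = 0.03 * basins["pct_high_burn_severity"] + 0.1 * basins["peak_3hr_rain_mmhr"] - 4.0
basins["debris_flow"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X, y = basins.drop(columns="debris_flow"), basins["debris_flow"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(dict(zip(X.columns, model.coef_[0])))                       # fitted coefficients
print("P(debris flow), first held-out basins:", model.predict_proba(X_test)[:3, 1])
```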

  19. Comments on baseline correction of digital strong-motion data: Examples from the 1999 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Boore, D.M.; Stephens, C.D.; Joyner, W.B.

    2002-01-01

    Residual displacements for large earthquakes can sometimes be determined from recordings on modern digital instruments, but baseline offsets of unknown origin make it difficult in many cases to do so. To recover the residual displacement, we suggest tailoring a correction scheme by studying the character of the velocity obtained by integration of zeroth-order-corrected acceleration and then seeing if the residual displacements are stable when the various parameters in the particular correction scheme are varied. For many seismological and engineering purposes, however, the residual displacements are of lesser importance than ground motions at periods less than about 20 sec. These ground motions are often recoverable with simple baseline correction and low-cut filtering. In this largely empirical study, we illustrate the consequences of various correction schemes, drawing primarily from digital recordings of the 1999 Hector Mine, California, earthquake. We show that with simple processing the displacement waveforms for this event are very similar for stations separated by as much as 20 km. We also show that a strong pulse on the transverse component was radiated from the Hector Mine earthquake and propagated with little distortion to distances exceeding 170 km; this pulse leads to large response spectral amplitudes around 10 sec.
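
    A minimal version of the kind of processing discussed (simple baseline correction of the acceleration followed by low-cut filtering and double integration) is sketched below; the filter order, corner frequency, and synthetic record are illustrative assumptions, not the schemes evaluated in the paper.

```python
# Illustrative baseline correction and low-cut filtering of a synthetic accelerogram.
import numpy as np
from scipy.signal import butter, filtfilt, detrend
from scipy.integrate import cumulative_trapezoid

def simple_process(acc, dt, corner_hz=0.05):
    """acc: zeroth-order-corrected acceleration (m/s^2); returns displacement (m)."""
    acc = detrend(acc, type="linear")                            # crude baseline correction
    b, a = butter(4, corner_hz, btype="highpass", fs=1.0 / dt)   # low-cut (high-pass) filter
    acc = filtfilt(b, a, acc)
    vel = cumulative_trapezoid(acc, dx=dt, initial=0.0)
    disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)
    return disp

# Synthetic test: a short pulse plus a small spurious baseline offset of unknown origin
dt = 0.01
t = np.arange(0.0, 120.0, dt)
acc = np.exp(-((t - 20.0) / 2.0) ** 2) * np.sin(2 * np.pi * 0.5 * (t - 20.0))
acc[t > 40.0] += 0.002                                           # the troublesome offset
print(simple_process(acc, dt)[-5:])                              # late-time displacement stays bounded
```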

  20. Geodetic slip rate for the eastern California shear zone and the recurrence time of Mojave desert earthquakes

    USGS Publications Warehouse

    Sauber, J.; Thatcher, W.; Solomon, S.C.; Lisowski, M.

    1994-01-01

    Where the San Andreas fault passes along the southwestern margin of the Mojave desert, it exhibits a large change in trend, and the deformation associated with the Pacific/North American plate boundary is distributed broadly over a complex shear zone. The importance of understanding the partitioning of strain across this region, especially to the east of the Mojave segment of the San Andreas in a region known as the eastern California shear zone (ECSZ), was highlighted by the occurrence (on 28 June 1992) of the magnitude 7.3 Landers earthquake in this zone. Here we use geodetic observations in the central Mojave desert to obtain new estimates for the rate and distribution of strain across a segment of the ECSZ, and to determine a coseismic strain drop of ~770 µrad for the Landers earthquake. From these results we infer a strain energy recharge time of 3,500-5,000 yr for a Landers-type earthquake and a slip rate of ~12 mm yr^-1 across the faults of the central Mojave. The latter estimate implies that a greater fraction of plate motion than heretofore inferred from geodetic data is accommodated across the ECSZ.
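
    The recharge-time estimate is a simple ratio of coseismic strain drop to interseismic strain rate. The arithmetic below back-calculates an implied strain rate from the numbers quoted in the abstract (the rate itself is not stated there), purely as a consistency check.

```python
# Back-of-the-envelope consistency check using only numbers quoted in the abstract.
strain_drop = 770e-6                   # rad, coseismic strain drop of the Landers event
strain_rate = 770e-6 / 4250.0          # rad/yr (~0.18 microradian/yr), implied by the quoted answer
recharge_time_yr = strain_drop / strain_rate
print(recharge_time_yr)                # ~4,250 yr, inside the quoted 3,500-5,000 yr range
```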

  1. A search for evidence of secondary static stress triggering during the 1992 Mw7.3 Landers, California, earthquake sequence

    NASA Astrophysics Data System (ADS)

    Meier, M.-A.; Werner, M. J.; Woessner, J.; Wiemer, S.

    2014-04-01

    Secondary triggering of aftershocks is widely observed and often ascribed to secondary static stress transfer. However, small to moderate earthquakes are generally disregarded in estimates of Coulomb stress changes (ΔCFS), either because of source parameter uncertainties or a perceived lack of importance. We use recently published high-quality focal mechanisms and hypocenters to reassess the role of small to moderate earthquakes for static stress triggering of aftershocks during the 1992 Mw7.3 Landers, California, earthquake sequence. We compare the ΔCFS imparted by aftershocks (2≤M≤6) onto subsequent aftershocks with the total ΔCFS induced by the M>6 main shocks. We find that incremental stress changes between aftershock pairs are potentially more often positive than expected over intermediate distances. Cumulative aftershock stress changes are not reliable for receivers with nearby aftershock stress sources because we exclude unrealistic aftershock stress shadows that result from uniform slip models. Nonetheless, 27% of aftershocks receive greater positive stress from aftershocks than from the main shocks. Overall, 85% of aftershocks are encouraged by the main shocks, while adding secondary stress encourages only 79%. We infer that source parameter uncertainties of small aftershocks remain too large to convincingly demonstrate (or rule out) that secondary stress transfer induces aftershocks. An important exception concerns aftershocks in main shock stress shadows: well-resolved secondary stress from detected aftershocks rarely compensates negative main shock stress; these aftershocks require a different triggering mechanism.
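
    The underlying quantity is the Coulomb failure stress change resolved onto a receiver fault, dCFS = dtau + mu' * dsigma_n. The sketch below shows a generic version of that resolution from a stress-tensor change; the tensor, receiver geometry, and effective friction are invented inputs, not values from the Landers analysis.

```python
# Generic Coulomb failure stress change on a receiver fault (illustrative inputs).
import numpy as np

def delta_cfs(dsigma, normal, slip_dir, mu_eff=0.4):
    """dCFS = dtau + mu' * dsigma_n (tension positive; unclamping encourages failure)."""
    normal = normal / np.linalg.norm(normal)
    slip_dir = slip_dir / np.linalg.norm(slip_dir)
    traction = dsigma @ normal            # traction change acting on the receiver plane
    d_sigma_n = traction @ normal         # normal stress change (positive = unclamping)
    d_tau = traction @ slip_dir           # shear stress change resolved onto the slip direction
    return d_tau + mu_eff * d_sigma_n

# Example: a 0.1 MPa shear stress change resolved onto a vertical N-S receiver plane
dsigma = np.array([[0.0,   0.1e6, 0.0],
                   [0.1e6, 0.0,   0.0],
                   [0.0,   0.0,   0.0]])  # Pa, in an (east, north, up) frame
print(delta_cfs(dsigma,
                normal=np.array([1.0, 0.0, 0.0]),
                slip_dir=np.array([0.0, 1.0, 0.0])) / 1e6, "MPa")
```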

  2. Offshore and onshore liquefaction at Moss Landing spit, central California - result of the October 17, 1989, Loma Prieta earthquake

    SciTech Connect

    Greene, H.G.; Chase, T.E.; Hicks, K.R.; Gardner-Taggart, J.; Ledbetter, M.T.; Barminski, R.; Baxter, C.

    1991-09-01

    As a result of the October 17, 1989, Loma Prieta (Santa Cruz Mountains, California) earthquake, liquefaction of the fluvial, estuarine, eolian, and beach sediments under a sand spit destroyed the Moss Landing Marine Laboratories and damaged other structures and utilities. Initial studies suggested that the liquefaction was a local phenomenon. More detailed offshore investigations, however, indicate that it occurred over a large area (maximum 8 km^2) during or shortly after the earthquake with movement of unconsolidated sediment toward and into the head of Monterey submarine canyon. This conclusion is supported by side-scan sonographs, high-resolution seismic-reflection and bathymetric profiles, onshore and sea-floor photographs, and underwater video tapes. Many distinct lobate features were identified on the shallow shelf. These features almost certainly were the result of the October 17 earthquake; they were subsequently destroyed by winter storms. In addition, fresh slump scars and recently dislodged mud debris were found on the upper, southern wall of Monterey submarine canyon.

  3. Assessment of the relative ratio of correlated and uncorrelated waiting times in the Southern California earthquakes catalogue

    NASA Astrophysics Data System (ADS)

    Matcharashvili, Teimuraz; Chelidze, Tamaz; Zhukova, Natalia

    2015-09-01

    In the present study, we investigated the interevent time interval distribution of earthquakes in Southern California. We analyzed and compared datasets of waiting times between consecutive earthquakes and time structure-distorted datasets. The aim of this study was to determine the proportion of waiting time values in the original catalogue that can be regarded as statistically distinguishable or indistinguishable from the baseline time interval datasets where the original structure of the temporal distribution of earthquakes was disrupted. We compiled two types of time structure-distorted baseline sequences, which comprised mean values of: (a) shuffled original interevent data and (b) interevent time data from time-randomized catalogues. To compare the dynamical characteristics of the original and time structure-distorted baseline sequences, we used several data analysis methods such as power spectrum regression, detrended fluctuation, and multifractal detrended fluctuation analysis. We also tested the nonlinear structure of the original and baseline sequences using the magnitude and sign scaling analysis method. We calculated the Z-score in order to assess whether the interevent time values in the original dataset shared statistical similarity or dissimilarity with the time structure-distorted baseline interevent data sequence. We compared the interevent time values in the original dataset with the mean baseline interevent times computed for two types of time structure-distorted sequences. The results showed that about 30% of the original interevent times were statistically indistinguishable from the mean of the shuffled dataset and 10% from the mean of the time structure-distorted baseline interevent dataset. We performed similar analyses for other catalogues obtained from different parts of the world. According to the results of this analysis, the proportion of events in the original catalogues that were indistinguishable from sequences with disturbed time structure did not contradict the results obtained for the Southern California catalogue.
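
    The core comparison is between observed interevent times and surrogate sequences whose temporal structure has been destroyed, summarized with a Z-score. The sketch below illustrates the idea on a synthetic catalogue; the |Z| < 2 cutoff and all numbers are assumptions for illustration, not the study's actual procedure.

```python
# Schematic surrogate-data comparison of interevent times (synthetic catalogue).
import numpy as np

rng = np.random.default_rng(1)
event_times = np.sort(rng.uniform(0.0, 1000.0, size=500))   # stand-in catalogue (days)
dt_obs = np.diff(event_times)                                # observed interevent times

# Baseline: many shuffles of the interevent times destroy the temporal structure
n_surr = 200
surr = np.array([rng.permutation(dt_obs) for _ in range(n_surr)])
mean_surr, std_surr = surr.mean(axis=0), surr.std(axis=0)

z = (dt_obs - mean_surr) / std_surr
frac_similar = np.mean(np.abs(z) < 2.0)                      # |Z| < 2 taken as "similar"
print(f"{frac_similar:.0%} of interevent times indistinguishable from the shuffled baseline")
```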

  4. Postearthquake relaxation and aftershock accumulation linearly related after the 2003 M 6.5 Chengkung, Taiwan, and the 2004 M 6.0 Parkfield, California, earthquakes

    USGS Publications Warehouse

    Savage, J.C.; Yu, S.-B.

    2007-01-01

    We treat both the number of earthquakes and the deformation following a mainshock as the superposition of a steady background accumulation and the post-earthquake process. The preseismic displacement and seismicity rates r_u and r_E are used as estimates of the background rates. Let t be the time after the mainshock, u(t) + u_0 the postseismic displacement less the background accumulation r_u t, and ΔN(t) the observed cumulative number of postseismic earthquakes less the background accumulation r_E t. For the first 160 days (duration limited by the occurrence of another nearby earthquake) following the Chengkung (M 6.5, 10 December 2003, eastern Taiwan) and the first 560 days following the Parkfield (M 6.0, 28 September 2004, central California) earthquakes u(t) + u_0 is a linear function of ΔN(t). The aftershock accumulation ΔN(t) for both earthquakes is described by the modified Omori Law dΔN/dt ∝ (1 + t/τ)^-p with p = 0.96 and τ = 0.03 days. Although the Chengkung earthquake involved sinistral, reverse slip on a moderately dipping fault and the Parkfield earthquake right-lateral slip on a near-vertical fault, the earthquakes share an unusual feature: both occurred on faults exhibiting interseismic fault creep at the surface. The source of the observed postseismic deformation appears to be afterslip on the coseismic rupture. The linear relation between u(t) + u_0 and ΔN(t) suggests that this afterslip also generates the aftershocks. The linear relation between u(t) + u_0 and ΔN(t) obtains after neither the 1999 M 7.1 Hector Mine (southern California) nor the 1999 M 7.6 Chi-Chi (central Taiwan) earthquakes, neither of which occurred on fault segments exhibiting fault creep.
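
    For reference, the modified Omori law quoted above integrates to a closed-form cumulative count. The sketch below evaluates it with the quoted p and τ; the productivity constant K is an arbitrary illustrative choice.

```python
# Cumulative aftershock count for a modified Omori rate dN/dt = K*(1 + t/tau)^(-p).
import numpy as np

def omori_cumulative(t_days, K=100.0, p=0.96, tau=0.03):
    """Closed-form integral of the modified Omori rate from 0 to t (valid for p != 1)."""
    return K * tau / (1.0 - p) * ((1.0 + t_days / tau) ** (1.0 - p) - 1.0)

t = np.array([1.0, 10.0, 160.0, 560.0])
dN = omori_cumulative(t)
print(dN)                          # grows roughly logarithmically, like afterslip u(t) + u_0
print(np.diff(dN) / np.diff(t))    # decaying daily aftershock rate
```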

  5. Chapter B. The Loma Prieta, California, Earthquake of October 17, 1989 - Liquefaction

    USGS Publications Warehouse

    Holzer, Thomas L.

    1998-01-01

    The 1989 Loma Prieta earthquake both reconfirmed the vulnerability of areas in the San Francisco-Monterey Bay region to liquefaction and provided an opportunity to test methodologies for predicting liquefaction that have been developed since the mid-1970's. This vulnerability is documented in the chapter edited by O'Rourke and by the investigators in this chapter who describe case histories of liquefaction damage and warn us about the potential for even greater damage from liquefaction if an earthquake similar to the 1989 Loma Prieta earthquake, but located closer to their study sites, were to occur.

  6. Probable slow slips in the mid-crust of Hsinchu, northwestern Taiwan: Temporal correlation between normal faulting earthquakes and relative uplift

    NASA Astrophysics Data System (ADS)

    Pu, H. C.; Lin, C. H.

    2016-05-01

    To investigate the seismic behavior of crustal deformation, we deployed a dense seismic network in the Hsinchu area of northwestern Taiwan during the period between 2004 and 2006. Based on the abundant local micro-earthquakes recorded by this network, we successfully determined 274 focal mechanisms among ∼1300 seismic events. Interestingly, the dominant seismic energy release repeatedly alternated between strike-slip and normal faulting mechanisms within those two years. The strike-slip and normal faulting earthquakes were largely accompanied, respectively, by surface slip along N60°E and by relative uplift in the continuous GPS data. These phenomena probably resulted from slow uplift in the mid-crust beneath the northwestern Taiwan area. When deep slow uplift was active below 10 km depth along either the boundary fault or a blind fault, the push of the uplifting material would simultaneously produce normal faulting earthquakes at shallow depths (0-10 km) and slight surface uplift. When the deep slow uplift stopped, strike-slip faulting earthquakes again dominated, as usual, owing to the strong horizontal plate convergence across Taiwan. Because the normal faulting earthquakes repeatedly dominated every 6 or 7 months between 2004 and 2006, we conclude that slow slip events in the mid-crust frequently released accumulated tectonic stress in the Hsinchu area.

  7. The M7 October 21, 1868 Hayward Earthquake, Northern California-140 Years Later

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Boatwright, J.; Lienkaemper, J. J.; Schwartz, D. P.; Garcia, S.

    2007-12-01

    October 21, 2008 marks the 140th anniversary of the M7 1868 Hayward earthquake. This large earthquake, which occurred slightly before 8 AM, caused extensive damage to the San Francisco Bay Area and remains the nation's 12th most lethal earthquake. Property loss was extensive and about 30 people were killed. This earthquake culminated a decade-long series of earthquakes in the Bay Area which started with an M~6 earthquake in the southern Peninsula in 1856, followed by a series of four M5.8 to M6.1 sized earthquakes along the northern Calaveras fault, and ended with a M~6.5 earthquake in the Santa Cruz Mountains in 1865. Despite this flurry of quakes, the shaking from the 1868 earthquake was the strongest that the new towns and growing cities of the Bay Area had ever experienced. The effect on the brick buildings of the time was devastating: walls collapsed in San Francisco, Oakland, and San Jose, and buildings cracked as far away as Napa, Santa Rosa, and Hollister. The area that was strongly shaken (at Modified Mercalli Intensity VII or higher) encompassed about 2,300 km^2. Aftershocks continued into November 1868. Surface cracking of the ground along the southern end of the Hayward Fault was traced from Warm Springs in Fremont northward 32 km to San Leandro. As Lawson (1908) reports, "the evidence to the northward of San Leandro is not very satisfactory. The country was then unsettled, and the information consisted of reports of cowboys riding on the range". Analysis of historical triangulation data suggests that the fault moved as far north as Berkeley, and from these data the average slip along the fault is inferred to be about 1.9 ± 0.4 meters. The paleoseismic record from the southern end of the Hayward Fault provides evidence for 10 earthquakes before 1868. The average interval between these earthquakes is 170 ± 80 years, but the last five earthquakes have had an average interval of only 140 ± 50 years. The 1868 Hayward earthquake and more recent analogs such as the 1995 Kobe earthquake are stark reminders of the awesome energy waiting to be released from below the east side of the San Francisco Bay along the Hayward Fault. The population at risk from a Hayward Fault earthquake is now 100 times larger than in 1868. The infrastructure in the San Francisco Bay Area has been tested only by the relatively remote 1989 M6.9 Loma Prieta earthquake. To help focus public attention on these hazards, the 1868 Hayward Earthquake Alliance has been formed, consisting of public and private sector agencies and corporations (see their website www.1868alliance.org). The Alliance is planning a series of activities leading up to the 140th anniversary on October 21, 2008. These include public forums, conferences, commemoration events, publications, websites, videos, and public service announcements.

  8. Triggered surface slips in the Coachella Valley area associated with the 1992 Joshua Tree and Landers, California, Earthquakes

    USGS Publications Warehouse

    Rymer, M.J.

    2000-01-01

    The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (the two later events). About two months after the Joshua Tree earthquake, the Landers earthquake triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.

  9. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed-exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano within the next year is 0.00028.
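
    The conditional-probability calculation described above can be sketched with a two-component mixed-exponential repose-time model. The mixture weight, mean repose times, and elapsed time in the sketch below are illustrative assumptions rather than the fitted values of Nathenson and others; only the form of the calculation follows the record.

```python
# Minimal sketch of a conditional eruption probability under a mixed-exponential
# repose-time model. All parameter values below are illustrative assumptions,
# not the fitted Medicine Lake values.
import math

w, tau_short, tau_long = 0.7, 200.0, 4000.0   # assumed mixture weight and mean reposes (yr)
t_since_last = 950.0                          # assumed years since the last eruption
dt = 1.0                                      # forecast window (yr)

def survival(t):
    """Probability that the repose time exceeds t under the mixed-exponential model."""
    return w * math.exp(-t / tau_short) + (1.0 - w) * math.exp(-t / tau_long)

# Conditional probability of an eruption within dt, given quiescence of t_since_last.
p_next = 1.0 - survival(t_since_last + dt) / survival(t_since_last)
print(f"Conditional probability of an eruption in the next {dt:.0f} yr: {p_next:.5f}")
```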

  10. Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence

    USGS Publications Warehouse

    Beeler, N.M.; Lockner, D.A.

    2003-01-01

    We provide an explanation of why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate, which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ~1 year.
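
    The stress-threshold (Coulomb) end-member discussed above can be illustrated with a minimal simulation: failure occurs whenever constant-rate loading plus a small oscillating stress first reaches a fixed strength, and the oscillation phase at each failure is recorded. The stressing rate, oscillation amplitude, and strength drop below are arbitrary illustrative values (the amplitude is deliberately exaggerated relative to real tidal stresses so the clustering is visible); the nucleation-time physics of the high-frequency regime is not modeled here.

```python
# Minimal sketch of a Coulomb threshold-failure population driven by constant
# loading plus a sinusoidal stress. All numerical values are assumptions chosen
# for illustration, not the laboratory parameters of the study.
import numpy as np

rate = 1.0           # tectonic stressing rate (stress units per day), assumed
amp = 0.5            # oscillation amplitude, deliberately exaggerated for visibility
period = 1.0         # oscillation period (days), assumed
strength_drop = 2.3  # stress released and re-accumulated between failures, assumed

t = np.linspace(0.0, 400.0, 400_000)
stress = rate * t + amp * np.sin(2.0 * np.pi * t / period)

failure_phases = []
threshold = strength_drop
for ti, si in zip(t, stress):
    if si >= threshold:                                    # threshold (Coulomb-style) failure
        failure_phases.append((ti % period) / period * 360.0)
        threshold += strength_drop                         # instantaneous stress drop, then reload

counts, _ = np.histogram(failure_phases, bins=8, range=(0.0, 360.0))
print("Events per 45-degree phase bin:", counts)
# Clustering in particular phase bins reflects correlation with the oscillation;
# in the rate-dependent nucleation regime (period << tn) such clustering weakens.
```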

  11. Lower crustal earthquake swarms beneath Mammoth Mountain, California - evidence for the magmatic roots to the Mammoth Mountain mafic volcanic field?

    NASA Astrophysics Data System (ADS)

    Hill, D. P.; Shelly, D. R.

    2010-12-01

    Mammoth Mountain is a cluster of dacitic domes erupted ~68 ka. It stands on the SW topographic rim of Long Valley caldera in eastern CA. Structurally, it is outboard of the caldera ring-fracture system and its magmatic system is genetically distinct from that of the caldera. It resides within a field of mafic (basaltic) vents that erupted between 190 and 8 ka. A series of phreatic explosions from the north flank of the mountain some 700 ybp attests to the infusion of heat to shallow depths shortly prior to the 600 ybp eruptions of the Inyo Domes 6 to 12 km north of the mountain. Unrest beneath Mammoth Mountain since 1980 has included 1) swarms of brittle-failure earthquakes in the upper 10 km of the crust that define concentric elliptical ring-like patterns centered beneath the summit, 2) mid-crustal (depths 10 to 20 km) long-period volcanic earthquakes, 3) the onset of diffuse CO2 degassing in 1990 following an 11-month-long swarm of shallow (<10 km), brittle-failure earthquakes in 1989, 4) occasional very-long-period earthquakes at depths of ~3 km, and 5) brief swarms of lower-crustal, brittle-failure earthquakes at depths of 20 to 30 km, including sizable episodes on June 16-17, 2006, and September 29-30, 2009. Seismic waveform correlation analysis at multiple stations reveals that these lower-crustal, brittle-failure swarms consist of tens to hundreds of repeated similar events and also serves to identify many events not included in the Northern California Seismic Network (NCSN) catalog. In the case of the 2009 episode, an evolution in waveform is clearly discernible over the sequence, suggesting a corresponding evolution in source location or mechanism. Work is ongoing to take advantage of the waveform similarity to estimate precise hypocentral locations of these events in order to distinguish between these possibilities. We suggest that the brittle-failure earthquakes at depths of 20 to 30 km are occurring within the more mafic mid- to lower crust, which can remain in the brittle domain to temperatures as high as ~700 °C. Above these deep events are two distinct shallower zones of seismicity. The mid-crustal long-period earthquakes between 10 and 20 km are presumably occurring within the silicic crust, but below the rheological transition from brittle to plastic behavior, expected to occur at temperatures of ~350 to 400 °C. Above this transition are shallow brittle-failure earthquakes, in the upper 8 kilometers of the silicic crust. These lower crustal brittle-failure earthquakes are similar in depth and tectonic setting to those that occurred beneath the Sierra Nevada crest in the vicinity of Lake Tahoe in late 2003, which Smith et al. (Science, 2004) concluded were associated with a magmatic intrusion in the lower crust. The Mammoth sequences, however, are much shorter in duration (1-2 days compared with several months) and have no detectable accompanying geodetic signal. Thus, there is no clear evidence for a significant intrusion associated with these deep swarms of brittle-failure earthquakes beneath Mammoth Mountain.
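
    The waveform-correlation step mentioned above can be illustrated with a simple normalized cross-correlation scan, in which a template event is slid along a continuous record and high-correlation windows are flagged as candidate repeating events. The synthetic traces and the 0.6 detection threshold below are assumptions for illustration; they are not the NCSN data or the authors' processing parameters.

```python
# Minimal sketch of template matching by normalized cross-correlation.
# Synthetic data only; the template, noise level, and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)
template = rng.standard_normal(200)                  # stand-in for a template waveform
record = rng.standard_normal(20_000) * 0.5           # stand-in for a continuous record
for onset in (3_000, 9_500, 15_200):                 # bury three "repeats" in the noise
    record[onset:onset + 200] += template

def normalized_xcorr(data, tmpl):
    """Correlation coefficient of tmpl against every sliding window of data."""
    n = len(tmpl)
    tmpl = (tmpl - tmpl.mean()) / tmpl.std()
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        win = data[i:i + n]
        out[i] = np.dot(tmpl, (win - win.mean()) / win.std()) / n
    return out

cc = normalized_xcorr(record, template)
detections = np.flatnonzero(cc > 0.6)                # assumed detection threshold
print("Candidate repeating-event onsets (samples):", detections)
```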

  12. Active Crustal Deformation in the Area of San Carlos, Baja California Sur, Mexico as Shown by Data of Local Earthquake Sequences

    NASA Astrophysics Data System (ADS)

    Munguía, Luis; González-Escobar, Mario; Navarro, Miguel; Valdez, Tito; Mayer, Sergio; Aguirre, Alfredo; Wong, Victor; Luna, Manuel

    2015-12-01

    We analyzed earthquakes from sequences that occurred at different times near San Carlos, a town of approximately 5000 inhabitants. The seismic sequences occurred during March-April 1989, October 2000-June 2001, and 5-15 February 2004, about 200 km west of the Pacific-North America plate boundary. The strong shaking from initial earthquakes of the first two sequences prompted the installation of temporary seismic stations in the area. With data recorded by these stations, we found an earthquake distribution that is consistent with the northwest segment of the Santa Margarita fault. Both the focal depth, which seemed to increase in the E-NE direction, and a composite fault-plane solution obtained from polarity data of the small earthquakes were also consistent with the main characteristics of that fault. We also found that our normal-faulting mechanism (east side down) was quite similar to centroid moment tensor solutions for earthquakes with Mw 5.4 and 5.3 that occurred in the area in February 2004. It is likely, then, that these larger earthquakes also occurred along the Santa Margarita Fault. To get some insight into the regional stress pattern, we compared the above mechanisms with mechanisms reported for other earthquakes of the Pacific margin of Baja California Sur and the Gulf of California regions. We observed that focal mechanisms of the two regions have T axes of stress that plunge subhorizontally in an average E-NE direction. The corresponding P axes have an average N-NW trend, but for the Pacific earthquakes these axes plunge at angles that are ~35° larger than those for the Gulf earthquakes. These more vertically inclined P axes of compressive stress imply substantial oblique fault motions. The mixture of oblique and strike-slip components of fault motions, as the focal mechanisms show, confirms a transtensional stress regime for the region. Before this research, we knew little about the seismicity and styles of faulting in the area. Now we know that earthquakes can occur along the coastline of Baja California, about 60 km east of the Tosco-Abreojos fault system. We conclude that transtensional deformation is taking place across a wide zone of the Pacific margin of Baja California. Finally, we point out that although the studied earthquakes were of small magnitude, they might serve as a reminder of the danger that future larger events pose to San Carlos.

  13. Birth of a fault: Connecting the Kern County and Walker Pass, California, earthquakes

    USGS Publications Warehouse

    Bawden, G.W.; Michael, A.J.; Kellogg, L.H.

    1999-01-01

    A band of seismicity transects the southern Sierra Nevada range between the northeastern end of the site of the 1952 MW (moment magnitude) 7.3 Kern County earthquake and the site of the 1946 MW 6.1 Walker Pass earthquake. Relocated earthquakes in this band, which lacks a surface expression, better delineate the northeast-trending seismic lineament and resolve complex structure near the Walker Pass mainshock. Left-lateral earthquake focal planes are rotated counterclockwise from the strike of the seismic lineament, consistent with slip on shear fractures such as those observed in the early stages of fault development in laboratory experiments. We interpret this seismic lineament as a previously unrecognized, incipient, currently blind, strike-slip fault, a unique example of a newly forming structure.

  14. Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Building Structures

    USGS Publications Warehouse

    Celebi, Mehmet

    1998-01-01

    Several approaches are used to assess the performance of the built environment following an earthquake -- preliminary damage surveys conducted by professionals, detailed studies of individual structures, and statistical analyses of groups of structures. Reports of damage that are issued by many organizations immediately following an earthquake play a key role in directing subsequent detailed investigations. Detailed studies of individual structures and statistical analyses of groups of structures may be motivated by particularly good or bad performance during an earthquake. Beyond this, practicing engineers typically perform stress analyses to assess the response of a particular structure to the vibrational levels experienced during an earthquake. The levels may be determined from recorded or estimated ground motions; actual levels usually differ from design levels. If a structure has seismic instrumentation to record response data, the estimated and recorded response and behavior of the structure can be compared.

  15. Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake

    USGS Publications Warehouse

    Hill, D.P.; Reasenberg, P.A.; Michael, A.; Arabaz, W.J.; Beroza, G.; Brumbaugh, D.; Brune, J.N.; Castro, R.; Davis, S.; Depolo, D.; Ellsworth, W.L.; Gomberg, J.; Harmsen, S.; House, L.; Jackson, S.M.; Johnston, M.J.S.; Jones, L.; Keller, Rebecca Hylton; Malone, S.; Munguia, L.; Nava, S.; Pechmann, J.C.; Sanford, A.; Simpson, R.W.; Smith, R.B.; Stark, M.; Stickney, M.; Vidal, A.; Walter, S.; Wong, V.; Zollweg, J.

    1993-01-01

    The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma).

  16. Borehole strainmeter measurements spanning the 2014 Mw6.0 South Napa Earthquake, California: The effect from instrument calibration

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2015-10-01

    The 24 August 2014 Mw6.0 South Napa, California earthquake produced significant offsets on 12 borehole strainmeters in the San Francisco Bay area. These strainmeters are located between 24 and 80 km from the source, and the observed offsets ranged up to 400 parts per billion (ppb), which exceeds their nominal precision by a factor of 100. However, the observed offsets of tidally calibrated strains differ by up to 130 ppb from predictions based on a moment tensor derived from seismic data. The large misfit can be attributed to a combination of poor instrument calibration and better modeling of the strain field from the earthquake. Borehole strainmeters require in situ calibration, which historically has been accomplished by comparing their measurements of Earth tides with the strain tides predicted by a model. Although the borehole strainmeter accurately measures the deformation within the borehole, the long-wavelength strain signals from tides or other tectonic processes recorded in the borehole are modified by the presence of the borehole and the elastic properties of the grout and the instrument. Previous analyses of surface-mounted, strainmeter data and their relationship with the predicted tides suggest that tidal models could be in error by 30%. The poor fit of the borehole strainmeter data from this earthquake can be improved by simultaneously varying the components of the model tides up to 30% and making small adjustments to the point source model of the earthquake, which reduces the RMS misfit from 130 ppb to 18 ppb. This suggests that relying on tidal models to calibrate borehole strainmeters significantly reduces their accuracy.
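
    The joint adjustment described above (rescaling tidal calibrations within bounds together with small source-model changes) can be sketched as a bounded least-squares problem. In the sketch below only per-instrument calibration factors are adjusted, constrained to within 30% of unity; the observed and predicted offsets are hypothetical placeholders, and the source-model terms of the actual analysis are omitted.

```python
# Minimal sketch of bounded recalibration of strainmeter offsets against a
# source-model prediction. The numbers are hypothetical placeholders, not the
# South Napa data; only calibration gains are solved for here.
import numpy as np
from scipy.optimize import least_squares

predicted = np.array([310.0, -120.0, 85.0, 240.0, -60.0])   # ppb, assumed model predictions
observed = np.array([400.0, -95.0, 110.0, 205.0, -78.0])    # ppb, hypothetical measurements

def residuals(gains):
    """Misfit between observed offsets and gain-scaled predictions."""
    return observed - gains * predicted

fit = least_squares(residuals, x0=np.ones_like(predicted), bounds=(0.7, 1.3))
rms_before = np.sqrt(np.mean(residuals(np.ones_like(predicted)) ** 2))
rms_after = np.sqrt(np.mean(fit.fun ** 2))
print(f"RMS misfit: {rms_before:.0f} ppb -> {rms_after:.0f} ppb, gains {fit.x.round(2)}")
```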

  17. Borehole strainmeter measurements spanning the 2014, Mw6.0 South Napa Earthquake, California: The effect from instrument calibration

    USGS Publications Warehouse

    Langbein, John O.

    2015-01-01

    The 24 August 2014 Mw6.0 South Napa, California earthquake produced significant offsets on 12 borehole strainmeters in the San Francisco Bay area. These strainmeters are located between 24 and 80 km from the source, and the observed offsets ranged up to 400 parts per billion (ppb), which exceeds their nominal precision by a factor of 100. However, the observed offsets of tidally calibrated strains differ by up to 130 ppb from predictions based on a moment tensor derived from seismic data. The large misfit can be attributed to a combination of poor instrument calibration and better modeling of the strain field from the earthquake. Borehole strainmeters require in-situ calibration, which historically has been accomplished by comparing their measurements of Earth tides with the strain tides predicted by a model. Although the borehole strainmeter accurately measures the deformation within the borehole, the long-wavelength strain signals from tides or other tectonic processes recorded in the borehole are modified by the presence of the borehole and the elastic properties of the grout and the instrument. Previous analyses of surface-mounted strainmeter data and their relationship with the predicted tides suggest that tidal models could be in error by 30%. The poor fit of the borehole strainmeter data from this earthquake can be improved by simultaneously varying the components of the model tides up to 30% and making small adjustments to the point-source model of the earthquake, which reduces the RMS misfit from 130 ppb to 18 ppb. This suggests that relying on tidal models to calibrate borehole strainmeters significantly reduces their accuracy.

  18. Chapter A. The Loma Prieta, California, Earthquake of October 17, 1989 - Lifelines

    USGS Publications Warehouse

    Schiff, Anshel J., (Edited By)

    1998-01-01

    To the general public who had their televisions tuned to watch the World Series, the 1989 Loma Prieta earthquake was a lifelines earthquake. It was the images seen around the world of the collapsed Cypress Street viaduct, with the frantic and heroic efforts to pull survivors from the structure that was billowing smoke; the collapsed section of the San Francisco-Oakland Bay Bridge and subsequent home video of a car plunging off the open span; and the spectacular fire in the Marina District of San Francisco fed by a broken gas line. To many of the residents of the San Francisco Bay region, the relation of lifelines to the earthquake was characterized by sitting in the dark because of the power outage, the inability to make telephone calls because of network congestion, and the slow and snarled traffic. Had the public been aware of the actions of the engineers and tradespeople working for the utilities and other lifeline organizations on the emergency response and restoration of lifelines, the lifeline characteristics of this earthquake would have been even more significant. Unobserved by the public were the warlike devastation in several electrical-power substations, the 13 miles of gas-distribution lines that had to be replaced in several communities, and the more than 1,200 leaks and breaks in water mains and service connections that had to be excavated and repaired. Like the 1971 San Fernando, Calif., earthquake, which was a seminal event for activity to improve the earthquake performance of lifelines, the 1989 Loma Prieta earthquake demonstrated that the tasks of preparing lifelines in 'earthquake country' were incomplete; indeed, new lessons had to be learned.

  19. Chapter D. The Loma Prieta, California, Earthquake of October 17, 1989 - Aftershocks and Postseismic Effects

    USGS Publications Warehouse

    Reasenberg, Paul A., (Edited By)

    1997-01-01

    While the damaging effects of the earthquake represent a significant social setback and economic loss, the geophysical effects have produced a wealth of data that have provided important insights into the structure and mechanics of the San Andreas Fault system. Generally, the period after a large earthquake is vitally important to monitor. During this part of the seismic cycle, the primary fault and the surrounding faults, rock bodies, and crustal fluids rapidly readjust in response to the earthquake's sudden movement. Geophysical measurements made at this time can provide unique information about fundamental properties of the fault zone, including its state of stress and the geometry and frictional/rheological properties of the faults within it. Because postseismic readjustments are rapid compared with corresponding changes occurring in the preseismic period, the amount and rate of information that is available during the postseismic period is relatively high. From a geophysical viewpoint, the occurrence of the Loma Prieta earthquake in a section of the San Andreas fault zone that is surrounded by multiple and extensive geophysical monitoring networks has produced nothing less than a scientific bonanza. The reports assembled in this chapter collectively examine available geophysical observations made before and after the earthquake and model the earthquake's principal postseismic effects. The chapter covers four broad categories of postseismic effect: (1) aftershocks; (2) postseismic fault movements; (3) postseismic surface deformation; and (4) changes in electrical conductivity and crustal fluids.

  20. Earthquake location data for the southern Great Basin of Nevada and California: 1984 through 1986

    SciTech Connect

    Harmsen, S.C.; Rogers, A.M.

    1987-01-01

    This report presents data in map and table form for earthquake parameters such as hypocentral coordinates and magnitudes for earthquakes located by the southern Great Basin Seismic network for the time period January 1, 1984, through December 31, 1986. These maps show concentrations of earthquakes in regions previously noted to be seismically active, including the Pahranagat Shear Zone, Pahroc Mountains, southern Nevada Test Site, Timber Mountain, Black Mountain, Gold Mountain, Montezuma Range, and Grapevine Mountains. A concentration of earthquake activity in the Reveille Range was observed in 1986, in a previously inactive area. The northern Nevada Test Site had fewer earthquakes than a comparable area of the southern Nevada Test Site, indicating that the low-yield nuclear testing program is not currently triggering significant numbers of aftershocks. Eight microearthquakes occurred at Yucca Mountain during the 1984-1986 monitoring period. Depths of focus for well-located earthquakes continue to indicate a bimodal distribution, with peaks at 1 to 2 and 8 to 9 km below sea level and a local minimum at 4 to 5 km. Focal mechanisms range from strike slip to normal slip. No dependence of slip mode on depth or magnitude is evident. 8 refs., 46 figs., 5 tabs.

  1. Local Public Health System Response to the Tsunami Threat in Coastal California following the Tōhoku Earthquake

    PubMed Central

    Hunter, Jennifer C.; Crawley, Adam W.; Petrie, Michael; Yang, Jane E.; Aragón, Tomás J.

    2012-01-01

    Background On Friday March 11, 2011 a 9.0 magnitude earthquake triggered a tsunami off the eastern coast of Japan, resulting in thousands of lives lost and billions of dollars in damage around the Pacific Rim. The tsunami first reached the California coast on Friday, March 11th, causing more than $70 million in damage and at least one death. While the tsunami’s impact on California pales in comparison to the destruction caused in Japan and other areas of the Pacific, the event tested emergency responders’ ability to rapidly communicate and coordinate a response to a potential threat. Methods To evaluate the local public health system emergency response to the tsunami threat in California, we surveyed all local public health, emergency medical services (EMS), and emergency management agencies in coastal or floodplain counties about several domains related to the tsunami threat in California, including: (1) the extent to which their community was affected by the tsunami, (2) when and how they received notification of the event, (3) which public health response activities were carried out to address the tsunami threat in their community, and (4) which organizations contributed to the response. Public health activities were characterized using the Centers for Disease Control and Prevention (CDC) Public Health Preparedness Capabilities (PHEP) framework. Findings The tsunami's impact on coastal communities in California ranged widely, both in terms of the economic consequences and the response activities. Based on estimates from the National Oceanic and Atmospheric Administration (NOAA), ten jurisdictions in California reported tsunami-related damage, which ranged from $15,000 to $35 million. Respondents first became aware of the tsunami threat in California between the hours of 10:00pm Pacific Standard Time (PST) on Thursday March 10th and 2:00pm PST on Friday March 11th, a range of 16 hours, with notification occurring through both formal and informal channels. In response to this threat, the activities most commonly reported by the local government agencies included in this study were: emergency public information and warning, emergency operations coordination, and inter-organizational information sharing, which were reported by 86%, 75%, and 65% of all respondents, respectively. When looking at the distribution of responsibility, emergency management agencies were the most likely to report assuming a lead role in these common activities as well as those related to evacuation and community recovery. While activated less frequently, public health agencies carried out emergency response functions related to surveillance and epidemiology, environmental health, and mental health/psychological support. Both local public health and EMS agencies took part in mass care and medical material management activities. A large network of organizations contributed to response activities, with emergency management, law enforcement, fire, public health, public works, EMS, and media cited by more than half of respondents. Conclusions In response to the tsunami threat in California, we found that emergency management agencies assumed a lead role in the local response efforts. While public health and medical agencies played a supporting role in the response, they uniquely contributed to a number of specific activities. If the response to the recent tsunami is any indication, these support activities can be anticipated in planning for future events with similar characteristics to the tsunami threat. 
Additionally, we found that many respondents first learned of the tsunami through the media, rather than through rapid notification systems, which suggests that government agencies must continue to develop and maintain the ability to rapidly aggregate and analyze information in order to provide accurate assessments and guidance to a potentially well-informed public. Citation: Hunter JC, Crawley AW, Petrie M, Yang JE, Aragón TJ. Local Public Health System Response to the Tsunami Threat in Coastal California following the Tōhoku Earthquake. PLoS Currents Disasters. 2012 Jul 16 PMID:22953236

  2. School Site Preparedness for the Safety of California's Children K-12. Official Report of the Northridge Earthquake Task Force on Education.

    ERIC Educational Resources Information Center

    California State Legislature, Sacramento. Senate Select Committee on the Northridge Earthquake.

    This report asserts that disaster preparedness at all school sites must become a major and immediate priority. Should a disaster equaling the magnitude of the Northridge earthquake occur, the current varying levels of site preparedness may not adequately protect California's children. The report describes why the state's children are not safe and…

  3. The Redwood Coast Tsunami Work Group: a unique organization promoting earthquake and tsunami resilience on California's North Coast

    NASA Astrophysics Data System (ADS)

    Dengler, L.; Henderson, C.; Larkin, D.; Nicolini, T.; Ozaki, V.

    2012-12-01

    The Northern California counties of Del Norte, Humboldt, and Mendocino account for over 30% of California's coastline and form one of the most seismically active areas of the contiguous 48 states. The region is at risk from earthquakes located on- and offshore and from tsunamis generated locally from faults associated with the Cascadia subduction zone (CSZ) and from distant sources elsewhere in the Pacific. In 1995 the California Geological Survey (CGS) published a scenario for a CSZ earthquake that included both strong ground shaking effects and a tsunami. As a result of the scenario, the Redwood Coast Tsunami Work Group (RCTWG), an organization of government agencies, tribes, service groups, academia, and the private sector, was formed to coordinate and promote earthquake and tsunami hazard awareness and mitigation in the three-county region. The RCTWG and its member agencies' projects include education/outreach products and programs, tsunami hazard mapping, and signage and siren planning. Since 2008, RCTWG has worked with the California Emergency Management Agency (Cal EMA) in conducting tsunami warning communications tests on the North Coast. In 2007, RCTWG members helped develop and carry out the first tsunami training exercise at FEMA's Emergency Management Institute in Emmitsburg, MD. The RCTWG has facilitated numerous multi-agency, multi-discipline coordinated exercises, and RCTWG county tsunami response plans have been a model for other regions of the state and country. Eight North Coast communities have been recognized as TsunamiReady by the National Weather Service, including the first National Park, the first State Park, and the only tribe in California to be so recognized. Over 500 tsunami hazard zone signs have been posted in the RCTWG region since 2008. Eight assessment surveys from 1993 to 2010 have tracked preparedness actions and personal awareness of earthquake and tsunami hazards in the county, and additional surveys have tracked public awareness and tourist concerns about tsunami hazard signs. Over the seventeen-year period covered by the surveys, the percentage of respondents with houses secured to foundations increased from 58 to 84 percent, the percentage aware of a local tsunami hazard increased from 51 to 89 percent, and the percentage knowing what the Cascadia subduction zone is increased from 16 to 57 percent. In 2009, the RCTWG was recognized by the Western States Seismic Policy Council (WSSPC) with an award for innovation, and in 2010 the RCTWG-sponsored class "Living on Shaky Ground" was awarded WSSPC's overall Award in Excellence. The RCTWG works closely with CGS and Cal EMA on a number of projects including tsunami mapping, evacuation zone planning, siren policy, tsunami safety for boaters, and public education messaging. Current projects include working with CGS to develop a "playbook" tsunami mapping product to illustrate the expected effects from a range of tsunami source events and to assist local governments in focusing future response actions to reflect the range of expected impacts from distant-source events. Preparedness efforts paid off on March 11, 2011, when a tsunami warning was issued for the region and significant damage occurred in harbor regions of Del Norte County and Mendocino County. Full-scale evacuations were carried out in a coordinated manner, and the majority of the commercial fishing fleet in Crescent City was able to exit the harbor before the tsunami arrived.

  4. Chapter F. The Loma Prieta, California, Earthquake of October 17, 1989 - Marina District

    USGS Publications Warehouse

    O'Rourke, Thomas D., (Edited By)

    1992-01-01

    During the earthquake, a total land area of about 4,300 km2 was shaken with seismic intensities that can cause significant damage to structures. The area of the Marina District of San Francisco is only 4.0 km2--less than 0.1 percent of the area most strongly affected by the earthquake--but its significance with respect to engineering, seismology, and planning far outstrips its proportion of shaken terrain and makes it a centerpiece for lessons learned from the earthquake. The Marina District provides perhaps the most comprehensive case history of seismic effects at a specific site developed for any earthquake. The reports assembled in this chapter, which provide an account of these seismic effects, constitute a unique collection of studies on site, as well as infrastructure and societal, response that cover virtually all aspects of the earthquake, ranging from incoming ground waves to the outgoing airwaves used for emergency communication. The Marina District encompasses the area bounded by San Francisco Bay on the north, the Presidio on the west, and Lombard Street and Van Ness Avenue on the south and east, respectively. Nearly all of the earthquake damage in the Marina District, however, occurred within a considerably smaller area of about 0.75 km2, bounded by San Francisco Bay and Baker, Chestnut, and Buchanan Streets. At least five major aspects of earthquake response in the Marina District are covered by the reports in this chapter: (1) dynamic site response, (2) soil liquefaction, (3) lifeline performance, (4) building performance, and (5) emergency services.

  5. The dependence of peak horizontal acceleration on magnitude, distance, and site effects for small-magnitude earthquakes in California and eastern North America

    USGS Publications Warehouse

    Campbell, K.W.

    1989-01-01

    One hundred ninety free-field accelerograms recorded on deep soil (>10 m deep) were used to study the near-source scaling characteristics of peak horizontal acceleration for 91 earthquakes (2.5 ≤ ML ≤ 5.0) located primarily in California. An analysis of residuals based on an additional 171 near-source accelerograms from 75 earthquakes indicated that accelerograms recorded in building basements sited on deep soil have 30 per cent lower accelerations, and that free-field accelerograms recorded on shallow soil (≤10 m deep) have 82 per cent higher accelerations than free-field accelerograms recorded on deep soil. An analysis of residuals based on 27 selected strong-motion recordings from 19 earthquakes in Eastern North America indicated that near-source accelerations associated with frequencies less than about 25 Hz are consistent with predictions based on attenuation relationships derived from California. -from Author
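
    The residual-based adjustments quoted above (about 30 per cent lower for deep-soil basements, about 82 per cent higher for shallow-soil free-field sites) can be applied as simple multiplicative factors on a deep-soil free-field estimate, as in the sketch below. The baseline PGA value is an assumed placeholder; the study's attenuation relationship itself is not reproduced here.

```python
# Minimal sketch of applying the recording-condition factors quoted in the record.
# The baseline value is an assumed placeholder.
FACTORS = {
    "deep_soil_free_field": 1.00,    # reference condition
    "deep_soil_basement": 0.70,      # about 30 per cent lower
    "shallow_soil_free_field": 1.82, # about 82 per cent higher
}

def adjusted_pga(pga_deep_free_field_g, site_condition):
    """Scale a deep-soil free-field PGA estimate to another recording condition."""
    return pga_deep_free_field_g * FACTORS[site_condition]

baseline = 0.10  # g, assumed deep-soil free-field estimate for a small event
for condition in FACTORS:
    print(f"{condition:>24s}: {adjusted_pga(baseline, condition):.3f} g")
```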

  6. Statistical analysis of the induced Basel 2006 earthquake sequence: introducing a probability-based monitoring approach for Enhanced Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Bachmann, C. E.; Wiemer, S.; Woessner, J.; Hainzl, S.

    2011-08-01

    Geothermal energy is becoming an important clean energy source; however, the stimulation of a reservoir for an Enhanced Geothermal System (EGS) is associated with seismic risk due to induced seismicity. Seismicity occurring due to the water injection at depth has to be well recorded and monitored. To mitigate the seismic risk of a damaging event, an appropriate alarm system needs to be in place for each individual experiment. In recent experiments, the so-called traffic-light alarm system, based on public response, local magnitude and peak ground velocity, was used. We aim to improve the pre-defined alarm system by introducing a probability-based approach; we retrospectively model the ongoing seismicity in real time with multiple statistical forecast models and then translate the forecast to seismic hazard in terms of probabilities of exceeding a ground motion intensity level. One class of models accounts for the water injection rate, the main parameter that can be controlled by the operators during an experiment. By translating the models into time-varying probabilities of exceeding various intensity levels, we provide tools which are well understood by the decision makers and can be used to determine non-exceedance thresholds during a reservoir stimulation; this, however, remains an entrepreneurial or political decision of the responsible project coordinators. We introduce forecast models based on the data set of an EGS experiment in the city of Basel. Between 2006 December 2 and 8, approximately 11 500 m3 of water was injected into a 5-km-deep well at high pressures. A six-sensor borehole array was installed by the company Geothermal Explorers Limited (GEL) at depths between 300 and 2700 m around the well to monitor the induced seismicity. The network recorded approximately 11 200 events during the injection phase, more than 3500 of which were located. With the traffic-light system, actions were implemented after an ML 2.7 event, the water injection was reduced and then stopped after another ML 2.5 event. A few hours later, an earthquake with ML 3.4, felt within the city, occurred, which led to bleed-off of the well. A risk study was later issued with the outcome that the experiment could not be resumed. We analyse the statistical features of the sequence and show that the sequence is well modelled with the Omori-Utsu law following the termination of water injection. Based on this model, the sequence will take 31+29/-14 years to reach the background level. We introduce statistical models based on Reasenberg and Jones and Epidemic Type Aftershock Sequence (ETAS) models, commonly used to model aftershock sequences. We compare and test different model setups to simulate the sequences, varying the number of fixed and free parameters. For one class of the ETAS models, we account for the flow rate at the injection borehole. We test the models against the observed data with standard likelihood tests and find the ETAS model accounting for the flow rate to perform best. Such a model may in future serve as a valuable tool for designing probabilistic alarm systems for EGS experiments.
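
    One ingredient of the probability-based approach described above is a Reasenberg-and-Jones-style forecast, in which a modified-Omori rate for events above a target magnitude is integrated over a forecast window and converted to a probability of at least one event. The sketch below uses generic parameter values that are assumptions for illustration, not the Basel fits, and it omits the ETAS and flow-rate terms of the full models; the ML 3.4 event from the record is used only as the nominal mainshock.

```python
# Minimal sketch of a Reasenberg-and-Jones-style aftershock probability.
# The parameters a, b, p, c and the target magnitude are illustrative assumptions.
import math

a, b, p, c = -1.67, 0.9, 1.1, 0.05   # generic aftershock parameters (assumed)
m_main, m_target = 3.4, 2.0          # ML 3.4 event from the record; target magnitude assumed

def rate(t):
    """Modified-Omori rate of events with M >= m_target, t days after the mainshock."""
    return 10.0 ** (a + b * (m_main - m_target)) * (t + c) ** (-p)

def expected_count(t1, t2):
    """Closed-form integral of the rate over [t1, t2] days (valid for p != 1)."""
    k = 10.0 ** (a + b * (m_main - m_target))
    return k * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)

n = expected_count(1.0, 8.0)              # expected M >= m_target events in days 1-8
p_one_or_more = 1.0 - math.exp(-n)        # Poisson probability of at least one such event
print(f"Expected count: {n:.2f}; P(at least one M>={m_target}) = {p_one_or_more:.2f}")
```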

  7. Retardations in fault creep rates before local moderate earthquakes along the San Andreas fault system, central California

    USGS Publications Warehouse

    Burford, R.O.

    1988-01-01

    Records of shallow aseismic slip (fault creep) obtained along parts of the San Andreas and Calaveras faults in central California demonstrate that significant changes in creep rates often have been associated with local moderate earthquakes. An immediate postearthquake increase followed by gradual, long-term decay back to a previous background rate is generally the most obvious earthquake effect on fault creep. This phenomenon, identified as aseismic afterslip, usually is characterized by above-average creep rates for several months to a few years. In several cases, minor step-like movements, called coseismic slip events, have occurred at or near the times of mainshocks. One extreme case of coseismic slip, recorded at Cienega Winery on the San Andreas fault 17.5 km southeast of San Juan Bautista, consisted of 11 mm of sudden displacement coincident with earthquakes of ML=5.3 and ML=5.2 that occurred 2.5 minutes apart on 9 April 1961. At least one of these shocks originated on the main fault beneath the winery. Creep activity subsequently stopped at the winery for 19 months, then gradually returned to a nearly steady rate slightly below the previous long-term average. The phenomena mentioned above can be explained in terms of simple models consisting of relatively weak material along shallow reaches of the fault responding to changes in load imposed by sudden slip within the underlying seismogenic zone. In addition to coseismic slip and afterslip phenomena, however, pre-earthquake retardations in creep rates also have been observed. Onsets of significant, persistent decreases in creep rates have occurred at several sites 12 months or more before the times of moderate earthquakes. A 44-month retardation before the 1979 ML=5.9 Coyote Lake earthquake on the Calaveras fault was recorded at the Shore Road creepmeter site 10 km northwest of Hollister. Creep retardation on the San Andreas fault near San Juan Bautista has been evident in records from one creepmeter site for the past 5 years. Retardations with durations of 21 and 19 months also occurred at Shore Road before the 1974 and 1984 earthquakes of ML=5.2 and ML=6.2, respectively. Although creep retardation remains poorly understood, several possible explanations have been discussed previously. (1) Certain onsets of apparent creep retardation may be explained as abrupt terminations of afterslip generated from previous moderate-mainshock sequences. (2) Retardations may be related to significant decreases in the rate of seismic and/or aseismic slip occurring within or beneath the underlying seismogenic zone. Such decreases may be caused by changes in local conditions related to growth of asperities, strain hardening, or dilatancy, or perhaps by passage of stress-waves or other fluctuations in driving stresses. (3) Finally, creep rates may be lowered (or increased) by stresses imposed on the fault by seismic or aseismic slip on neighboring faults. In addition to causing creep-rate increases or retardations, such fault interactions occasionally may trigger earthquakes. Regardless of the actual mechanisms involved and the current lack of understanding of creep retardation, it appears that shallow fault creep is sensitive to local and regional effects that promote or accompany intermediate-term preparation stages leading to moderate earthquakes. 
A strategy for more complete monitoring of fault creep, wherever it is known to occur, therefore should be assigned a higher priority in our continuing efforts to test various hypotheses concerning the mechanical relations between seismic and aseismic slip. © 1988 Birkhäuser Verlag.

  8. Probability Assessment of Mega-thrust Earthquakes in Global Subduction Zones -from the View of Slip Deficit-

    NASA Astrophysics Data System (ADS)

    Ikuta, R.; Mitsui, Y.; Ando, M.

    2014-12-01

    We studied the inter-plate slip history of roughly the past 100 years using earthquake catalogs. On the assumption that each earthquake has a stick-slip patch centered on its centroid, we regard the cumulative seismic slip around the centroid as representing the inter-plate dislocation. We evaluated the slip on the stick-slip patches of over-M5-class earthquakes prior to three recent mega-thrust earthquakes: the 2004 Sumatra (Mw9.2), the 2010 Chile (Mw8.8), and the 2011 Tohoku (Mw9.0) events. Comparing the cumulative seismic slip with the plate convergence, the slip before the mega-thrust events is significantly short over a large area corresponding to the size of the mega-thrust events. We also examined cumulative seismic slip after the three other mega-thrust earthquakes of the past 100 years, the 1952 Kamchatka (Mw9.0), the 1960 Chile (Mw9.5), and the 1964 Alaska (Mw9.2) events. The cumulative slip has remained significantly short in and around the focal areas after their occurrence. This result should reflect the persistence of the strongly and/or extensively coupled inter-plate areas capable of mega-thrust earthquakes. We applied the same procedure to global subduction zones and found that 21 regions, including the focal areas of the above mega-thrust earthquakes, show a slip deficit over an area corresponding to the size of M9-class earthquakes. Considering that at least six M9-class earthquakes occurred in the past 100 years and that each recurrence interval should be 500-1000 years, it would not be surprising if five to ten times the number of already known regions (30 to 60 regions) were capable of M9-class earthquakes. The 21 regions identified as candidate M9-class focal areas in our study number fewer than 5 to 10 times the known 6; some of these regions may be divided into a few M9-class focal areas because they extend over a much larger area than a typical M9-class focal area.
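
    The slip-deficit bookkeeping outlined above can be sketched as follows: each catalog event contributes an average slip M0/(muA) on a patch around its centroid, the contributions are summed over the catalog period, and the total is compared with the expected plate convergence. The magnitudes, the area scaling, and the convergence rate in the sketch below are illustrative assumptions, not the values used by the authors.

```python
# Minimal sketch of comparing cumulative catalog slip with plate convergence.
# Magnitudes, area scaling, and convergence rate are illustrative assumptions.
import math

MU = 3.0e10                    # shear modulus, Pa (typical crustal value)
CONVERGENCE_RATE = 0.08        # m/yr, assumed plate convergence
CATALOG_YEARS = 100.0

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks and Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

def rupture_area_m2(mw):
    """Crude patch area from a generic magnitude-area scaling, for illustration only."""
    return 10.0 ** (mw - 4.0) * 1.0e6   # km^2 converted to m^2 (assumed scaling)

catalog_mw = [5.2, 5.8, 6.4, 5.5, 7.1, 6.0]   # hypothetical events around one patch
cumulative_slip = sum(moment_from_mw(m) / (MU * rupture_area_m2(m)) for m in catalog_mw)
expected_slip = CONVERGENCE_RATE * CATALOG_YEARS

print(f"Cumulative seismic slip: {cumulative_slip:.2f} m")
print(f"Plate convergence over {CATALOG_YEARS:.0f} yr: {expected_slip:.1f} m")
print(f"Slip deficit: {expected_slip - cumulative_slip:.2f} m")
```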

  9. Rate-and-State Southern California Earthquake Forecasts: Resolving Stress Singularities

    NASA Astrophysics Data System (ADS)

    Strader, A. E.; Jackson, D. D.

    2014-12-01

    In previous studies, we pseudo-prospectively evaluated time-dependent Coulomb stress earthquake forecasts, based on rate-and-state friction (Toda and Enescu, 2011 and Dieterich, 1996), against an ETAS null hypothesis (Zhuang et al., 2002). At the 95% confidence interval, we found that the stress-based forecast failed to outperform the ETAS forecast during the first eight weeks following the 10/16/1999 Hector Mine earthquake, in both earthquake number and spatial distribution. The rate-and-state forecast was most effective in forecasting far-field events (earthquakes occurring at least 50km away from modeled active faults). Near active faults, where most aftershocks occurred, stress singularities arising from modeled fault section boundaries obscured the Coulomb stress field. In addition to yielding physically unrealistic stress quantities, the stress singularities arising from the slip model often failed to indicate potential fault asperity locations inferred from aftershock distributions. Here, we test the effects of these stress singularities on the rate-and-state forecast's effectiveness, as well as mitigate stress uncertainties near active faults. We decrease the area significantly impacted by stress singularities by increasing the number of fault patches and introducing tapered slip at fault section boundaries, representing displacement as a high-resolution step function. Using recent seismicity distributions to relocate fault asperities, we also invert seismicity for a fault displacement model with higher resolution than the original slip distribution, where areas of positive static Coulomb stress change coincide with earthquake locations.
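
    The rate-and-state ingredient of forecasts like this one can be illustrated with a commonly used expression for the seismicity rate following a Coulomb stress step (following Dieterich's formulation). In the sketch below, the stress step, the A-sigma value, and the aftershock duration are illustrative assumptions, not the parameters of this study.

```python
# Minimal sketch of the rate-and-state seismicity-rate response to a Coulomb
# stress step. All parameter values are illustrative assumptions.
import math

def rate_ratio(t_days, delta_cfs_mpa, a_sigma_mpa=0.04, t_a_days=3650.0):
    """Seismicity rate relative to background, t_days after a stress step delta_cfs."""
    gamma = (math.exp(-delta_cfs_mpa / a_sigma_mpa) - 1.0) * math.exp(-t_days / t_a_days) + 1.0
    return 1.0 / gamma

# Example: a +0.1 MPa Coulomb stress increase, evaluated 1 week and 1 year later.
for t in (7.0, 365.0):
    print(f"t = {t:5.0f} d: R/r = {rate_ratio(t, 0.1):.1f}")
```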

  10. Geodetic measurement of deformation in the Loma Prieta, California earthquake with Very Long Baseline Interferometry (VLBI)

    SciTech Connect

    Clark, T.A.; Ma, C.; Sauber, J.M.; Ryan, J.W. ); Gordon, D.; Caprette, D.S. ); Shaffer, D.B.; Vandenberg, N.R. )

    1990-07-01

    Following the Loma Prieta earthquake, two mobile Very Long Baseline Interferometry (VLBI) systems operated by the NASA Crustal Dynamics Project and the NOAA National Geodetic Survey were deployed at three previously established VLBI sites in the earthquake area: Fort Ord (near Monterey), the Presidio (in San Francisco), and Point Reyes. From repeated VLBI occupations of these sites since 1983, the pre-earthquake rates of deformation have been determined with respect to a North American reference frame with 1σ formal standard errors of ~1 mm/yr. The VLBI measurements immediately following the earthquake showed that the Fort Ord site was displaced 49 ± 4 mm at an azimuth of 11 ± 4° and that the Presidio site was displaced 12 ± 5 mm at an azimuth of 148 ± 13°. No anomalous change was detected at Point Reyes, with a 1σ uncertainty of 4 mm. The estimated displacements at Fort Ord and the Presidio are consistent with the static displacements predicted on the basis of a coseismic slip model in which slip on the southern segment is shallower than slip on the more northern segment of the fault rupture. The authors also give the Cartesian positions at epoch 1990.0 of a set of VLBI fiducial stations and the three mobile sites in the vicinity of the earthquake.

  11. Earthquake History of the Northern Imperial Fault, Imperial Valley, California, since the last Lake Cahuilla Highstand, circa A.D. 1680

    NASA Astrophysics Data System (ADS)

    Meltzner, A. J.; Rockwell, T. K.; Verdugo, D. M.

    2003-12-01

    The Imperial fault (IF) is the only fault in southern California to have ruptured in two major earthquakes in the 20th century. In 1940, it ruptured end-to-end (both north and south of the international border) in an M 6.9 earthquake, and in 1979, the northern segment of the fault (north of the border) ruptured again in an M 6.4 event. Slip in 1940 was highest (5-6 m) along the central portion of the fault and lowest (<1 m) along the northern portion, with a high slip gradient between these two segments just north of the border. The 1979 earthquake involved surface rupture along only the northern 30 km of the fault, with dextral offsets being <1 m and being nearly identical to 1940 offsets along the northern 20 km of the rupture. The similarities and differences of the two events led Sieh (1996) to propose a "slip-patch model" for the Imperial fault, whereby the fault ruptures with frequent moderate earthquakes along its northern end, like in 1979, and with less frequent larger events like 1940 along its entire length. According to the model, the central patch, which experienced high slip in 1940 and did not rupture in 1979, would rupture with relatively infrequent events (roughly every 260 years) with typically 5-6 m of slip per event; meanwhile, the northern patch, which corresponds to the 1979 rupture, would rupture more frequently (roughly every 40 years) with up to 1 m of slip per event. This model is consistent with the slip distribution observed in 1940 and in 1979. Paleoseismic investigations along the central patch also support this model, as the penultimate event there occurred shortly after the last Lake Cahuilla (LC) highstand at around A.D. 1680 (Thomas and Rockwell, 1996). Prior to the present investigation, however, there were no data on events prior to 1940 on the northern patch, which could serve to either support or refute the slip-patch model. We have opened a trench across the IF south of Harris Road, adjacent to Mesquite Basin, where the fault has both dextral and normal slip components. On the downdropped side of the fault, a laminated clay unit (inferred to be the most recent LC clay, at ca. A.D. 1680; dating results are pending) is exposed in the trench and is overlain by nearly 2 m of younger deposits; the overlying material consists of bedded fine sands and silts (inferred to be overbank deposits from a nearby channel), which interfinger with massive silts and clays (inferred mostly to be colluvium). Unfortunately, the normal component of slip for all earthquakes in the trench was almost exclusively restricted to a single east-dipping plane or set of closely spaced planes, so that the amount of dip slip per event cannot be resolved; nonetheless, a series of fissures and flower structures adjacent to the main fault in the hangingwall block permit the distinction of individual events. There is good evidence for 4, and possibly 5 events since the last LC highstand, based on filled-in fissures and abrupt upward terminations of multiple fault strands and liquefaction cracks. The youngest of these events are inferred to be 1979 and 1940; the oldest, which produced liquefaction and ruptured to the top of the most recent lake deposits, probably occurred at or very soon after the highstand, based on the arguments that no lake deposits post-date the event, and that the ground was still saturated at the time of the earthquake. This event may have been the penultimate (ca. A.D. 1680) event seen on the central patch of the IF.
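
    As a quick consistency check on the slip-patch model described above, the per-event slip and recurrence interval quoted for each patch imply broadly similar long-term slip rates, as the short sketch below shows. Treating the 5-6 m central-patch slip as 5.5 m is a simplifying assumption.

```python
# Minimal sketch: long-term slip rates implied by the slip-patch model quoted above.
# The 5.5 m central-patch value is a simplifying assumption for the 5-6 m range.
patches = {
    "northern patch (1979-style events)": (1.0, 40.0),   # slip (m), recurrence (yr)
    "central patch (1940-style events)": (5.5, 260.0),
}
for name, (slip_m, interval_yr) in patches.items():
    rate_mm_yr = slip_m / interval_yr * 1000.0
    print(f"{name}: ~{rate_mm_yr:.0f} mm/yr long-term slip rate")
```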

  12. Triggered Fault Slip in Southern California Associated with the 2010 Sierra El Mayor-Cucapah, Baja California, Mexico, Earthquake

    NASA Astrophysics Data System (ADS)

    Rymer, M. J.; Treiman, J. A.; Kendrick, K. J.; Lienkaemper, J. J.; Wei, M.; Weldon, R. J.; Bilham, R. G.; Fielding, E. J.

    2010-12-01

    Surface fracturing (triggered slip) occurred in the central Salton Trough and to the southwest, in the Yuha Desert area—all in association with the 4 April 2010 (M7.2) El Mayor-Cucapah earthquake and its aftershocks. Triggered slip in the central Salton Trough occurred on the ‘frequent movers’: the southern San Andreas, Coyote Creek, Superstition Hills, and Imperial Faults, all of which have slipped in previous moderate to large, local and regional earthquakes in the past five decades. Other faults in the central Salton Trough that also slipped in 2010 include the Wienert Fault (southeastern section of the Superstition Hills Fault), the Kalin Fault (in the Brawley Seismic Zone), and the Brawley Fault Zone; triggered slip had not been reported on these faults in the past. Geologic measures of slip on faults in the central Salton Trough ranged from 1 to 18 mm, and everywhere was located where previous primary (tectonic) or triggered slip has occurred. Triggered slip in the Yuha Desert area occurred along at least two dozen faults, only some of which were known before the 4 April 2010 El Mayor-Cucapah earthquake. From east to northwest, slip occurred in seven general areas; 1) in the Northern Centinela Fault Zone (newly named), 2) along unnamed faults south of Pinto Wash, 3) along the Yuha Fault (newly named), 4) along both east and west branches of the Laguna Salada Fault, 5) along the Yuha Well Fault Zone (newly revised name), 6) along the Ocotillo Fault (newly named), and 7) along the southeastern-most section of the Elsinore Fault. Faults that slipped in the Yuha Desert area include northwest-trending right-lateral faults, northeast-trending left-lateral faults, and north-south faults, some of which had dominantly vertical slip. Triggered slip along the Ocotillo and Elsinore Faults occurred only in association with the 14 June 2010 (M5.7) aftershock, which also initiated slip along other faults near the town of Ocotillo. Triggered slip on faults in the Yuha Desert area was most commonly less than 20 mm, but two significant exceptions are slip of about 50-60 mm on the Yuha Fault and of about 80 mm on the Ocotillo Fault. All triggered slips in the Yuha Desert area occurred along pre-existing faults, whether previously recognized or not.

  13. Further evidence of localized geomagnetic field changes before the 1974 Thanksgiving Day Earthquake, Hollister, California

    NASA Astrophysics Data System (ADS)

    Davis, Paul M.; Jackson, David D.; Johnston, Malcolm J. S.

    1980-07-01

    Seven weeks prior to the M=5.1 Hollister, Calif., Thanksgiving Day earthquake of 28 November, 1974, an anomalous magnetic variation was observed at one of the magnetometers of the USGS array. The anomaly lasted for about three weeks. Recently developed methods of reducing noise on magnetic records reveal that anomalous magnetic changes occurred at about the same time at three of the six stations analysed. Such changes have not been seen either previously or subsequently. The largest variation occurred at the two stations closest to the earthquake, but a change also occurred at a station 44 km to the south.

  14. Further evidence of localized geomagnetic field changes before the 1974 Thanksgiving Day earthquake, Hollister, California

    SciTech Connect

    Davis, P.M.; Jackson, D.D.; Johnston, M.J.S.

    1980-07-01

    Seven weeks prior to the M=5.1 Hollister, Calif., Thanksgiving Day earthquake of 28 November, 1974, an anomalous magnetic variation was observed at one of the magnetometers of the USGS array. The anomaly lasted for about three weeks. Recently developed methods of reducing noise on magnetic records reveal that anomalous magnetic changes occurred at about the same time at three of the six stations analysed. Such changes have not been seen either previously or subsequently. The largest variation occurred at the two stations closest to the earthquake, but a change also occurred at a station 44 km to the south.

  15. Chapter A. The Loma Prieta, California, Earthquake of October 17, 1989 - Strong Ground Motion

    USGS Publications Warehouse

    Borcherdt, Roger D.

    1994-01-01

    Strong ground motion generated by the Loma Prieta, Calif., earthquake (MS~7.1) of October 17, 1989, resulted in at least 63 deaths, more than 3,757 injuries, and damage estimated to exceed $5.9 billion. Strong ground motion severely damaged critical lifelines (freeway overpasses, bridges, and pipelines), caused severe damage to poorly constructed buildings, and induced a significant number of ground failures associated with liquefaction and landsliding. It also caused a significant proportion of the damage and loss of life at distances as far as 100 km from the epicenter. Consequently, understanding the characteristics of the strong ground motion associated with the earthquake is fundamental to understanding the earthquake's devastating impact on society. The papers assembled in this chapter address this problem. Damage to vulnerable structures from the earthquake varied substantially with the distance from the causative fault and the type of underlying geologic deposits. Most of the damage and loss of life occurred in areas underlain by 'soft soil'. Quantifying these effects is important for understanding the tragic concentrations of damage in such areas as Santa Cruz and the Marina and Embarcadero Districts of San Francisco, and the failures of the San Francisco-Oakland Bay Bridge and the Interstate Highway 880 overpass. Most importantly, understanding these effects is a necessary prerequisite for improving mitigation measures for larger earthquakes likely to occur much closer to densely urbanized areas in the San Francisco Bay region. The earthquake generated an especially important data set for understanding variations in the severity of strong ground motion. Instrumental strong-motion recordings were obtained at 131 sites located from about 6 to 175 km from the rupture zone. This set of recordings, the largest yet collected for an event of this size, was obtained from sites on various geologic deposits, including a unique set on 'soft soil' deposits (artificial fill and bay mud). These exceptional ground-motion data are used by the authors of the papers in this chapter to infer radiation characteristics of the earthquake source, identify dominant propagation characteristics of the Earth's crust, quantify amplification characteristics of near-surface geologic deposits, develop general amplification factors for site-dependent building-code provisions, and revise earthquake-hazard assessments for the San Francisco Bay region. Interpretations of additional data recorded in well-instrumented buildings, dams, and freeway overpasses are provided in other chapters of this report.

  16. Chapter E. The Loma Prieta, California, Earthquake of October 17, 1989 - Geologic Setting and Crustal Structure

    USGS Publications Warehouse

    Wells, Ray E.

    2004-01-01

    Although some scientists considered the Ms=7.1 Loma Prieta, Calif., earthquake of 1989 to be an anticipated event, some aspects of the earthquake were surprising. It occurred 17 km beneath the Santa Cruz Mountains along a left-stepping restraining bend in the San Andreas fault system. Rupture on the southwest-dipping fault plane consisted of subequal amounts of right-lateral and reverse motion but did not reach the surface. In the area of maximum uplift, severe shaking and numerous ground cracks occurred along Summit Road and Skyland Ridge, several kilometers south of the main trace of the San Andreas fault. The relatively deep focus of the earthquake, the distribution of ground failure, the absence of throughgoing surface rupture on the San Andreas fault, and the large component of uplift raised several questions about the relation of the 1989 Loma Prieta earthquake to the San Andreas fault: Did the earthquake actually occur on the San Andreas fault? Where exactly is the San Andreas fault in the heavily forested Santa Cruz Mountains, and how does the fault relate to ground ruptures that occurred there in 1989 and 1906? What is the geometry of the San Andreas fault system at depth, and how does it relate to the major crustal blocks identified by geologic mapping? Subsequent geophysical and geologic investigations of crustal structure in the Loma Prieta region have addressed these and other questions about the relation of the earthquake to geologic structures observed in the southern Santa Cruz Mountains. The diverse papers in this chapter cover several topics: geologic mapping of the region, potential-field and electromagnetic modeling of crustal structure, and the velocity structure of the crust and mantle in and below the source region for the earthquake. Although these papers were mostly completed between 1992 and 1997, they provide critical documentation of the crustal structure of the Loma Prieta region. Together, they present a remarkably coherent, three-dimensional picture of the earthquake source region--a geologically complex volume of crust with a long history of both right-lateral faulting and fault-normal compression, thrusting, and uplift.

  17. Response of Long Valley Caldera to the Mw = 7.3 Landers, California, Earthquake

    NASA Astrophysics Data System (ADS)

    Hill, David P.; Johnston, Malcolm J. S.; Langbein, John O.; Bilham, Roger

    1995-07-01

    Of the many sites in the western United States responding to the June 28, 1992, Landers earthquake (Mw = 7.3) with remotely triggered seismicity, only Long Valley caldera is monitored by both seismic and continuous deformation networks. A transient strain pulse and surge in seismicity recorded by these networks began within tens of seconds following arrival of the shear pulse from Landers. The cumulative strain and number of triggered earthquakes followed the same exponentially decaying growth rate (time constant 1.8 days) during the first 6 days following Landers. The strain transient, which was recorded on a borehole dilatometer at the west margin of the caldera and a long-base tiltmeter 20 km to the east, peaked on the sixth day at ≈0.25 ppm and gradually decayed over the next 15-20 days. The absence of a clear strain signal exceeding 0.4 ppm in data from the two-color geodimeter deformation lines, which span the central section of the caldera, indicates that the strain transient cannot be due solely to pressure changes in the concentrated pressure source 7 km beneath the central part of the caldera that accounts for most of the uplift of the resurgent dome since 1980. The triggered seismicity occupied the entire seismogenic volume beneath the caldera. The focal mechanisms, the frequency-magnitude distribution, and the spatial distribution of the triggered earthquakes are typical of other swarms in Long Valley caldera. The cumulative seismic moment of the triggered earthquakes through the first 2 weeks after the Landers earthquake corresponds to a single M = 3.8 earthquake, which is too small by nearly 2 orders of magnitude to account for the 0.25-ppm peak amplitude of the observed strain transients. Evidently, the strain transient represents the dominant response mode, which precludes direct triggering of local earthquakes by the large dynamic stresses from Landers as the dominant process. Conditionally viable models for the triggering process beneath the caldera include (1) the transient pressurization of magma bodies beneath the resurgent dome and Mammoth Mountain by the advective overpressure of rising bubbles, (2) a surge in fluid pressure within the seismogenic zone due to upward cascading failure of isolated compartments containing superhydrostatic pore fluids, (3) relaxation (fluidization) of a partially crystallized magma body or dike intrusion in the deep crustal roots of the Long Valley magmatic system, or (4) aseismic slip on midcrustal faults. Either the deep, relaxing magma body or a lower crustal dike intrusion satisfies all the strain observations with a single deformation source. The latter model admits the possibility that large, regional earthquakes can trigger the episodic recharge of the deep roots of crustal magmatic systems.
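
    The coupled growth described above, in which cumulative strain and triggered-event count share a 1.8-day time constant, amounts to fitting a saturating exponential N(t) = N_inf * (1 - exp(-t/tau)). The sketch below illustrates such a fit on synthetic counts; the functional form, rate values, and noise are assumptions for illustration, not the Long Valley data.

      # Minimal sketch: fit a saturating exponential, N(t) = N_inf * (1 - exp(-t / tau)),
      # to a cumulative count of triggered events. Synthetic placeholder data only.
      import numpy as np
      from scipy.optimize import curve_fit

      def sat_exp(t, n_inf, tau):
          return n_inf * (1.0 - np.exp(-t / tau))

      t_days = np.linspace(0.1, 6.0, 30)                 # first 6 days after the mainshock
      rng = np.random.default_rng(0)
      counts = rng.poisson(sat_exp(t_days, 300.0, 1.8))  # tau = 1.8 days assumed for the synthetic data

      (n_inf, tau), _ = curve_fit(sat_exp, t_days, counts, p0=(200.0, 1.0))
      print(f"fitted time constant: {tau:.2f} days (asymptotic count ~ {n_inf:.0f})")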

  18. Acceleration and volumetric strain generated by the Parkfield 2004 earthquake on the GEOS strong-motion array near Parkfield, California

    USGS Publications Warehouse

    Borcherdt, Rodger D.; Johnston, Malcolm J.S.; Dietel, Christopher; Glassmoyer, Gary; Myren, Doug; Stephens, Christopher

    2004-01-01

    An integrated array of 11 General Earthquake Observation System (GEOS) stations installed near Parkfield, CA provided on-scale, broadband, wide-dynamic-range measurements of acceleration and volumetric strain for the Parkfield earthquake (M 6.0) of September 28, 2004. Three-component measurements of acceleration were obtained at each of the stations. Measurements of collocated acceleration and volumetric strain were obtained at four of the stations. Measurements of velocity at most sites were on scale only for the initial P-wave arrival. When considered in the context of the extensive set of strong-motion recordings obtained on more than 40 analog stations by the California Strong-Motion Instrumentation Program (Shakal et al., 2004; http://www.quake.ca.gov/cisn-edc) and those on the dense array of Spudich et al. (1988), these recordings provide an unprecedented document of the nature of the near-source strong motion generated by an M 6.0 earthquake. The data set reported herein provides the most extensive set of near-field, broadband, wide-dynamic-range measurements of acceleration and volumetric strain for an earthquake as large as M 6 of which the authors are aware. As a result, considerable interest has been expressed in these data. This report is intended to describe the data and facilitate its use to resolve a number of scientific and engineering questions concerning earthquake rupture processes and resultant near-field motions and strains. This report provides a description of the array, its scientific objectives, and the strong-motion recordings obtained of the main shock. The report provides copies of the uncorrected and corrected data. Copies of the inferred velocities, displacements, and pseudovelocity response spectra are provided. Digital versions of these recordings are accessible with information available through the internet at several locations: the National Strong-Motion Program web site (http://agram.wr.usgs.gov/), the COSMOS Virtual Data Center web site (http://www.cosmos-eq.org), and the CISN Engineering and Berkeley data centers (http://www.quake.ca.gov/cisn-edc). They are also accessible together with recordings on the GEOS Strong-motion Array near Parkfield, CA, since its installation in 1987, through the USGS GEOS web site (http://nsmp.wr.usgs.gov/GEOS).

  19. Source inversion of the 1988 Upland, California, earthquake: determination of a fault plane for a small event

    USGS Publications Warehouse

    Mori, J.; Hartzell, S.

    1990-01-01

    We examined short-period P waves to investigate whether waveform data could be used to determine which of two nodal planes was the actual fault plane for a small (ML 4.6) earthquake near Upland, California. The southwest-trending fault plane consistently gave better fitting solutions than the southeast-trending plane. We determined a moment of 4.2 × 10^22 dyne-cm. The rupture velocity, and thus the source area, could not be well resolved, but if we assume a reasonable rupture velocity of 0.87 times the shear wave velocity, we obtain a source area of 0.97 km² and a stress drop of 38 bars. -from Authors
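
    For readers comparing the quoted moment with the local magnitude, a moment given in dyne-cm converts to SI units and then to moment magnitude through the standard Hanks-Kanamori relation. The sketch below is only that unit bookkeeping; the paper's stress-drop and rupture-velocity values come from its own source model and are not reproduced here.

      # Minimal sketch: convert the quoted seismic moment to SI units and to moment
      # magnitude with the Hanks-Kanamori relation Mw = (2/3) * (log10(M0 [N*m]) - 9.1).
      import math

      m0_dyne_cm = 4.2e22                      # moment quoted in the abstract
      m0_newton_m = m0_dyne_cm * 1e-7          # 1 dyne-cm = 1e-7 N*m
      mw = (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)
      print(f"M0 = {m0_newton_m:.2e} N*m  ->  Mw ~ {mw:.1f}")   # ~4.3, comparable to ML 4.6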

  20. The 1989 earthquake swarm beneath Mammoth Mountain, California: an initial look at the 4 May through 30 September activity

    USGS Publications Warehouse

    Hill, D.P.

    1990-01-01

    Mammoth Mountain is a 50 000- to 200 000-yr-old cumulovolcano standing on the southwestern rim of Long Valley in eastern California. On 4 May 1989, two M=1 earthquakes beneath the south flank of the mountain marked the onset of a swarm that has continued for more than 6 months. In addition to its longevity, noteworthy aspects of this persistent swarm are described. These aspects of the swarm, together with its location along the southern extension of the youthful Mono-Inyo volcanic chain, which last erupted 500 to 600 yr ago, point to a magmatic source for the modest but persistent influx of strain energy into the crust beneath Mammoth Mountain. -from Authors

  1. Fluid-faulting interactions: Fracture-mesh and fault-valve behavior in the February 2014 Mammoth Mountain, California, earthquake swarm

    NASA Astrophysics Data System (ADS)

    Shelly, David R.; Taira, Taka'aki; Prejean, Stephanie G.; Hill, David P.; Dreger, Douglas S.

    2015-07-01

    Faulting and fluid transport in the subsurface are highly coupled processes, which may manifest seismically as earthquake swarms. A swarm in February 2014 beneath densely monitored Mammoth Mountain, California, provides an opportunity to witness these interactions in high resolution. Toward this goal, we employ massive waveform-correlation-based event detection and relative relocation, which quadruples the swarm catalog to more than 6000 earthquakes and produces high-precision locations even for very small events. The swarm's main seismic zone forms a distributed fracture mesh, with individual faults activated in short earthquake bursts. The largest event of the sequence, M 3.1, apparently acted as a fault valve and was followed by a distinct wave of earthquakes propagating ~1 km westward from the updip edge of rupture, 1-2 h later. Late in the swarm, multiple small, shallower subsidiary faults activated with pronounced hypocenter migration, suggesting that a broader fluid pressure pulse propagated through the subsurface.
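
    The correlation-based detection step described above slides a template waveform through continuous data and flags windows with high normalized cross-correlation. The sketch below is a single-channel toy version on synthetic traces; a production workflow would use multi-channel templates and tools such as ObsPy, and nothing here reproduces the authors' pipeline.

      # Minimal sketch: matched-filter (template-matching) detection via normalized
      # cross-correlation on a single synthetic channel.
      import numpy as np

      def normalized_xcorr(template, data):
          """Sliding normalized cross-correlation of template against data."""
          n = len(template)
          t = (template - template.mean()) / (template.std() * n)
          out = np.empty(len(data) - n + 1)
          for i in range(len(out)):
              w = data[i:i + n]
              s = w.std()
              out[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s)
          return out

      rng = np.random.default_rng(1)
      template = np.sin(np.linspace(0, 20 * np.pi, 200)) * np.exp(-np.linspace(0, 5, 200))
      data = rng.normal(0, 0.3, 5000)
      data[1200:1400] += template                                  # bury one "event" in noise
      cc = normalized_xcorr(template, data)
      detections = np.flatnonzero(cc > 8 * np.median(np.abs(cc)))  # simple MAD-style threshold
      print("candidate detection indices:", detections[:5])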

  2. The SAFRR Tsunami Scenario: Improving Resilience for California from a Plausible M9 Earthquake near the Alaska Peninsula

    NASA Astrophysics Data System (ADS)

    Ross, S.; Jones, L.; Wilson, R. I.; Bahng, B.; Barberopoulou, A.; Borrero, J. C.; Brosnan, D.; Bwarie, J.; Geist, E. L.; Johnson, L.; Kirby, S. H.; Knight, W.; Long, K.; Lynett, P. J.; Miller, K.; Mortensen, C. E.; Nicolsky, D.; Oglesby, D. D.; Perry, S. C.; Plumlee, G. S.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Suleimani, E.; Thio, H. K.; Titov, V.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2013-12-01

    The SAFRR Tsunami Scenario models a hypothetical but plausible tsunami, created by an Mw9.1 earthquake occurring offshore from the Alaskan peninsula, and its impacts on the California coast. We present the likely inundation areas, current velocities in key ports and harbors, physical damage and repair costs, economic consequences, environmental impacts, social vulnerability, emergency management, and policy implications for California associated with the tsunami scenario. The intended users are those who must make mitigation decisions before and rapid decisions during future tsunamis. Around a half million people would be present in the scenario's inundation area in residences, businesses, public venues, parks and beaches. Evacuation would likely be ordered for the State of California's maximum mapped tsunami inundation zone, evacuating an additional quarter million people from residences and businesses. Some island and peninsula communities would face particular evacuation challenges because of limited access options and short warning time, caused by the distance between Alaska and California. Evacuations may also be a challenge for certain dependent-care populations. One third of the boats in California's marinas could be damaged or sunk, costing at least $700 million in repairs to boats and docks, and potentially much more to address serious issues due to sediment transport and environmental contamination. Fires would likely start at many sites where fuel and petrochemicals are stored in ports and marinas. Tsunami surges and bores may travel several miles inland up coastal rivers. Debris clean-up and recovery of inundated and damaged areas will take days, months, or years depending on the severity of impacts and the available resources for recovery. The Ports of Los Angeles and Long Beach (POLA/LB) would be shut down for a minimum of two days due to strong currents. Inundation of dry land in the ports would result in $100 million in damage to cargo and additional downtime. The direct exposure of port trade value totals over $1.2 billion, while associated business interruption losses in the California economy could more than triple that value. Other estimated damages include $1.8 billion of property damage and $85 million for highway and railroad repairs. In total, we have estimated repair and replacement costs of almost $3 billion to California marinas, coastal properties and the POLA/LB. These damages could cause $6 billion of business interruption losses in the California economy, but that could be reduced by 80-90% with the implementation of business continuity or resilience strategies. This scenario provides the basis for improving preparedness, mitigation, and continuity planning for tsunamis, which can reduce damage and economic impacts and enhance recovery efforts. Two positive outcomes have already resulted from the SAFRR Tsunami Scenario. Emergency managers in areas where the scenario inundation exceeds the State's maximum inundation zone have been notified and evacuation plans have been updated appropriately. The State has also worked with NOAA's West Coast and Alaska Tsunami Warning Center to modify future message protocols to facilitate effective evacuations in California. While our specific results pertain to California, the lessons learned and our scenario approach can be applied to other regions.

  3. Continuous GPS observations of postseismic deformation following the 16 October 1999 Hector Mine, California, earthquake (Mw 7.1)

    USGS Publications Warehouse

    Hudnut, K.W.; King, N.E.; Galetzka, J.E.; Stark, K.F.; Behr, J.A.; Aspiotes, A.; van Wyk, S.; Moffitt, R.; Dockter, S.; Wyatt, F.

    2002-01-01

    Rapid field deployment of a new type of continuously operating Global Positioning System (GPS) network and data from Southern California Integrated GPS Network (SCIGN) stations that had recently begun operating in the area allow unique observations of the postseismic deformation associated with the 1999 Hector Mine earthquake. Innovative solutions in fieldcraft, devised for the 11 new GPS stations, provide high-quality observations with 1-year time histories on stable monuments at remote sites. We report on our results from processing the postseismic GPS data available from these sites, as well as 8 other SCIGN stations within 80 km of the event (a total of 19 sites). From these data, we analyze the temporal character and spatial pattern of the postseismic transients. Data from some sites display statistically significant time variation in their velocities. Although this is less certain, the spatial pattern of change in the postseismic velocity field also appears to have changed. The pattern now is similar to the pre-Landers (pre-1992) secular field, but laterally shifted and locally at twice the rate. We speculate that a 30 km × 50 km portion of crust (near Twentynine Palms), which was moving at nearly the North American plate rate (to within 3.5 mm/yr of that rate) prior to the 1992 Landers sequence, now is moving along with the crust to the west of it, as though it has been entrained in flow along with the Pacific Plate as a result of the Landers and Hector Mine earthquake sequence. The inboard axis of right-lateral shear deformation (at lower crustal to upper mantle depth) may have jumped 30 km farther into the continental crust at this fault junction that comprises the southern end of the eastern California shear zone.

  4. Formation of left-lateral fractures within the Summit Ridge shear zone, 1989 Loma Prieta, California, earthquake

    SciTech Connect

    Johnson, A.M.; Fleming, R.W.

    1993-12-01

    The 1989 Loma Prieta, California, earthquake is characterized by the lack of major, throughgoing, coseismic, right-lateral faulting along strands of the San Andreas fault zone in the epicentral area. Instead, throughout the Summit Ridge area there are zones of tension cracks and left-lateral fracture zones oriented about N45°W, that is, roughly parallel to the San Andreas fault in this area. The left-lateral fracture zones are enigmatic because their left-lateral slip is opposite to the right-lateral sense of the relative motion between the Pacific and North American plates. We suggest that the enigmatic fractures can be understood if we assume that coseismic deformation was by right-lateral shear across a broad zone, about 0.5 km wide and 4 km long, beneath Summit Ridge. Contrary to most previous reports on the Loma Prieta earthquake, which assert that coseismic, right-lateral ground rupture was restricted to considerable (greater than 4 km) depths in the epicentral area, we find that nearly all the right-lateral offset is represented at the ground surface by the Summit Ridge shear zone.

  5. Late Holocene slip rate and recurrence of great earthquakes on the San Andreas fault in northern California

    SciTech Connect

    Niemi, T.M. (Earth Sciences Associates, Palo Alto, CA); Hall, N.T.

    1992-03-01

    The slip rate of the San Andreas fault 45 km north of San Francisco at Olema, California, is determined by matching offset segments of a buried late Holocene stream channel. Stream deposits from 1,800 ± 78 yr B.P. are offset 42.5 ± 3.5 m across the active (1906) fault trace for a minimum late Holocene slip rate of 24 ± 3 mm/yr. When local maximum coseismic displacements of 4.9 to 5.5 m from the 1906 earthquake are considered with this slip rate, the recurrence of 1906-type earthquakes on the North Coast segment of the San Andreas fault falls within the interval of 221 ± 40 yr. Both comparable coseismic slip in 1906 and similar late Holocene geologic slip rates at the Olema site and a site 145 km northwest at Point Arena (Prentice, 1989) suggest that the North Coast segment behaves as a coherent rupture unit.
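
    The recurrence estimate quoted above is essentially the ratio of 1906-type coseismic slip to the geologic slip rate. A minimal sketch of that arithmetic, using only the numbers given in the abstract:

      # Minimal sketch: slip rate from the offset channel, and recurrence of 1906-type
      # ruptures as coseismic slip divided by slip rate (values from the abstract).
      offset_m, offset_err_m = 42.5, 3.5       # offset of the buried channel (m)
      age_yr, age_err_yr = 1800.0, 78.0        # age of the channel deposits (yr B.P.)

      rate = offset_m / age_yr * 1000.0                                 # mm/yr, ~24
      rate_lo = (offset_m - offset_err_m) / (age_yr + age_err_yr) * 1000.0
      rate_hi = (offset_m + offset_err_m) / (age_yr - age_err_yr) * 1000.0
      print(f"slip rate ~ {rate:.0f} mm/yr (range {rate_lo:.0f}-{rate_hi:.0f})")

      for coseismic_slip_m in (4.9, 5.5):      # 1906 coseismic displacements (m)
          recurrence_yr = coseismic_slip_m / (rate / 1000.0)
          print(f"slip {coseismic_slip_m} m -> recurrence ~ {recurrence_yr:.0f} yr")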

  6. Site response, shallow shear-wave velocity, and damage in Los Gatos, California, from the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Hartzell, S.; Carver, D.; Williams, R.A.

    2001-01-01

    Aftershock records of the 1989 Loma Prieta earthquake are used to calculate site response in the frequency band of 0.5-10 Hz at 24 locations in Los Gatos, California, on the edge of the Santa Clara Valley. Two different methods are used: spectral ratios relative to a reference site on rock and a source/site spectral inversion method. These two methods complement each other and give consistent results. Site amplification factors are compared with surficial geology, thickness of alluvium, shallow shear-wave velocity measurements, and ground deformation and structural damage resulting from the Loma Prieta earthquake. Higher values of site amplification are seen on Quaternary alluvium compared with older Miocene and Cretaceous units of Monterey and Franciscan Formation. However, other more detailed correlations with surficial geology are not evident. A complex pattern of alluvial sediment thickness, caused by crosscutting thrust faults, is interpreted as contributing to the variability in site response and the presence of spectral resonance peaks between 2 and 7 Hz at some sites. Within the range of our field measurements, there is a correlation between lower average shear-wave velocity of the top 30 m and 50% higher values of site amplification. An area of residential homes thrown from their foundations correlates with high site response. This damage may also have been aggravated by local ground deformation. Severe damage to commercial buildings in the business district, however, is attributed to poor masonry construction.
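
    The reference-site spectral-ratio method used above divides the smoothed amplitude spectrum of a record at the site of interest by that of a nearby rock (reference) site for the same aftershock. The sketch below uses synthetic traces and an assumed 100-Hz sampling rate; a real analysis would window the S wave and average over many events.

      # Minimal sketch: reference-site spectral ratio for one aftershock record pair.
      import numpy as np

      fs = 100.0                                  # assumed sampling rate (Hz)
      rng = np.random.default_rng(2)
      rock = rng.normal(size=2048)                # stand-in for the reference (rock) record
      soil = 3.0 * rock + rng.normal(scale=0.5, size=2048)  # crude stand-in for an amplified site

      def smoothed_amp_spectrum(x, fs, width=9):
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          amp = np.abs(np.fft.rfft(x))
          kernel = np.ones(width) / width         # simple running-mean smoothing
          return freqs, np.convolve(amp, kernel, mode="same")

      freqs, a_rock = smoothed_amp_spectrum(rock, fs)
      _, a_soil = smoothed_amp_spectrum(soil, fs)
      ratio = a_soil / a_rock
      band = (freqs >= 0.5) & (freqs <= 10.0)     # the 0.5-10 Hz band used in the study
      print(f"median spectral ratio, 0.5-10 Hz: {np.median(ratio[band]):.1f}")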

  7. Southern California Permanent GPS Geodetic Array: Continuous measurements of regional crustal deformation between the 1992 Landers and 1994 Northridge earthquakes

    USGS Publications Warehouse

    Bock, Y.; Wdowinski, S.; Fang, P.; Zhang, Jiahua; Williams, S.; Johnson, H.; Behr, J.; Genrich, J.; Dean, J.; Van Domselaar, M.; Agnew, D.; Wyatt, F.; Stark, K.; Oral, B.; Hudnut, K.; King, R.; Herring, T.; Dinardo, S.; Young, W.; Jackson, D.; Gurtner, W.

    1997-01-01

    The southern California Permanent GPS Geodetic Array (PGGA) was established in 1990 across the Pacific-North America plate boundary to continuously monitor crustal deformation. We describe the development of the array and the time series of daily positions estimated for its first 10 sites in the 19-month period between the June 28, 1992 (Mw=7.3), Landers and January 17, 1994 (Mw=6.7), Northridge earthquakes. We compare displacement rates at four site locations with those reported by Feigl et al. [1993], which were derived from an independent set of Global Positioning System (GPS) and very long baseline interferometry (VLBI) measurements collected over nearly a decade prior to the Landers earthquake. The velocity differences for three sites 65-100 km from the earthquake's epicenter are on the order of 3-5 mm/yr and are systematically coupled with the corresponding directions of coseismic displacement. The fourth site, 300 km from the epicenter, shows no significant velocity difference. These observations suggest large-scale postseismic deformation with a relaxation time of at least 800 days. The statistical significance of our observations is complicated by our incomplete knowledge of the noise properties of the two data sets; two possible noise models fit the PGGA data equally well, as described in the companion paper by Zhang et al. [this issue]; the pre-Landers data are too sparse and heterogeneous to derive a reliable noise model. Under a fractal white noise model for the PGGA data, we find that the velocity differences for all three sites are statistically different at the 99% significance level. A white noise plus flicker noise model results in significance levels of only 94%, 43%, and 88%. Additional investigations of the pre-Landers data, and analysis of longer spans of PGGA data, could have an important effect on the significance of these results and will be addressed in future work. Copyright 1997 by the American Geophysical Union.

  8. Direct and indirect evidence for earthquakes; an example from the Lake Tahoe Basin, California-Nevada

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Noble, P. J.; Driscoll, N. W.; Kent, G.; Schmauder, G. C.

    2012-12-01

    High-resolution seismic CHIRP data can image direct evidence of earthquakes (i.e., offset strata) beneath lakes and the ocean. Nevertheless, direct evidence often is not imaged due to conditions such as gas in the sediments or steep basement topography. In these cases, indirect evidence for earthquakes (i.e., debris flows) may provide insight into the paleoseismic record. The four sub-basins of the tectonically active Lake Tahoe Basin provide an ideal opportunity to image direct evidence for earthquake deformation and compare it to indirect earthquake proxies. We present results from high-resolution seismic CHIRP surveys in Emerald Bay, Fallen Leaf Lake, and Cascade Lake to constrain the recurrence interval on the West Tahoe-Dollar Point Fault (WTDPF), which was previously identified as potentially the most hazardous fault in the Lake Tahoe Basin. Recently collected CHIRP profiles beneath Fallen Leaf Lake image slide deposits that appear synchronous with slides in other sub-basins. The temporal correlation of slides between multiple basins suggests triggering by events on the WTDPF. If correct, we postulate a recurrence interval for the WTDPF of ~3-4 k.y., indicating that the WTDPF is near the end of its seismic recurrence cycle. In addition, CHIRP data beneath Cascade Lake image strands of the WTDPF that offset the lakefloor as much as ~7 m. The Cascade Lake data combined with onshore LiDAR allowed us to map the geometry of the WTDPF continuously across the southern Lake Tahoe Basin and yielded an improved geohazard assessment.

  9. Chapter E. The Loma Prieta, California, Earthquake of October 17, 1989 - Hydrologic Disturbances

    USGS Publications Warehouse

    Rojstaczer, Stuart A., (Edited By)

    1994-01-01

    Seismic events have long been known to cause changes in the level of oceans, streams, lakes, and the water table. The great San Francisco earthquake of 1906 induced significant hydrologic changes that were qualitatively similar to those changes observed for the Loma Prieta earthquake. What is different is that the hydrologic data sets collected from the Loma Prieta event have enough detail to enable hypotheses on the causes for these changes to be tested. The papers in this chapter document changes in ocean level, stream morphology and flow, water table height, and ground-water flow rates in response to the earthquake. Although hydrologic disturbances may have occurred about 1 hour before the main shock, the papers in this chapter deal strictly with postevent hydrologic changes. The hydrologic responses reported here reflect changes that are not the result of surface rupture. They appear to be the result of landslides, the static displacements induced by the earthquake, and changes in the permeability of the near surface.

  10. Chapter B. The Loma Prieta, California, Earthquake of October 17, 1989 - Public Response

    USGS Publications Warehouse

    Bolton, Patricia A., (Edited By)

    1993-01-01

    Major earthquakes provide seismologists and engineers an opportunity to examine the performance of the Earth and the man-made structures in response to the forces of the quake. So, too, do they provide social scientists an opportunity to delve into human responses evoked by the ground shaking and its physical consequences. The findings from such research can serve to guide the development and application of programs and practices designed to reduce death, injury, property losses, and social disruption in subsequent earthquakes. This chapter contains findings from studies focused mainly on public response to the Loma Prieta earthquake; that is, on the behavior and perceptions of the general population rather than on the activities of specific organizations or on the impact on procedures or policies. A major feature of several of these studies is that the information was collected from the population throughout the Bay area, not just from persons in the most badly damaged communities or who had suffered the greatest losses. This wide range serves to provide comparisons of behavior for those most directly affected by the earthquake with others who were less directly affected by it but still had to consider it very 'close to home.'

  11. Seismicity and crustal structure studies of southern California: tectonic implications from improved earthquake locations

    SciTech Connect

    Corbett, E.J.

    1984-01-01

    The ML 5.1 Santa Barbara earthquake of 13 August 1978 was located 3 km southeast of Santa Barbara at a focal depth of 12.7 km. The temporal-spatial development of the aftershock zone may indicate that the initial rupture plane was considerably smaller than that of the eventual aftershock zone. The aftershock hypocenters outline a nearly horizontal plane (dipping 15° or less) at 13-km depth and the preferred focal mechanism indicates north-over-south thrusting. To further test the decollement hypothesis, Caltech catalog locations were reviewed to determine the depth distribution of earthquakes in the Transverse Ranges. The seismogenic zone is thickest along the southern front of the Transverse Ranges and is thinnest in the southern Mojave Desert and at the east end of the Transverse Ranges. The seismicity of the western Transverse Ranges is typified by north-dipping planar structures and the eastern Transverse Ranges are typified by pervasive seismicity extending down to the floor of the seismogenic zone. Data from a large quarry explosion on Catalina Island were utilized to derive a 3-layer Continental Borderland velocity structure to improve the locations of the 1981 Santa Barbara Island earthquakes. The Santa Barbara Island earthquake (ML 5.3) occurred on September 4, 1981. Aftershocks exhibited a clear northwest-southeast alignment that coincides with the submarine escarpment of the Santa Cruz-Catalina fault and was consistent with focal mechanisms.

  12. Migrating swarms of brittle-failure earthquakes in the lower crust beneath Mammoth Mountain, California

    NASA Astrophysics Data System (ADS)

    Shelly, D. R.; Hill, D. P.

    2011-12-01

    Brittle-failure earthquakes in the lower crust, where high pressures and temperatures would typically promote ductile deformation, are relatively rare but occasionally observed beneath active volcanic centers. When they occur, these earthquakes provide a unique opportunity to constrain volcanic processes in the lower crust, such as fluid injection and migration. Here, we examine recent brief earthquake swarms occurring deep beneath Mammoth Mountain, located on the southwestern margin of Long Valley Caldera. Brief lower-crustal swarms were observed beneath Mammoth in 2006, 2008, and 2009. These brittle-failure earthquakes at depths of 19 to 30 km are likely occurring within the more mafic mid- to lower crust, which can remain in the brittle domain to temperatures as high as ~700 degrees C. Above these deep events are two distinct shallower zones of seismicity. Mid-crustal, long-period earthquakes between 10 and 19 km are presumably occurring within the silicic crust, but below the rheological transition from brittle to plastic behavior, which is expected to occur at temperatures of ~350 to 400 degrees C. Above this transition, shallow brittle-failure earthquakes occur in the upper 8 kilometers of the silicic crust. We focus primarily on a deep swarm that occurred September 29-30, 2009, which is the best recorded of the recent lower-crustal swarms. To maximally illuminate the spatial-temporal progression of seismicity, we supplement the earthquake catalog by identifying additional small events with similar waveforms in the continuous data, achieving up to a 10-fold increase in the number of locatable events. We then relocate all events, using cross-correlation and a double-difference algorithm. We find that the 2009 swarm exhibits systematically decelerating upward migration, with hypocenters shallowing from 21 to 19 km depth over approximately 12 hours. We also observe substantial diversity in the pattern of P-wave first motions, where events with very similar hypocenters and origin times exhibit nearly opposite patterns of compressional and dilational first motions at network seismometers. These lower-crustal, brittle-failure earthquakes are similar in many respects to those that occurred beneath the Sierra Nevada crest in the vicinity of Lake Tahoe in late 2003, which Smith et al. (Science, 2004) concluded were associated with a magmatic intrusion into the lower crust. The 2009 Mammoth sequence, however, is much shorter in duration (1-2 days compared with several months), faster migrating, and has no detectable accompanying geodetic signal. This suggests that the events may be triggered by upward diffusion of a lower viscosity fluid. CO2 is a likely candidate, given its abundant release in the area at the surface. Thus, our preferred hypothesis is that this earthquake swarm is a symptom of ascending high-pressure CO2, perhaps reflecting slip induced on pre-existing fractures by reducing the effective normal stress. Indeed, the concentration of earthquakes with similar epicenters at a wide range of depths beneath Mammoth Mountain suggests that this may be a preferred pathway for CO2, and occasionally melt, to travel upward through the crust.
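
    One common way to quantify the kind of fluid-driven migration proposed above is a pressure-diffusion "triggering front," r(t) = sqrt(4*pi*D*t), in the style of Shapiro and coworkers. The back-of-the-envelope below uses the reported ~2 km of upward migration over ~12 hours; the parameterization itself is an illustrative assumption, not the authors' analysis.

      # Minimal sketch: hydraulic diffusivity implied by a diffusion-style triggering
      # front r(t) = sqrt(4*pi*D*t) matched to ~2 km of migration in ~12 hours.
      import math

      migration_m = 2000.0              # hypocenters shallowing from ~21 km to ~19 km
      duration_s = 12 * 3600.0          # over roughly 12 hours

      D = migration_m**2 / (4.0 * math.pi * duration_s)
      print(f"implied hydraulic diffusivity ~ {D:.1f} m^2/s")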

  13. Earthquake hazards of active blind-thrust faults under the central Los Angeles basin, California

    NASA Astrophysics Data System (ADS)

    Shaw, John H.; Suppe, John

    1996-04-01

    We document several blind-thrust faults under the Los Angeles basin that, if active and seismogenic, are capable of generating large earthquakes (M = 6.3 to 7.3). Pliocene to Quaternary growth folds imaged in seismic reflection profiles record the existence, size, and slip rates of these blind faults. The growth structures have shapes characteristic of fault-bend folds above blind thrusts, as demonstrated by balanced kinematic models, geologic cross sections, and axial-surface maps. We interpret the Compton-Los Alamitos trend as a growth fold above the Compton ramp, which extends along strike from west Los Angeles to at least the Santa Ana River. The Compton thrust is part of a larger fault system, including a decollement and ramps beneath the Elysian Park and Palos Verdes trends. The Cienegas and Coyote Hills growth folds overlie additional blind thrusts in the Elysian Park trend that are not closely linked to the Compton ramp. Analysis of folded Pliocene to Quaternary strata yields slip rates of 1.4 ± 0.4 mm/yr on the Compton thrust and 1.7 ± 0.4 mm/yr on a ramp beneath the Elysian Park trend. Assuming that slip is released in large earthquakes, we estimate magnitudes of 6.3 to 6.8 for earthquakes on individual ramp segments based on geometric segment sizes derived from axial surface maps. Multiple-segment ruptures could yield larger earthquakes (M = 6.9 to 7.3). Relations among magnitude, coseismic displacement, and slip rate yield an average recurrence interval of 380 years for single-segment earthquakes and a range of 400 to 1300 years for multiple-segment events. If these newly documented blind thrust faults are active, they will contribute substantially to the seismic hazards in Los Angeles because of their locations directly beneath the metropolitan area.
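
    The magnitude and recurrence estimates above follow from routine seismic-moment bookkeeping: M0 = mu * A * D for a segment of area A and average slip D, and recurrence ~ D / (slip rate). In the sketch below the rigidity, segment area, and average slip are illustrative assumptions; only the 1.4 mm/yr Compton slip rate is taken from the abstract.

      # Minimal sketch: magnitude and recurrence from segment area, average slip, and
      # slip rate. Area and slip are hypothetical; mu = 3e10 Pa is a standard assumption.
      import math

      mu = 3.0e10                      # shear modulus (Pa), assumed
      area_km2 = 400.0                 # hypothetical ramp-segment area (km^2)
      avg_slip_m = 0.6                 # hypothetical average coseismic slip (m)
      slip_rate_mm_yr = 1.4            # Compton thrust slip rate (from the abstract)

      m0 = mu * (area_km2 * 1e6) * avg_slip_m                # seismic moment (N*m)
      mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)              # Hanks-Kanamori
      recurrence_yr = avg_slip_m / (slip_rate_mm_yr * 1e-3)  # slip budget / slip rate
      print(f"Mw ~ {mw:.1f}, recurrence ~ {recurrence_yr:.0f} yr")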

  14. Faulting apparently related to the 1994 Northridge, California, earthquake and possible co-seismic origin of surface cracks in Potrero Canyon, Los Angeles County, California

    USGS Publications Warehouse

    Catchings, R.D.; Goldman, M.R.; Lee, W.H.K.; Rymer, M.J.; Ponti, D.J.

    1998-01-01

    Apparent southward-dipping, reverse-fault zones are imaged to depths of about 1.5 km beneath Potrero Canyon, Los Angeles County, California. Based on their orientation and projection to the surface, we suggest that the imaged fault zones are extensions of the Oak Ridge fault. Geologic mapping by others and correlations with seismicity studies suggest that the Oak Ridge fault is the causative fault of the 17 January 1994 Northridge earthquake (Northridge fault). Our seismically imaged faults may be among several faults that collectively comprise the Northridge thrust fault system. Unusually strong shaking in Potrero Canyon during the Northridge earthquake may have resulted from focusing of seismic energy or co-seismic movement along existing, related shallow-depth faults. The strong shaking produced ground-surface cracks and sand blows distributed along the length of the canyon. Seismic reflection and refraction images show that shallow-depth faults may underlie some of the observed surface cracks. The relationship between observed surface cracks and imaged faults indicates that some of the surface cracks may have developed from nontectonic alluvial movement, but others may be fault related. Immediately beneath the surface cracks, P-wave velocities are unusually low (<400 m/sec), and there are velocity anomalies consistent with a seismic reflection image of shallow faulting to depths of at least 100 m. On the basis of velocity data, we suggest that unconsolidated soils (<800 m/sec) extend to depths of about 15 to 20 m beneath our datum (<25 m below ground surface). The underlying rocks range in velocity from about 1000 to 5000 m/sec in the upper 100 m. This study illustrates the utility of high-resolution seismic imaging in assessing local and regional seismic hazards.

  15. Multi-sensor Integration of Space and Ground Observations of Pre-earthquake Anomalies Associated with M6.0, August 24, 2014 Napa, California

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Tramutoli, Valerio; Pulinets, Sergey; Liu, Tiger; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Petrov, Leonid; Kafatos, Menas

    2015-04-01

    We integrate multiple spaceborne and ground sensors for monitoring pre-earthquake geophysical anomalies that can provide significant early notification for earthquakes larger than M5.5 worldwide. The latest M6.0 event of August 24, 2014, in South Napa, California, generated pre-earthquake signatures during our ongoing tests for California, and an experimental warning was documented about 17 days in advance. We process, in a controlled environment, different satellite and ground data for California (and several other test areas) by using: a) data from the NPOES sensors recording OLR (Outgoing Longwave Radiation) in the infrared; b) GPS/GNSS and FORMOSAT (GPS/TEC); c) Earth Observing System assimilation models from NASA; d) ground-based gas observations and meteorological data; and e) TIR (Thermal Infrared) data from geostationary satellites (GOES). On August 4th, we detected (prospectively) a large anomaly of the OLR transient field at the top of the atmosphere (TOA) over Northern California. The location was shifted in the northeast direction about 150 km from the future epicentral area. Compared to the reference field of August 2004 to 2014, the hotspot anomaly was the largest energy flux anomaly over the entire continental United States at this time. Based on the temporal and spatial estimates of the anomaly, on August 4th we issued an internal warning for an M5.5+ earthquake in Northern California within the next 1-4 weeks. TIR retrospective analysis showed significant (spatially extended and temporally persistent) sequences of TIR anomalies starting August 1st in the future epicentral area and approximately in the same area affected by OLR anomalies in the following days. GPS/TEC retrospective analysis based on GIM and TGIM products shows anomalous TEC variations 1-3 days before the event over a region north of the Napa earthquake epicenter. The calculated index of atmospheric chemical potential, based on the NASA GEOS-5 numerical assimilation weather model, indicates abnormal variations near the epicentral area in the days before the quake. Our real-time and post-event integration of several atmospheric parameters from satellite and ground observations during the M6.0 earthquake of August 24, 2014, in Napa, California, demonstrated the synergy of related variations of these parameters, implying their connection with the earthquake preparation process.

  16. Three dimensional images of geothermal systems: local earthquake P-wave velocity tomography at the Hengill and Krafla geothermal areas, Iceland, and The Geysers, California

    USGS Publications Warehouse

    Julian, B.R.; Prisk, A.; Foulger, G.R.; Evans, J.R.

    1993-01-01

    Local earthquake tomography - the use of earthquake signals to form a 3-dimensional structural image - is now a mature geophysical analysis method, particularly suited to the study of geothermal reservoirs, which are often seismically active and severely laterally inhomogeneous. Studies have been conducted of the Hengill (Iceland), Krafla (Iceland) and The Geysers (California) geothermal areas. All three systems are exploited for electricity and/or heat production, and all are highly seismically active. Tomographic studies of volumes a few km in dimension were conducted for each area using the method of Thurber (1983).
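
    At its core, local earthquake tomography repeatedly solves a large damped linear system that maps travel-time residuals to slowness perturbations along ray paths. The sketch below shows one generic damped least-squares step with synthetic numbers; it is not the Thurber (1983) simultaneous-inversion code used in these studies, which also relocates the earthquakes.

      # Minimal sketch: one damped least-squares tomography update, solving
      # min ||G m - d||^2 + eps^2 ||m||^2 via an augmented least-squares system.
      import numpy as np

      rng = np.random.default_rng(3)
      n_rays, n_blocks = 200, 50
      G = rng.random((n_rays, n_blocks))                    # synthetic ray lengths through blocks
      m_true = rng.normal(0, 0.01, n_blocks)                # true slowness perturbations
      d = G @ m_true + rng.normal(0, 0.005, n_rays)         # noisy travel-time residuals

      eps = 0.5                                             # damping parameter
      G_aug = np.vstack([G, eps * np.eye(n_blocks)])
      d_aug = np.concatenate([d, np.zeros(n_blocks)])
      m_est, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)
      print("recovered slowness perturbations (first 5):", np.round(m_est[:5], 4))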

  17. Long Return Periods for Earthquakes in San Gorgonio Pass and Implications for Large Ruptures of the San Andreas Fault in Southern California

    NASA Astrophysics Data System (ADS)

    Yule, J.; McBurnett, P.; Ramzan, S.

    2011-12-01

    The largest discontinuity in the surface trace of the San Andreas fault occurs in southern California at San Gorgonio Pass. Here, San Andreas motion moves through a 20 km-wide compressive stepover on the dextral-oblique-slip thrust system known as the San Gorgonio Pass fault zone. This thrust-dominated system is thought to rupture during very large San Andreas events that also involve strike-slip fault segments north and south of the Pass region. A wealth of paleoseismic data document that the San Andreas fault segments on either side of the Pass, in the San Bernardino/Mojave Desert and Coachella Valley regions, rupture on average every ~100 yrs and ~200 yrs, respectively. In contrast, we report here a notably longer return period for ruptures of the San Gorgonio Pass fault zone. For example, features exposed in trenches at the Cabezon site reveal that the most recent earthquake occurred 600-700 yrs ago (this and other ages reported here are constrained by C-14 calibrated ages from charcoal). The rupture at Cabezon broke a 10 m-wide zone of east-west striking thrusts and produced a >2 m-high scarp. Slip during this event is estimated to be >4.5 m. Evidence for a penultimate event was not uncovered but presumably lies beneath ~1000 yr-old strata at the base of the trenches. In Millard Canyon, 5 km to the west of Cabezon, the San Gorgonio Pass fault zone splits into two splays. The northern splay is expressed by 2.5 ± 0.7 m and 5.0 ± 0.7 m scarps in alluvial terraces constrained to be ~1300 and ~2500 yrs old, respectively. The scarp on the younger, low terrace postdates terrace abandonment ~1300 yrs ago and probably correlates with the 600-700 yr-old event at Cabezon, though we cannot rule out that a different event produced the northern Millard scarp. Trenches excavated in the low terrace reveal growth folding and secondary faulting and clear evidence for a penultimate event ~1350-1450 yrs ago, during alluvial deposition prior to the abandonment of the low terrace. Subtle evidence for a third event is poorly constrained by age data to have occurred between 1600 and 2500 yrs ago. The southern splay at Millard Canyon forms a 1.5 ± 0.1 m scarp in an alluvial terrace that is inset into the lowest terrace at the northern Millard site, and therefore must be < ~1300 yrs old. Slip on this fault probably occurred during the most recent rupture in the Pass. In summary, we think that the most recent earthquake occurred 600-700 yrs ago and generated ~6 m of slip on the San Gorgonio Pass fault zone. The evidence for two older earthquakes is less complete but suggests that they are similar in style and magnitude to the most recent event. The available data therefore suggest that the San Gorgonio Pass fault zone has produced three large (~6 m) events in the last ~2000 yrs, a return period of ~700 yrs assuming that the next rupture is imminent. We prefer a model whereby a majority of San Andreas fault ruptures end as they approach the Pass region from the north or the south (like the Wrightwood event of A.D. 1812 and possibly the Coachella Valley event of ~A.D. 1680). Relatively rare (once-per-millennia?), through-going San Andreas events break the San Gorgonio Pass fault zone and produce the region's largest earthquakes.

  18. Slip partitioning of the Calaveras Fault, California, and prospects for future earthquakes

    USGS Publications Warehouse

    Oppenheimer, D.H.; Bakun, W.H.; Lindh, A.G.

    1990-01-01

    Examination of main shock and microearthquake data from the Calaveras Fault during the last 20 years reveals that main shock hypocenters occur at depths of 8-9 km near the base of the zone of microearthquakes. Microseismicity extends between depths of 4 and 10 km and defines zones of concentrated microseismicity and aseismic zones. Estimates of the fault regions which slipped during the Coyote Lake and Morgan Hill earthquakes as derived from seismic radiation coincide with zones which are otherwise aseismic. We propose that these persistent aseismic zones represent stuck patches which slip only during moderate earthquakes. From the pattern of microearthquake locations we recognize six aseismic zones where we expect future main shocks will rupture the Calaveras Fault. -from Authors

  19. The 1987 Whittier Narrows earthquake in the Los Angeles metropolitan area, California

    USGS Publications Warehouse

    Hauksson, E.; Jones, L.M.; Davis, T.L.; Hutton, L.K.; Brady, A.G.; Reasenberg, P.A.; Michael, A.J.; Yerkes, R.F.; Williams, Pat; Reagor, G.; Stover, C.W.; Bent, A.L.; Shakal, A.K.; Etheredge, E.; Porcella, R.L.; Bufe, C.G.; Johnston, M.J.S.; Cranswick, E.

    1988-01-01

    The Whittier Narrows earthquake sequence (local magnitude, ML=5.9), which caused over $358 million in damage, indicates that earthquake hazards in the Los Angeles metropolitan area may be underestimated. The sequence ruptured a previously unidentified thrust fault that may be part of a large system of thrust faults that extends across the entire east-west length of the northern margin of the Los Angeles basin. Peak horizontal accelerations from the main shock, which were measured at ground level and in structures, were as high as 0.6g (where g is the acceleration of gravity at sea level) within 50 kilometers of the epicenter. The distribution of the modified Mercalli intensity VII reflects a broad north-south elongated zone of damage that is approximately centered on the main shock epicenter.

  20. The Cape Mendocino, California, earthquakes of April 1992: Subduction at the triple junction

    USGS Publications Warehouse

    Oppenheimer, D.; Beroza, G.; Carver, G.; Dengler, L.; Eaton, J.; Gee, L.; Gonzalez, F.; Jayko, A.; Li, W.H.; Lisowski, M.; Magee, M.; Marshall, G.; Murray, M.; McPherson, R.; Romanowicz, B.; Satake, K.; Simpson, R.; Somerville, P.; Stein, R.; Valentine, D.

    1993-01-01

    The 25 April 1992 magnitude 7.1 Cape Mendocino thrust earthquake demonstrated that the North America-Gorda plate boundary is seismogenic and illustrated hazards that could result from much larger earthquakes forecast for the Cascadia region. The shock occurred just north of the Mendocino Triple Junction and caused strong ground motion and moderate damage in the immediate area. Rupture initiated onshore at a depth of 10.5 kilometers and propagated up-dip and seaward. Slip on steep faults in the Gorda plate generated two magnitude 6.6 aftershocks on 26 April. The main shock did not produce surface rupture on land but caused coastal uplift and a tsunami. The emerging picture of seismicity and faulting at the triple junction suggests that the region is likely to continue experiencing significant seismicity.

  1. Non-shear focal mechanisms of earthquakes at The Geysers, California and Hengill, Iceland, geothermal areas

    USGS Publications Warehouse

    Julian, B.R.; Miller, A.D.; Foulger, G.R.

    1993-01-01

    Several thousand earthquakes were recorded in each area. We report an initial investigation of the focal mechanisms based on P-wave polarities. Distortion by complicated three-dimensional crustal structure was minimized using tomographically derived three-dimensional crustal models. Events with explosive and implosive source mechanisms, suggesting cavity opening and collapse, have been tentatively identified at The Geysers. The new data show that some of these events do not fit the model of tensile cracking accompanied by isotropic pore pressure decreases that was suggested in earlier studies, but that they may instead involve a combination of explosive and shear processes. However, the confirmation of earthquakes dominated by explosive components supports the model that the events are caused by crack opening induced by thermal contraction of the heat source.

  2. Triggered surface slips in southern California associated with the 2010 El Mayor-Cucapah, Baja California, Mexico, earthquake

    USGS Publications Warehouse

    Rymer, Michael J.; Treiman, Jerome A.; Kendrick, Katherine J.; Lienkaemper, James J.; Weldon, Ray J.; Bilham, Roger; Wei, Meng; Fielding, Eric J.; Hernandez, Janis L.; Olson, Brian P.E.; Irvine, Pamela J.; Knepprath, Nichole; Sickler, Robert R.; Tong, Xiaopeng; Siem, Martin E.

    2011-01-01

    Triggered slip in the Yuha Desert area occurred along more than two dozen faults, only some of which were recognized before the April 4, 2010, El Mayor-Cucapah earthquake. From east to northwest, slip occurred in seven general areas: (1) in the Northern Centinela Fault Zone (newly named), (2) along unnamed faults south of Pinto Wash, (3) along the Yuha Fault (newly named), (4) along both east and west branches of the Laguna Salada Fault, (5) along the Yuha Well Fault Zone (newly revised name) and related faults between it and the Yuha Fault, (6) along the Ocotillo Fault (newly named) and related faults to the north and south, and (7) along the southeasternmost section of the Elsinore Fault. Faults that slipped in the Yuha Desert area include northwest-trending right-lateral faults, northeast-trending left-lateral faults, and north-south faults, some of which had dominantly vertical offset. Triggered slip along the Ocotillo and Elsinore Faults appears to have occurred only in association with the June 14, 2010 (Mw5.7), aftershock. This aftershock also resulted in slip along other faults near the town of Ocotillo. Triggered offset on faults in the Yuha Desert area was mostly less than 20 mm, with three significant exceptions, including slip of about 50–60 mm on the Yuha Fault, 40 mm on a fault south of Pinto Wash, and about 85 mm on the Ocotillo Fault. All triggered slips in the Yuha Desert area occurred along preexisting faults, whether previously recognized or not.

  3. Magmatic resurgence in Long Valley caldera, California: Possible cause of the 1980 Mammoth Lakes earthquakes

    USGS Publications Warehouse

    Savage, J.C.; Clark, M.M.

    1982-01-01

    Changes in elevation between 1975 and October 1980 along a leveling line across the Long Valley caldera indicate a broad (half-width, 15 kilometers) uplift (maximum, 0.25 meter) centered on the old resurgent dome. This uplift is consistent with reinflation of a magma reservoir at a depth of about 10 kilometers. Stresses generated by this magmatic resurgence may have caused the sequence of four magnitude 6 earthquakes near Mammoth Lakes in May 1980. Copyright © 1982 AAAS.

  4. Chapter A. The Loma Prieta, California, Earthquake of October 17, 1989 - Loss Estimation and Procedures

    USGS Publications Warehouse

    Tubbesing, Susan K., (Edited By)

    1994-01-01

    This Professional Paper includes a collection of papers on subjects ranging from evaluation of building safety, to human injuries, to correlation of ground deformation with building damage. What these papers share is a common goal to improve the tools available to the research community to measure the nature, extent, and causes of damage and losses due to earthquakes. These measurement tools are critical to reducing future loss.

  5. Chapter D. The Loma Prieta, California, Earthquake of October 17, 1989 - Recovery, Mitigation, and Reconstruction

    USGS Publications Warehouse

    Nigg, Joanne M., (Edited By)

    1998-01-01

    The papers in this chapter reflect the broad spectrum of issues that arise following a major damaging urban earthquake: the regional economic consequences, rehousing problems, reconstruction strategies and policies, and opportunities for mitigation before the next major seismic event. While some of these papers deal with structural or physical science topics, their significant social and policy implications make them relevant for improving our understanding of the processes and dynamics that take place during the recovery period.

  6. Postseismic relaxation following the 1994 Mw6.7 Northridge earthquake, southern California

    USGS Publications Warehouse

    Savage, J.C.; Svarc, J.L.

    2010-01-01

    We have reexamined the postearthquake deformation of a 65 km long linear array of 11 geodetic monuments extending north–south across the rupture (reverse slip on a blind thrust dipping 40°S–20°W) associated with the 1994 Mw6.7 Northridge earthquake. That array was surveyed frequently in the interval from 4 to 2650 days after the earthquake. The velocity of each of the monuments over the interval 100–2650 days postearthquake appears to be constant. Moreover, the profile of those velocities along the length of the array is very similar to a preearthquake velocity profile for a nearby, similarly oriented array. We take this to indicate that significant postseismic relaxation is evident only in the first 100 days postseismic and that the subsequent linear trend is typical of the interseismic interval. The postseismic relaxation (postseismic displacement less displacement that would have occurred at the preseismic velocity) is found to be almost wholly parallel (N70°W) to the nearby (40 km) San Andreas Fault with only negligible relaxation in the direction of coseismic slip (N20°E) on the Northridge rupture. We suggest that the N70°W relaxation is caused by aseismic, right-lateral slip at depth on the San Andreas Fault, excess slip presumably triggered by the Northridge rupture. Finally, using the Dieterich (1994) stress-seismicity relation, we show that return to the preseismic deformation rate within 100 days following the earthquake could be consistent with the cumulative number of M > 2.5 earthquakes observed following the main shock.
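
    The closing argument above invokes the Dieterich (1994) stress-seismicity relation. A commonly quoted form for the rate response to a positive stress step is R(t) = r / [(exp(-dCFS/(A*sigma)) - 1) * exp(-t/t_a) + 1]; the sketch below evaluates it with illustrative parameter values that are assumptions, not those adopted in the study.

      # Minimal sketch: seismicity-rate response to a coseismic stress step in a commonly
      # used form of the Dieterich (1994) relation. All parameter values are illustrative.
      import numpy as np

      r_bg = 0.05            # background rate of M > 2.5 events per day (assumed)
      dcfs = 0.1e6           # coseismic stress change (Pa), assumed positive
      a_sigma = 0.04e6       # A*sigma (Pa), assumed
      t_a = 100.0            # aftershock relaxation time (days), assumed

      t = np.linspace(0.1, 400.0, 4000)                     # days after the mainshock
      rate = r_bg / ((np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)
      cumulative = np.cumsum(rate) * (t[1] - t[0])          # expected cumulative count
      i100 = np.searchsorted(t, 100.0)
      print(f"rate at 100 days / background: {rate[i100] / r_bg:.2f}")
      print(f"expected events in 100 days: {cumulative[i100]:.0f} (background would give {r_bg * 100:.0f})")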

  7. Paleoearthquakes on the southern San Andreas Fault, Wrightwood, California, 3000 to 1500 B.C.: A new method for evaluating paleoseismic evidence and earthquake horizons

    USGS Publications Warehouse

    Scharer, K.M.; Weldon, R.J., II; Fumal, T.E.; Biasi, G.P.

    2007-01-01

    We present evidence of 11-14 earthquakes that occurred between 3000 and 1500 B.C. on the San Andreas fault at the Wrightwood paleoseismic site. Earthquake evidence is presented in a novel form in which we rank (high, moderate, poor, or low) the quality of all evidence of ground deformation, which we call "event indicators." Event indicator quality reflects our confidence that the morphologic and sedimentologic evidence can be attributed to a ground-deforming earthquake and that the earthquake horizon is accurately identified by the morphology of the feature. In four vertical meters of section exposed in ten trenches, we document 316 event indicators attributable to 32 separate stratigraphic horizons. Each stratigraphic horizon is evaluated based on the sum of ranks (Rs), maximum rank (Rm), average rank (Ra), number of observations (Obs), and sum of higher-quality event indicators (Rs>1). Of the 32 stratigraphic horizons, 14 contain 83% of the event indicators and are qualified based on the number and quality of event indicators; the remaining 18 do not have satisfactory evidence for further consideration. Eleven of the 14 stratigraphic horizons have sufficient number and quality of event indicators to be qualified as "probable" to "very likely" earthquakes; the remaining three stratigraphic horizons are associated with somewhat ambiguous features and are qualified as "possible" earthquakes. Although no single measurement defines an obvious threshold for designation as an earthquake horizon, Rs, Rm, and Rs>1 correlate best with the interpreted earthquake quality. Earthquake age distributions are determined from radiocarbon ages of peat samples using a Bayesian approach to layer dating. The average recurrence interval for the 10 consecutive and highest-quality earthquakes is 111 (93-131) years and individual intervals are ±50% of the average. In comparison with the previously published record of 14-15 earthquakes between A.D. 500 and the present, we find no evidence to suggest significant variations in the average recurrence rate at Wrightwood during the past 5000 years.
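
    The per-horizon measures described above (Obs, Rs, Rm, Ra, and Rs>1) are simple aggregates over ranked event indicators. The sketch below computes them for a handful of hypothetical indicators; the numeric mapping of quality ranks (high=3, moderate=2, poor=1, low=0) and the sample data are assumptions, not the Wrightwood dataset.

      # Minimal sketch: aggregate ranked event indicators by stratigraphic horizon.
      from collections import defaultdict

      rank_value = {"high": 3, "moderate": 2, "poor": 1, "low": 0}   # assumed mapping
      indicators = [("H1", "high"), ("H1", "moderate"), ("H1", "poor"),
                    ("H2", "low"), ("H2", "poor"),
                    ("H3", "high"), ("H3", "high"), ("H3", "moderate"), ("H3", "low")]

      by_horizon = defaultdict(list)
      for horizon, quality in indicators:
          by_horizon[horizon].append(rank_value[quality])

      for horizon, ranks in sorted(by_horizon.items()):
          rs = sum(ranks)                                   # sum of ranks (Rs)
          rm = max(ranks)                                   # maximum rank (Rm)
          ra = rs / len(ranks)                              # average rank (Ra)
          rs_gt1 = sum(r for r in ranks if r > 1)           # sum of higher-quality ranks (Rs>1)
          print(f"{horizon}: Obs={len(ranks)} Rs={rs} Rm={rm} Ra={ra:.2f} Rs>1={rs_gt1}")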

  8. Does Geothermal Energy Production Cause Earthquakes in the Geysers Region of Northern California?

    NASA Astrophysics Data System (ADS)

    Grove, K.; Bailey, C.; Sotto, M.; Yu, M.; Cohen, M.

    2003-12-01

    The Geysers region is located in Sonoma County, several hours north of San Francisco. At this location, hot magma beneath the surface heats ground water and creates steam that is used to make electricity. Since 1997, 8 billion gallons of treated wastewater have been injected into the ground, where the water becomes hot and increases the amount of thermal energy that can be produced. Frequent micro-earthquakes (up to magnitude 4.5) occur in the region and seem to be related to the geothermal energy production. The region is mostly uninhabited, except for several small towns such as Anderson Springs, where people have been extremely concerned about potential damage to their property. The energy companies are planning to double the amount of wastewater injected into the ground and to increase their energy production. Geothermal energy is important because it is better for the environment than burning coal, oil, or gas. Air and water pollution, which have negative impacts on living things, are reduced compared to power plants that generate electricity by burning fossil fuels. We have studied the frequency and magnitude of earthquakes that have occurred in the region since the early 1970s and that are occurring today. We used software to analyze the earthquakes and to look for patterns related to water injection and energy production. We are interested in exploring ways that energy production can be continued without having negative impacts on the people in the region.

  9. Loss estimates for a Puente Hills blind-thrust earthquake in Los Angeles, California

    USGS Publications Warehouse

    Field, E.H.; Seligson, H.A.; Gupta, N.; Gupta, V.; Jordan, T.H.; Campbell, K.W.

    2005-01-01

    Based on OpenSHA and HAZUS-MH, we present loss estimates for an earthquake rupture on the recently identified Puente Hills blind-thrust fault beneath Los Angeles. Given a range of possible magnitudes and ground motion models, and presuming a full fault rupture, we estimate the total economic loss to be between $82 and $252 billion. This range is not only considerably higher than a previous estimate of $69 billion, but also implies the event would be the costliest disaster in U.S. history. The analysis has also provided the following predictions: 3,000-18,000 fatalities, 142,000-735,000 displaced households, 42,000-211,000 in need of short-term public shelter, and 30,000-99,000 tons of debris generated. Finally, we show that the choice of ground motion model can be more influential than the earthquake magnitude, and that reducing this epistemic uncertainty (e.g., via model improvement and/or rejection) could reduce the uncertainty of the loss estimates by up to a factor of two. We note that a full Puente Hills fault rupture is a rare event (once every ~3,000 years), and that other seismic sources pose significant risk as well. © 2005, Earthquake Engineering Research Institute.

  10. Chapter A. The Loma Prieta, California, Earthquake of October 17, 1989 - Main Shock Characteristics

    USGS Publications Warehouse

    Spudich, Paul, (Edited By)

    1996-01-01

    The October 17, 1989, Loma Prieta, Calif., earthquake (0004:15.2 G.m.t. October 18; lat 37.036° N., long 121.883° W.; 19-km depth) had a local magnitude (ML) of about 6.7, a surface-wave magnitude (MS) of 7.1, a seismic moment of 2.2×10^19 to 3.5×10^19 N·m, a source duration of 6 to 15 s, and an average stress drop of at least 50 bars. Slip occurred on a dipping fault surface about 35 km long and was largely confined to a depth of about 7 to 20 km. The slip vector had a large vertical component, and slip was distributed in two main regions situated northwest and southeast of the hypocenter. This slip distribution caused about half of the earthquake's energy to be focused toward the urbanized San Francisco Bay region, while the other half was focused toward the southeast. Had the rupture initiated at the southeast end of the aftershock zone, shaking in the bay region would have been both longer and stronger. These source parameters suggest that the earthquake was not a typical shallow San Andreas-type event but a deeper event on a different fault with a recurrence interval of many hundreds of years. Therefore, the potential for a damaging shallow event on the San Andreas fault in the Santa Cruz Mountains may still exist.

  11. A century of oilfield operations and earthquakes in the greater Los Angeles Basin, southern California

    USGS Publications Warehouse

    Hauksson, Egill; Goebel, Thomas; Ampuero, Jean-Paul; Cochran, Elizabeth S.

    2015-01-01

    Most of the seismicity in the Los Angeles Basin (LA Basin) occurs at depth below the sediments and is caused by transpressional tectonics related to the big bend in the San Andreas fault. However, some of the seismicity could be associated with fluid extraction or injection in oil fields that have been in production for almost a century and cover ∼17% of the basin. First, the influence of industry operations was evaluated by analyzing seismicity characteristics, including normalized seismicity rates, focal depths, and b-values, but no significant difference was found in seismicity characteristics inside and outside the oil fields. In addition, to identify possible temporal correlations, the seismicity and available monthly fluid extraction and injection volumes since 1977 were analyzed. Second, the production and deformation history of the Wilmington oil field were used to evaluate whether other oil fields are likely to experience similar surface deformation in the future. Third, the maximum earthquake magnitudes of events within the perimeters of the oil fields were analyzed to see whether they correlate with total net injected volumes, as suggested by previous studies. Similarly, maximum magnitudes were examined to see whether they exhibit an increase with net extraction volume. Overall, no obvious previously unidentified induced earthquakes were found, and the management of balanced production and injection of fluids appears to reduce the risk of induced-earthquake activity in the oil fields.

  12. Migrating swarms of brittle-failure earthquakes in the lower crust beneath Mammoth Mountain, California

    USGS Publications Warehouse

    Shelly, D.R.; Hill, D.P.

    2011-01-01

    Brittle-failure earthquakes in the lower crust, where high pressures and temperatures would typically promote ductile deformation, are relatively rare but occasionally observed beneath active volcanic centers. Where they occur, these earthquakes provide a rare opportunity to observe volcanic processes in the lower crust, such as fluid injection and migration, which may induce brittle faulting under these conditions. Here, we examine recent short-duration earthquake swarms deep beneath the southwestern margin of Long Valley Caldera, near Mammoth Mountain. We focus in particular on a swarm that occurred September 29-30, 2009. To maximally illuminate the spatial-temporal progression, we supplement catalog events by detecting additional small events with similar waveforms in the continuous data, achieving up to a 10-fold increase in the number of locatable events. We then relocate all events, using cross-correlation and a double-difference algorithm. We find that the 2009 swarm exhibits systematically decelerating upward migration, with hypocenters shallowing from 21 to 19 km depth over approximately 12 hours. This relatively high migration rate, combined with a modest maximum magnitude of 1.4 in this swarm, suggests the trigger might be ascending CO2 released from underlying magma.
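
    As a rough illustration of the waveform-similarity detection step described above (supplementing catalog events with smaller events that have matching waveforms), here is a minimal single-channel matched-filter sketch. The threshold, synthetic data, and function names are illustrative assumptions only; a production workflow such as the one in the study would use multi-station, multi-component correlation before relocation.

```python
import numpy as np

def matched_filter(continuous, template, threshold=0.8):
    """Return (sample_offset, correlation) pairs where a waveform template
    matches the continuous record above the given correlation threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(continuous) - n + 1):
        w = continuous[i:i + n]
        sd = w.std()
        if sd == 0:
            continue
        cc = float(np.sum(t * (w - w.mean()) / sd))  # Pearson correlation
        if cc >= threshold:
            detections.append((i, cc))
    return detections

# Toy usage: embed a synthetic "template event" in noise and recover it.
rng = np.random.default_rng(0)
tpl = np.sin(np.linspace(0, 6 * np.pi, 200)) * np.hanning(200)
data = rng.normal(0, 0.2, 5000)
data[1000:1200] += tpl
print(matched_filter(data, tpl)[:3])
```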

  13. Near-fault measurement of postseismic slip associated with the 1989 Loma Prieta, California, earthquake

    USGS Publications Warehouse

    Rymer, M.J.

    1990-01-01

    Five small-aperture (0.5 to 7.7 m) quadrilaterals were installed across the San Andreas fault and newly formed extensional cracks shortly after the October 17, 1989 Loma Prieta M7.1 earthquake. Measurements of line-length changes made from as early as 4 d up to 184 d after the earthquake in the quadrilaterals indicate a small amount of right-lateral postseismic slip on the San Andreas fault. The site near the southeast end of the 1989 aftershock zone on the San Andreas fault showed about 5±2 mm of postseismic right-lateral slip in addition to 23 mm of right-lateral coseismic movement. The site near the northwest end of the aftershocks likewise showed about 5±2 mm of postseismic slip, but after only 5 mm of coseismic slip. The small slip values, in spite of uncertainties, clearly show that the lack of coseismic surface slip associated with the earthquake was not followed by large postseismic slip. -from Author

  14. Situated Preparedness: The Negotiation of a Future Catastrophic Earthquake in a California University

    ERIC Educational Resources Information Center

    Baker, Natalie Danielle

    2013-01-01

    This dissertation examines disaster preparedness as engaged at a large university in southern California using inductive research and grounded theory data collection and analysis methods. The thesis consists of three parts, all addressing the problem of disaster preparedness as enacted in this at-risk context. I use in-depth interviews, archival…

  15. Activity remotely triggered in volcanic and geothermal centers in California and Washington by the 3 November 2002 Mw=7.9 Alaska earthquake

    NASA Astrophysics Data System (ADS)

    Hill, D. P.; Prejean, S.; Oppenheimer, D.; Pitt, A. M.; Malone, S. D.; Richards-Dinger, K.

    2002-12-01

    The M=7.9 Alaska earthquake of 3 November 2002 was followed by bursts of remotely triggered earthquakes at several volcanic and geothermal areas across the western United States at epicentral distances of 2,500 to 3,660 km. Husen et al. (this session) describe the triggered response for Yellowstone caldera, Wyoming. Here we highlight the triggered response for the Geysers geothermal field in northern California, Mammoth Mountain and Long Valley caldera in eastern California, the Coso geothermal field in southeastern California, and Mount Rainier in central Washington. The onset of triggered seismicity at each of these areas began 15 to 17 minutes after the Alaska earthquake, during the S-wave coda and the early phases of the Love and Rayleigh waves with periods of 5 to 40 seconds and dynamic strains of a few microstrain. In each case, the seismicity was characterized by spasmodic bursts of small (M<2), brittle-failure earthquakes. The activity persisted for just a few minutes at Mount Rainier and Mammoth Mountain and roughly 30 minutes at the Geysers and Coso geothermal fields. Many of the triggered earthquakes at these sites were too small for reliable locations (magnitudes M<1), although their small S-P times indicate hypocentral locations within a few km of the nearest seismic station. Borehole dilatometers in the vicinity of Mammoth Mountain recorded strain offsets on the order of 0.1 microstrain coincident in time with the triggered seismicity (Johnston et al., this session), and the water level in the 3-km-deep LVEW well in the center of Long Valley caldera dropped by ~13 cm during passage of the seismic wave train from the Alaska earthquake, followed by a gradual recovery. The Geysers, Coso, and Mount Rainier have no continuous, high-resolution strain instrumentation. A larger earthquake swarm that began 23.5 hours later (21:38 UT on the 4th) in the south moat of Long Valley caldera and included nine M>2 earthquakes and one M=3.0 event may represent a delayed response to the Alaska earthquake.

  16. Differentiating Tectonic and Anthropogenic Earthquakes in the Greater Los Angeles Basin, Southern California

    NASA Astrophysics Data System (ADS)

    Hauksson, E.; Goebel, T.; Cochran, E. S.; Ampuero, J. P.

    2014-12-01

    The 2014 flurry of moderate earthquakes in the Los Angeles region raised the question of whether some of this or past seismicity was of anthropogenic origin, as opposed to being caused by ongoing transpressional tectonics. The Mw5.1 La Habra sequence is located near several major oil fields, but the Mw4.4 Encino sequence was located away from oil fields, within the Santa Monica Mountains. The last century of seismicity in the Los Angeles area consists of numerous small and large earthquakes. Most of these earthquakes occur beneath the basin sediments and are associated with transpressional tectonics, related to the big bend in the San Andreas fault, but some could be associated with large oil fields. In particular, both the 1933 Mw6.4 Long Beach and the 1987 Mw5.9 Whittier Narrows earthquakes were spatially associated with two major oil fields, the Huntington Beach and Montebello fields. Numerous large oil fields have been in production for more than 125 years. The geographical locations of the oil fields follow major tectonic trends such as the Newport-Inglewood fault, the Whittier fault, and the thrust belt located at the north edge of the Los Angeles basin. More than 60 fields have oil wells, and some of these have both disposal and fracking wells. Before fluid injection became common, Kovach (1974) documented six damaging events induced by fluid extraction from 1947 to 1961 in the Wilmington oil field. Since 1981, the waveform-relocated earthquake catalog for the Los Angeles basin has been complete, on average, above M2.0. We compare the spatial distribution of these events with the locations of nearby active oil fields. We will also analyze the seismicity in the context of available monthly fluid extraction and injection volumes and search for temporal correlations. The La Habra sequence apparently correlates with temporal changes in extraction and injection volumes in the Santa Fe Springs oil field but not with activities in other oil fields within closer spatial proximity.

  17. A study of microseismotectonics and earthquake sources in Long Valley Caldera, California

    NASA Astrophysics Data System (ADS)

    Stroujkova, Anastasia Felixovna

    It was noted by Aki (1992) that a variety of conventional seismic methods fail in volcanic and geothermal environments due to the complexity of the structure and underlying processes. Custom-made combinations of data processing techniques should be designed for each particular geothermal site to gain a clear look at its internal structure. This thesis deals with various aspects of seismicity in the Long Valley Caldera, CA, observed during a 1997 experiment in Long Valley. Over 10,000 microearthquakes were detected and located as a result of this experiment. Hypocenters of several hundred swarm earthquakes were precisely relocated using the multiplet location technique. The relocation yielded elongated subvertical structures pointing toward the geothermal plant. Based upon the earthquake relocation and focal mechanism study of these earthquakes, the major feature of this area is a series of sub-vertical faults. The majority of the seismicity in the Casa Diablo area was associated with the advancing edges of these faults, failing in mode III (anti-plane). Analysis of about 2000 focal mechanisms showed that the seismicity is compatible with the regional stress field. A moment tensor study suggested that although most of the earthquakes could be explained in terms of a double-couple mechanism, the statistical analysis revealed the possibility of a non-double-couple component present in the bulk seismicity. A number of unusual events with harmonic spectra occurred during the most active periods of the swarm. The source-time functions of these events consist of 2-3 sub-events with regular time delays between them, producing modulated, delay-dependent source spectra. The regularity of the delays suggests that the subevents are triggered by a fixed length/time scale process, an example being the length/inflation rate of a magmatic or hydrothermal flow structure. A number of earthquakes with unusual secondary phases were detected in the Casa Diablo area. Some of the phases were interpreted as mode-converted S-to-P reflections. A reflecting body, located at a depth of about 7.6 km, was imaged using a technique similar to Kirchhoff prestack migration. Analysis of the body reflectivity suggests that the material beneath the reflecting contact may be partially molten or fluid saturated.

  18. Soil radon concentration changes preceding and following four magnitude 4.2-4.7 earthquakes on the San Jacinto fault in southern California

    SciTech Connect

    Birchard, G.F.; Libby, W.F.

    1980-06-10

    Radon concentrations in soil gas measured in shallow holes by plastic Track Etch detectors show variations which may be associated with four magnitude 4.2 to 4.7 earthquakes on the San Jacinto Fault in southern California. A simple sinusoidal correction for the annual soil temperature cycle removes most of the variability in radon concentration that cannot be correlated with earthquakes. The two earthquakes located approximately 4 and 5.5 km away from the array of detectors showed spatially coherent responses, with the largest radon concentration changes occurring closest to the earthquake. The two earthquakes 10-15 km distant from the closest site showed changes that were apparent when all sites were averaged together. The two nearby earthquakes were right-lateral strike-slip events which had quadrants of seismic compression and dilation that correlate with radon concentration increases and decreases, respectively. Radon increases occur when the upward velocity of soil gas increases, since a sharp radon concentration gradient exists in the top few meters of soil. Gas outflow may occur in regions of compression and inflow in areas of dilation, thus producing the observed radon concentration changes.
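
    A "simple sinusoidal correction for the annual soil temperature cycle" of the kind mentioned above can be implemented as an ordinary least-squares fit of annual sine and cosine terms, which is then subtracted from the radon series. The sketch below is a generic illustration with synthetic data, not the authors' procedure.

```python
import numpy as np

def remove_annual_cycle(t_days, radon):
    """Fit radon ~ a + b*sin(2*pi*t/365.25) + c*cos(2*pi*t/365.25) by least
    squares and return the residual (seasonally corrected) series."""
    w = 2.0 * np.pi / 365.25
    G = np.column_stack([np.ones_like(t_days),
                         np.sin(w * t_days),
                         np.cos(w * t_days)])
    coef, *_ = np.linalg.lstsq(G, radon, rcond=None)
    return radon - G @ coef

# Toy usage: a synthetic annual cycle plus noise is reduced to near-noise level.
t = np.arange(0.0, 730.0, 7.0)
rng = np.random.default_rng(1)
series = 5 + 2 * np.sin(2 * np.pi * t / 365.25 + 0.4) + rng.normal(0, 0.3, t.size)
print(remove_annual_cycle(t, series).std())
```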

  19. Earthquake stress drops and inferred fault strength on the Hayward Fault, east San Francisco Bay, California

    USGS Publications Warehouse

    Hardebeck, J.L.; Aron, A.

    2009-01-01

    We study variations in earthquake stress drop with respect to depth, faulting regime, creeping versus locked fault behavior, and wall-rock geology. We use the P-wave displacement spectra from borehole seismic recordings of M 1.0-4.2 earthquakes in the east San Francisco Bay to estimate stress drop using a stack-and-invert empirical Green's function method. The median stress drop is 8.7 MPa, and most stress drops are in the range between 0.4 and 130 MPa. An apparent correlation between stress drop and magnitude is entirely an artifact of the limited frequency band of 4-55 Hz. There is a trend of increasing stress drop with depth, with a median stress drop of ~5 MPa for 1-7 km depth, ~10 MPa for 7-13 km depth, and ~50 MPa deeper than 13 km. We use S/P amplitude ratios measured from the borehole records to better constrain the first-motion focal mechanisms. High stress drops are observed for a deep cluster of thrust-faulting earthquakes. The correlation of stress drops with depth and faulting regime implies that stress drop is related to the applied shear stress. We compare the spatial distribution of stress drops on the Hayward fault to a model of creeping versus locked behavior of the fault and find that high stress drops are concentrated around the major locked patch near Oakland. This also suggests a connection between stress drop and applied shear stress, as the locked patch may experience higher applied shear stress as a result of the difference in cumulative slip or the presence of higher-strength material. The stress drops do not directly correlate with the strength of the proposed wall-rock geology at depth, suggesting that the relationship between fault strength and the strength of the wall rock is complex.
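
    For context, corner-frequency-based stress drop estimates of the kind described above are commonly obtained from Brune-type relations such as those below; this is a generic summary, and the exact constants adopted in the study may differ.

```latex
% Source radius from corner frequency f_c (Brune model, S-wave speed \beta):
r = \frac{2.34\,\beta}{2\pi f_c}
% Stress drop from seismic moment M_0 and source radius r:
\Delta\sigma = \frac{7}{16}\,\frac{M_0}{r^{3}}
```

    Because Δσ scales with the cube of the corner frequency, a band-limited spectrum (here 4-55 Hz) truncates the resolvable corner frequencies, which is the kind of effect behind the artificial magnitude dependence noted in the abstract.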

  20. Triggered surface slips in the Salton Trough associated with the 1999 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Rymer, M.J.; Boatwright, J.; Seekins, L.C.; Yule, J.D.; Liu, J.

    2002-01-01

    Surface fracturing occurred along the southern San Andreas, Superstition Hills, and Imperial faults in association with the 16 October 1999 (Mw 7.1) Hector Mine earthquake, making this at least the eighth time in the past 31 years that a regional earthquake has triggered slip along faults in the Salton Trough. Fractures associated with the event formed discontinuous breaks over a 39-km-long stretch of the San Andreas fault, from the Mecca Hills southeastward to Salt Creek and Durmid Hill, a distance from the epicenter of 107 to 139 km. Sense of slip was right lateral; only locally was there a minor (~1 mm) vertical component of slip. Dextral slip ranged from 1 to 13 mm. Maximum slip values in 1999 and earlier triggered slips are most common in the central Mecca Hills. Field evidence indicates a transient opening as the Hector Mine seismic waves passed the southern San Andreas fault. Comparison of nearby strong-motion records indicates several periods of relative opening with passage of the Hector Mine seismic waves; a similar process may have contributed to the field evidence of a transient opening. Slip on the Superstition Hills fault extended at least 9 km, at a distance from the Hector Mine epicenter of about 188 to 196 km. This length of slip is a minimum value, because we saw fresh surface breakage extending farther northwest than our measurement sites. Sense of slip was right lateral; locally there was a minor (~1 mm) vertical component of slip. Dextral slip ranged from 1 to 18 mm, with the largest amounts found distributed (or skewed) away from the Hector Mine earthquake source. Slip triggered on the Superstition Hills fault commonly is skewed away from the earthquake source, most notably in 1968, 1979, and 1999. Surface slip on the Imperial fault and within the Imperial Valley extended about 22 km, representing a distance from the Hector Mine epicenter of about 204 to 226 km. Sense of slip dominantly was right lateral; the right-lateral component of slip ranged from 1 to 19 mm. Locally there was a minor (~1-2 mm) vertical component of slip; larger proportions of vertical slip (up to 10 mm) occurred in Mesquite basin, where scarps indicate long-term oblique-slip motion for this part of the Imperial fault. Slip triggered on the Imperial fault appears randomly distributed relative to location along the fault and source direction. Multiple episodes of surface slip, both primary and triggered, indicate that slip is repeatedly small at locations of structural complexity.

  1. Three-Dimensional Geologic Map of Northern California: A Foundation for Earthquake Simulations and Other Predictive Modeling

    NASA Astrophysics Data System (ADS)

    Jachens, R. C.; Simpson, R. W.; Graymer, R. W.; Wentworth, C. M.; Brocher, T. M.

    2006-12-01

    Detailed, realistic models of the subsurface are needed for predicting damage patterns from future earthquakes and simulating other phenomena affecting human safety and well-being. The simple models used in the past are no longer adequate. In support of a planned simulation of the ground shaking from the Great 1906 San Francisco earthquake, we constructed a three-dimensional (3D) geologic map of northern California that consists of specific geologic units separated by discrete boundaries. It is based on a century of geologic mapping, 50 years of gravity and magnetic surveying, double-difference relocated seismicity, seismic soundings, P-wave tomography, and well logs. The map is a rules-based construction composed of faults that break the map volume into fault blocks, which in turn are populated with geologic units defined by surfaces that represent their tops. The rules define how the faults and tops truncate one another. The map is easily updated as new information becomes available. The 3D map is made up of two related parts. An inner detailed map of central California centered on San Francisco extends from Clear Lake to Monterey, from the edge of the continental shelf to the western Great Valley, and to a depth of 45 km. This is embedded in a less detailed regional map that extends from north of Cape Mendocino to Parkfield, from the ocean basin to the foothills of the Sierra Nevada and Cascade Ranges, and also to a depth of 45 km. The detailed map volume is broken by 25 major faults including the active San Andreas, Hayward, and Calaveras faults. The fault blocks are populated with geologic units in the following groups: water, Plio-Quaternary deposits, Tertiary (or undifferentiated Cenozoic) sedimentary and volcanic deposits, Mesozoic sedimentary or plutonic rocks, mafic lower crust, and mantle rocks. The primary purposes of the regional map are: 1) to provide coverage of the entire reach of the San Andreas Fault that ruptured in 1906 (including the major bedrock units that occupy the fault faces); and 2) to provide a consistent 'buffer' surrounding the detailed map to minimize modeling artifacts from boundary discontinuities. The regional map includes major Mesozoic and Tertiary bedrock units, a representation of the Great Valley sedimentary fill, the mafic lower crust, and the mantle. The 3D map was assigned physical properties (seismic wave velocities, densities, and intrinsic attenuations) according to geologic unit and depth, and provided to the seismic-wave modeling community. Successful simulations of ground shaking from the Great 1906 San Francisco earthquake and the 1989 Loma Prieta earthquake based on this 3D map specifically highlighted the role of sedimentary basins in amplifying and prolonging ground shaking, and more generally illustrated the benefits of a 'geologic' approach for producing realistic earth models to support predictive process modeling. The present 3D map and its derivative physical property model are appropriate for incorporation into a statewide community fault model and a statewide seismic velocity model.

  2. Along-strike variations in fault frictional properties along the San Andreas Fault near Cholame, California from joint earthquake and low-frequency earthquake relocations

    USGS Publications Warehouse

    Harrington, R.M.; Cochran, Elizabeth S.; Griffiths, E.M.; Zeng, X.; Thurber, C.

    2016-01-01

    Recent observations of low‐frequency earthquakes (LFEs) and tectonic tremor along the Parkfield–Cholame segment of the San Andreas fault suggest slow‐slip earthquakes occur in a transition zone between the shallow fault, which accommodates slip by a combination of aseismic creep and earthquakes (<15 km depth), and the deep fault, which accommodates slip by stable sliding (>35 km depth). However, the spatial relationship between shallow earthquakes and LFEs remains unclear. Here, we present precise relocations of 34 earthquakes and 34 LFEs recorded during a temporary deployment of 13 broadband seismic stations from May 2010 to July 2011. We use the temporary array waveform data, along with data from permanent seismic stations and a new high‐resolution 3D velocity model, to illuminate the fine‐scale details of the seismicity distribution near Cholame and the relation to the distribution of LFEs. The depth of the boundary between earthquakes and LFE hypocenters changes along strike and roughly follows the 350°C isotherm, suggesting frictional behavior may be, in part, thermally controlled. We observe no overlap in the depth of earthquakes and LFEs, with an ~5 km separation between the deepest earthquakes and shallowest LFEs. In addition, clustering in the relocated seismicity near the 2004 Mw 6.0 Parkfield earthquake hypocenter and near the northern boundary of the 1857 Mw 7.8 Fort Tejon rupture may highlight areas of frictional heterogeneities on the fault where earthquakes tend to nucleate.

  3. Forecasting the Next Great San Francisco Earthquake

    NASA Astrophysics Data System (ADS)

    Rundle, P.; Rundle, J. B.; Turcotte, D. L.; Donnellan, A.; Yakovlev, G.; Tiampo, K. F.

    2005-12-01

    The great San Francisco earthquake of 18 April 1906 and its subsequent fires killed more than 3,000 persons and destroyed much of the city, leaving 225,000 of 400,000 inhabitants homeless. The 1906 earthquake occurred on a segment of the San Andreas fault that runs from San Juan Bautista north to Cape Mendocino and is estimated to have had a moment magnitude of about 7.9. Observed surface displacements across the fault were of the order of meters. As we approach the 100-year anniversary of this event, a critical concern is the hazard posed by another such earthquake. In this talk we examine the assumptions presently used to compute the probability of occurrence of these earthquakes. We also present the results of a numerical simulation of interacting faults on the San Andreas system. Called Virtual California, this simulation can be used to compute the times, locations and magnitudes of simulated earthquakes on the San Andreas fault in the vicinity of San Francisco. Of particular importance are new results for the statistical distribution of interval times between great earthquakes, results that are difficult or impossible to obtain from a purely field-based approach. We find that our results are fit well under most circumstances by the Weibull statistical distribution, and we compute waiting times to future earthquakes based upon our simulation results. A contrasting approach to the same problem has been adopted by the Working Group on California Earthquake Probabilities, who use observational data combined with statistical assumptions to compute probabilities of future earthquakes.
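
    To make the "waiting times to future earthquakes" computation concrete, the conditional probability implied by a Weibull fit to simulated recurrence intervals can be written as follows; the notation is generic rather than taken from the presentation.

```latex
% Weibull distribution of inter-event times T (scale \beta, shape \alpha):
P(T \le t) = 1 - \exp\!\left[-\left(t/\beta\right)^{\alpha}\right]
% Probability of an event in the next \Delta t, given t_0 years without one:
P(t_0 < T \le t_0 + \Delta t \mid T > t_0)
  = 1 - \exp\!\left[\left(t_0/\beta\right)^{\alpha}
                    - \left(\frac{t_0+\Delta t}{\beta}\right)^{\alpha}\right]
```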

  4. Implications of diverse fault orientations imaged in relocated aftershocks of the Mount Lewis, ML 5.7, California, earthquake

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Rubin, A. M.

    2002-11-01

    We use seismic waveform cross correlation to determine the relative positions of 2747 microearthquakes near Mount Lewis, California, that have waveforms recorded from 1984 to 1999. These earthquakes include the aftershock sequence of the 1986 ML5.7 Mount Lewis earthquake. Approximately 90% of these aftershocks are located beyond the tips of the approximately north-striking main shock rupture, defining an hourglass-shaped pattern whose long axis is aligned approximately with the main shock. Surprisingly, our relocation demonstrates that many of these aftershocks illuminate a series of near-vertical east-west faults that are ~0.5-1 km long and separated by as little as ~200 m. We propose that these structures result from the growth of a relatively young fault in which displacement across a right-lateral, approximately north-striking fault zone is accommodated by slip on secondary left-lateral, approximately east-striking faults. We derive the main shock-induced static Coulomb failure function (Δσf) on the dominant fault orientation in our study area using a three-dimensional (3-D) boundary element program. To bound viable friction coefficients, we measure the correlation between the rank ordering of relative amplitudes of Δσf and seismicity rate change. We find that likely friction coefficients are 0.2-0.6 and that the assumed main shock geometry introduces the largest uncertainties in the favored friction values. We obtain similar results from a visual correlation of calculated Δσf contours with the distribution of aftershocks. Viable rate-and-state constitutive parameters bound the observed relationship between magnitude of Δσf and seismicity rate change, and for our favored main shock model a maximum correlation is achieved when Δσf is computed with friction coefficients of 0.3-0.6. These values are below those previously cited for young faults.
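
    For reference, the static Coulomb failure function change Δσf used above is conventionally written as below; the effective-friction form shown is a common convention and may differ in detail from the study's implementation.

```latex
% Coulomb failure stress change resolved on a receiver fault plane:
\Delta\sigma_f = \Delta\tau + \mu'\,\Delta\sigma_n
% \Delta\tau: shear stress change in the slip direction
% \Delta\sigma_n: normal stress change (unclamping positive)
% \mu': effective friction coefficient (here inferred to be roughly 0.2-0.6)
```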

  5. Interseismic Strain Accumulation in the Imperial Valley and Implications for Triggering of Large Earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Crowell, B. W.; Bock, Y.; Sandwell, D. T.

    2009-12-01

    From February 2008 to March 2009, we performed three rapid-static Global Positioning System (GPS) surveys of 115 geodetic monuments stretching from the United States-Mexico border into the Coachella Valley using the method of instantaneous positioning. The monuments are located in key areas near the Imperial, Superstition Hills, San Jacinto, San Andreas and Brawley Faults, with nominal baselines generally less than 10 km. We perform a bicubic spline interpolation on the crustal motion vectors from the campaign measurements and 1005 continuous GPS monuments in western North America and solve for the velocity gradient tensor to look at the maximum shear strain, dilatation and rotation rates in the Imperial Valley. We then compare our computed strain field to that computed using the Southern California Earthquake Center Crustal Motion Map 3.0, which extends through 2003 and includes 840 measurements. We show that there is an interseismic strain transient that corresponds to an increase in the maximum shear strain rate of 0.7 μstrain/yr near Obsidian Buttes since 2003 along a fault referred to as the Obsidian Buttes Fault (OBF). A strong subsidence signal of 27 mm/yr and a left-lateral increase of 10 mm/yr are centered along the OBF. Changes in the dilatation and rotation rates confirm the increase in left-lateral motion and imply a strong increase in the spreading rate in the southern Salton Sea. The increase in spreading rate has caused an accelerated slip rate along the southern San Andreas near Durmid Hill, as evidenced by continuous GPS, which has the potential for earthquake triggering.
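
    The quantities derived from the velocity gradient tensor above follow from standard two-dimensional kinematics; as a sketch, for an interpolated horizontal velocity field (v_x, v_y) they can be written as follows (generic notation, not the authors' exact formulation).

```latex
% 2D strain-rate tensor from the horizontal velocity gradient:
\dot{\varepsilon}_{ij} = \tfrac{1}{2}\,(\partial_j v_i + \partial_i v_j)
% Dilatation rate, maximum shear strain rate, and rotation rate:
\dot{\Delta} = \dot{\varepsilon}_{xx} + \dot{\varepsilon}_{yy}, \qquad
\dot{\gamma}_{\max} = \sqrt{\left(\tfrac{\dot{\varepsilon}_{xx}-\dot{\varepsilon}_{yy}}{2}\right)^{2}
  + \dot{\varepsilon}_{xy}^{2}}, \qquad
\dot{\omega} = \tfrac{1}{2}\,(\partial_x v_y - \partial_y v_x)
```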

  6. Average Stress Drops of Southern California Earthquakes in the Context of Crustal Geophysics: Implications for Fault Zone Healing

    NASA Astrophysics Data System (ADS)

    Hauksson, Egill

    2015-05-01

    To understand how fault healing processes affect earthquake stress drops, we search for a possible dependency of stress drops on crustal conditions and geophysical parameters. We reanalyze the stress drop values of ~60,000 earthquakes in southern California that were originally determined by Shearer et al. (J. Geophys. Res. 111:B06303, 2006) using a spectral method. We modify the dataset to include only stress drops that are derived from at least 10 spectra and with corner frequencies between 3 and 30 Hz, and correct the rupture velocity for increasing S-wave speed with depth. We see no dependence of stress drop on moment magnitude or depth, except for a small, poorly determined increase from 15 to 25 km. We use six crustal geophysics parameters to search for obvious correlations that may explain changes in the mean values of the stress drops: (1) crustal thickness, (2) isostatic gravity, (3) heat flow, (4) shear strain rate, (5) crustal stress regime, and (6) style of faulting. None of the variables reduce the scatter, but most can explain up to 10-20% variations in the mean stress drops. The geographical distribution of the grouped mean stress drops includes very high stress drops near Ridgecrest, eastern California, as well as near fault jogs within the San Andreas Fault system. Low stress drops dominate in trans-tensional regions. Heat flow and GPS-based shear strain rate estimates have the largest influence on stress drop variations. In the range of low to medium heat flow, the stress drops increase with increasing heat flow. In contrast, at high heat flow in thin crust, the stress drops decrease systematically with increasing heat flow. Increasing shear strain rate systematically correlates with decreasing stress drops. The crustal stress regime and style of faulting also influence the stress drops, as demonstrated by lower stress drops for north-northeast trending principal horizontal stress and in areas of dip-slip faulting. The mean variations in stress drops with heat flow, stress regime, crustal thickness, and density can be explained in the context of fault healing (grain boundary growth) and a corresponding increase in fault zone strength on time scales modulated by the tectonic shear strain rate.

  7. Survey of strong motion earthquake effects on thermal power plants in California with emphasis on piping systems. Volume 1, Main report

    SciTech Connect

    Stevenson, J.D.

    1995-11-01

    Since 1982, there has been a major effort expended to evaluate the susceptibility of nuclear power plant equipment to failure and significant damage during seismic events. This was done by making use of data on the performance of electrical and mechanical equipment in conventional power plants and other similar industrial facilities during strong motion earthquakes. This report is intended as an extension of the seismic experience data collection effort and a compilation of experience data specific to power plant piping and supports, designed and constructed to US power piping code requirements, that have experienced strong motion earthquakes. Eight damaging California earthquakes (Richter magnitude 5.5 to 7.7) and their effects on 8 power generating facilities in California were reviewed. All of these facilities were visited and evaluated. Seven fossil-fueled (dual-use natural gas and oil) plants and one nuclear-fueled plant, consisting of a total of 36 individual boiler or reactor units, were investigated. Peak horizontal ground accelerations that either had been recorded on site at these facilities or were considered applicable to these power plants on the basis of nearby recordings ranged between 0.20g and 0.51g, with strong motion durations which varied from 3.5 to 15 seconds. Most US nuclear power plants are designed for a safe shutdown earthquake peak ground acceleration equal to 0.20g or less, with strong motion durations which vary from 10 to 15 seconds.

  8. Structural Constraints and Earthquake Recurrence Estimates for the West Tahoe-Dollar Point Fault, Lake Tahoe Basin, California

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Driscoll, N. W.; Kent, G.; Brothers, D. S.; Baskin, R. L.; Babcock, J. M.; Noble, P. J.; Karlin, R. E.

    2011-12-01

    Previous work in the Lake Tahoe Basin (LTB), California, identified the West Tahoe-Dollar Point Fault (WTDPF) as the most hazardous fault in the region. Onshore and offshore geophysical mapping delineated three segments of the WTDPF extending along the western margin of the LTB. The rupture patterns between the three WTDPF segments remain poorly understood. Fallen Leaf Lake (FLL), Cascade Lake, and Emerald Bay are three sub-basins of the LTB, located south of Lake Tahoe, that provide an opportunity to image primary earthquake deformation along the WTDPF and associated landslide deposits. We present results from recent (June 2011) high-resolution seismic CHIRP surveys in FLL and Cascade Lake, as well as complete multibeam swath bathymetry coverage of FLL. Radiocarbon dates obtained from the new piston cores acquired in FLL provide age constraints on the older FLL slide deposits and build on and complement previous work that dated the most recent event (MRE) in Fallen Leaf Lake at ~4.1-4.5 k.y. BP. The CHIRP data beneath FLL image slide deposits that appear to correlate with contemporaneous slide deposits in Emerald Bay and Lake Tahoe. A major slide imaged in FLL CHIRP data is slightly younger than the Tsoyowata ash (7950-7730 cal yrs BP) identified in sediment cores and appears synchronous with a major Lake Tahoe slide deposit (7890-7190 cal yrs BP). The equivalent age of these slides suggests the penultimate earthquake on the WTDPF may have triggered them. If correct, we postulate a recurrence interval of ~3-4 k.y. These results suggest the FLL segment of the WTDPF is near the end of its seismic recurrence cycle. Additionally, CHIRP profiles acquired in Cascade Lake image the WTDPF for the first time in this sub-basin, which is located near the transition zone between the FLL and Rubicon Point Sections of the WTDPF. We observe two fault strands trending N45°W across southern Cascade Lake for ~450 m. The strands produce scarps of ~5 m and ~2.7 m, respectively, on the lake floor, but offset increases down-section to ~14 m and ~8 m at the acoustic basement. Studying the style and timing of earthquake deformation in Fallen Leaf Lake, Cascade Lake, Emerald Bay and Lake Tahoe will help us to understand how strain is partitioned between adjacent segments and the potential rupture magnitude.

  9. Post-earthquake relaxation evidence for laterally variable viscoelastic structure and elevated water concentration in the southwestern California mantle

    NASA Astrophysics Data System (ADS)

    Pollitz, F. F.

    2014-12-01

    I re-examine the lower crust and mantle relaxation following two large events in the Mojave Desert: the 1992 M7.3 Landers and 1999 M7.1 Hector Mine, California, earthquakes. More than a decade of GPS time series from regional sites out to 250 km from the ruptures are used to constrain models of postseismic relaxation. Crustal motions in the Mojave Desert region are elevated for several years following each event, with perturbations from a pre-Landers background of order mm to cm per year. I consider afterslip and relaxation of the ductile lower crust and mantle to explain these motions. To account for broad scale relaxation, the Burgers body model is employed, involving Kelvin (transient) viscosity and rigidity and Maxwell (steady state) viscosity and rigidity. I use the code VISCO2.5D to perform 2.5D modeling of the postseismic relaxation (3D quasi-static motions computed on 2D, laterally heterogeneous viscoelastic structures; Pollitz, 2014 GJI). Joint afterslip / postseismic relaxation modeling of continuous GPS time series up to 10.46 years following the Hector Mine earthquake (i.e. up to the time of the 2010 M7.2 El Mayor-Cucapah earthquake) reveals that a northwest-trending 'southwest domain' that envelopes the San Andreas fault system and western Mojave Desert has ~4 times larger Maxwell mantle viscosity than the adjacent 'northeast domain' that extends inland and envelopes the Landers and Hector Mine rupture areas in the central Mojave Desert. This pattern is counter to that expected from regional heat flow, which is higher in the northeast domain, but it is explicable by means of a non-linear rheology that includes dependence on both strain rate and water concentration. I infer that the southwest domain mantle has a relatively low steady-state viscosity because of its high strain rate and water content. The relatively low mantle water content of the northeast domain is interpreted to result from the continual extraction of water through igneous and volcanic activity over the past ~20 Myr. The inference of Maxwellian viscosities is possible because the material relaxation times involved (5 years and 20 years for the SW and NE domains, respectively) are to a large extent spanned by the decade of available post-Hector Mine observations.
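
    For readers unfamiliar with the Burgers body mentioned above, its response to a constant stress combines instantaneous elasticity, a transient (Kelvin) element, and steady-state (Maxwell) flow; a generic form of the creep function is given below, with parameter symbols that are illustrative rather than those of the study.

```latex
% Burgers-body creep under constant shear stress \sigma_0:
\varepsilon(t) = \sigma_0\left[\frac{1}{\mu_M} + \frac{t}{\eta_M}
  + \frac{1}{\mu_K}\left(1 - e^{-\mu_K t/\eta_K}\right)\right]
% \mu_M, \eta_M: Maxwell (steady-state) rigidity and viscosity
% \mu_K, \eta_K: Kelvin (transient) rigidity and viscosity; \eta_K/\mu_K sets
% the transient relaxation time
```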

  10. Prevalence and Predictors of Somatic Symptoms among Children and Adolescents with Probable Posttraumatic Stress Disorder: A Cross-Sectional Study Conducted in 21 Primary and Secondary Schools after an Earthquake

    PubMed Central

    Zhang, Ye; Zhu, Shenyue; Du, Changhui

    2015-01-01

    Purpose To explore the prevalence rates and predictors of somatic symptoms among child and adolescent survivors with probable posttraumatic stress disorder (PTSD) after an earthquake. Methods A total of 3053 students from 21 primary and secondary schools in Baoxing County were administered the Patient Health Questionnaire-13 (PHQ-13), a short version of PHQ-15 without the two items about sexuality and menstruation, the Children's Revised Impact of Event Scale (CRIES), and the self-made Earthquake-Related Experience Questionnaire 3 months after the Lushan earthquake. Results Among child and adolescent survivors, the prevalence rates of all somatic symptoms were higher in the probable PTSD group compared with the controls. The most frequent somatic symptoms were trouble sleeping (83.2%), feeling tired or having low energy (74.4%), stomach pain (63.2%), dizziness (58.1%), and headache (57.7%) in the probable PTSD group. Older age, having lost family members, having witnessed someone get seriously injured, and having witnessed someone get buried were predictors for somatic symptoms among child and adolescent survivors with probable PTSD. Conclusions Somatic symptoms among child and adolescent earthquake survivors with probable PTSD in schools were common, and predictors of these somatic symptoms were identified. These findings may help those providing psychological health programs to find the child and adolescent students with probable PTSD who are at high risk of somatic symptoms in schools after an earthquake in China. PMID:26327455

  11. Electrical structure in a region of the Transverse Ranges, southern California. [for earthquake prediction]

    NASA Technical Reports Server (NTRS)

    Reddy, I. K.; Phillips, R. J.; Whitcomb, J. H.; Rankin, D.

    1977-01-01

    Magnetotelluric sounding at a site in the Transverse Ranges province in southern California indicates a low-resistivity region in the lower crust and possibly also the upper mantle. A two-dimensional model fit to the data indicates that the resistivity of this region is between 1 and 10 ohm-meters. The depth to the top surface of this zone is between 15 and 20 km. The lateral extent of this feature, which strikes N65 deg W, appears to be confined to the Transverse Ranges province. The petrological characteristics of this region cannot be deduced unambiguously from the magnetotelluric sounding alone.

  12. Earthquake watch

    USGS Publications Warehouse

    Hill, M.

    1976-01-01

    When the time comes that earthquakes can be predicted accurately, what shall we do with the knowledge? This was the theme of a November 1975 conference on earthquake warning and response, held in San Francisco and called by Assistant Secretary of the Interior Jack W. Carlson. Invited were officials of State and local governments from Alaska, California, Hawaii, Idaho, Montana, Nevada, Utah, Washington, and Wyoming, and representatives of the news media.

  13. Non-double-couple earthquake mechanisms at the Geysers geothermal area, California

    USGS Publications Warehouse

    Ross, A.; Foulger, G.R.; Julian, B.R.

    1996-01-01

    Inverting P- and S-wave polarities and P:SH amplitude ratios using linear programming methods suggests that about 20% of earthquakes at The Geysers geothermal area have significantly non-double-couple focal mechanisms, with explosive volumetric components as large as 33% of the seismic moment. This conclusion contrasts with those of earlier studies, which interpreted data in terms of double couples. The non-double-couple mechanisms are consistent with combined shear and tensile faulting, possibly caused by industrial water injection. Implosive mechanisms, which might be expected because of rapid steam withdrawal, have not been found. Significant compensated-linear-vector-dipole (CLVD) components in some mechanisms may indicate rapid fluid flow accompanying crack opening. Copyright 1996 by the American Geophysical Union.

  14. Effects of the 1906 earthquake on the Bald Hill outlet system, San Mateo County, California

    USGS Publications Warehouse

    Pampeyan, Earl H.

    1986-01-01

    Following the earthquake of April 18, 1906, it was discovered that a brick forebay and other parts of the reservoir outlet system were in the slip zone of the San Andreas fault. The original outlet through which water was directed to San Francisco consisted of two tunnels joined at the brick forebay; one tunnel extends 2,820 ft to the east under Bald Hill on Buri Buri Ridge, and the other tunnel intersects the lake bottom about 250 ft west of the forebay. In 1897 a second intake was added to the system, also joining the original forebay. During the present study the accessible parts of this original outlet system were examined with the hope of learning how the system had been affected by fault slip in 1906.

  15. Relation of the 1992 Landers, California, Earthquake Sequence to Seismic Scattering

    NASA Astrophysics Data System (ADS)

    Revenaugh, Justin

    1995-11-01

    Measurements of crustal scattering for the area surrounding the 1992 Landers earthquake sequence, obtained from regional array recordings of teleseismic events for the 10-year period before the sequence, showed that the slip distribution on faults may be deducible from the preshock elastic structure. Scattering intensity correlated strongly with the distribution of aftershocks and slip of the moment magnitude (Mw) 7.3 Landers main shock and the Mw 6.1 Joshua Tree and Mw 6.2 Big Bear events, which implies that aftershocks and slip are structurally controlled and broadly predictable. Scattering within the fault zones was directional and consistent with variable along-strike alignment of stress-induced cracks.

  16. Deep Structure Of Long Valley, California, Based On Deep Reflections From Earthquakes

    SciTech Connect

    Zucca, J. J.; Kasameyer, P. W.

    1987-01-01

    Knowledge of the deep structure of Long Valley comes primarily from seismic studies. Most of these efforts have focused on delimiting the top of the inferred magma chamber. We present evidence for the location of the bottom of the low velocity layer (LVL). Two other studies have provided similar information. Steeples and Iyer (1976) inferred from teleseismic P-wave delays that low-velocity material extends from 7 km depth to 25 to 40 km, depending on the velocities assumed. Luetgert and Mooney (1985) examined seismic refraction data from earthquake sources and identified a reflection that appears to be from the lower boundary of a magma chamber. They detected the reflection with a linear array of single-component stations and, assuming it traveled in a vertical plane, matched the travel time and apparent velocity (6.3 km/sec) to deduce that it was a P-P reflection from within a LVL. We recorded a similar phase with a 2-dimensional array of three-component stations and carried out a similar analysis, but utilized additional information about the travel path, particle motions, and amplitudes to constrain our interpretation. Our data come from a passive seismic refraction experiment conducted during August 1982. Fourteen portable seismograph stations were deployed in a network with approximately 5 km station spacing in the Mono Craters region north of Long Valley (Figure 1). The network recorded earthquakes located south of Long Valley and in the south moat. Three components of motion were recorded at all sites. The data represent one of the few times that three-component data have been collected for raypaths through a magma chamber in the Long Valley area.

  17. Finite-fault analysis of the 2004 Parkfield, California, earthquake using Pnl waveforms

    USGS Publications Warehouse

    Mendoza, C.; Hartzell, S.

    2008-01-01

    We apply a kinematic finite-fault inversion scheme to Pnl displacement waveforms recorded at 14 regional stations (Δ<2°) to recover the distribution of coseismic slip for the 2004 Parkfield earthquake using both synthetic Green’s functions (SGFs) calculated for one-dimensional (1D) crustal-velocity models and empirical Green’s functions (EGFs) based on the recordings of a single Mw 5.0 aftershock. Slip is modeled on a rectangular fault subdivided into 2×2 km subfaults assuming a constant rupture velocity and a 0.5 sec rise time. A passband filter of 0.1–0.5 Hz is applied to both data and subfault responses prior to waveform inversion. The SGF inversions are performed such that the final seismic moment is consistent with the known magnitude (Mw 6.0) of the earthquake. For these runs, it is difficult to reproduce the entire Pnl waveform due to inaccuracies in the assumed crustal structure. Also, the misfit between observed and predicted vertical waveforms is similar in character for different rupture velocities, indicating that neither the rupture velocity nor the exact position of slip sources along the fault can be uniquely identified. The pattern of coseismic slip, however, compares well with independent source models derived using other data types, indicating that the SGF inversion procedure provides a general first-order estimate of the 2004 Parkfield rupture using the vertical Pnl records. The best-constrained slip model is obtained using the single-aftershock EGF approach. In this case, the waveforms are very well reproduced for both vertical and horizontal components, suggesting that the method provides a powerful tool for estimating the distribution of coseismic slip using the regional Pnl waveforms. The inferred slip model shows a localized patch of high slip (55 cm peak) near the hypocenter and a larger slip area (~50 cm peak) extending between 6 and 20 km to the northwest.
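
    Schematically, a kinematic inversion of this type can be posed as a linear system d ≈ Gm between observed waveforms and subfault responses, solved with a positivity constraint on slip. The sketch below is a toy non-negative least-squares setup with synthetic numbers, not the authors' inversion code, which additionally applies smoothing, a rise-time parameterization, and a seismic-moment constraint.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Toy problem: 5 subfaults, 200 waveform samples stacked over all stations.
n_samples, n_subfaults = 200, 5
G = rng.normal(size=(n_samples, n_subfaults))        # subfault Green's functions
true_slip = np.array([0.0, 0.55, 0.30, 0.0, 0.50])   # synthetic slip, meters
d = G @ true_slip + rng.normal(0, 0.05, n_samples)   # "observed" data + noise

# Non-negative least squares: minimize ||G m - d|| subject to m >= 0.
slip, misfit = nnls(G, d)
print(np.round(slip, 2), round(float(misfit), 3))
```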

  18. Postseismic deformation associated with the 1992 Mw=7.3 Landers earthquake, southern California

    USGS Publications Warehouse

    Savage, J.C.; Svarc, J.L.

    1997-01-01

    Following the 1992 Mw=7.3 Landers earthquake, a linear array of 10 geodetic monuments at roughly 5-km spacing was established across the Emerson fault segment of the Landers rupture. The array trends perpendicular to the local strike of the fault segment and extends about 30 km on either side of it. The array was surveyed by Global Positioning System 0.034, 0.048, 0.381, 1.27, 1.88, 2.60, and 3.42 years after the Landers earthquake to measure both the spatial and temporal character of the postearthquake relaxation. The temporal behavior is described roughly by a short-term (decay time 84±23 days) exponential relaxation superimposed upon an apparently linear trend. Because the linear trend represents motions much more rapid than the observed preseismic motions, we attribute that trend to a slower (decay time greater than 5 years) postseismic relaxation, the curvature of which cannot be resolved in the short run (3.4 years) of postseismic data. About 100 mm of right-lateral displacement and 50 mm of fault-normal displacement accumulated across the geodetic array in the 3.4-year interval covered by the postseismic surveys. Those displacements are attributed to postseismic, right-lateral slip in the depth interval 10 to 30 km on the downward extension of the rupture trace. The right-lateral slip amounted to about 1 m directly beneath the geodetic array, and the fault-normal displacement is apparently primarily a consequence of the curvature of the rupture. These conclusions are based upon dislocation models fit to the observed deformation. However, no dislocation model was found with rms residuals as small as the expected observational error.
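
    The temporal parameterization described above (a short-term exponential relaxation superimposed on a linear trend) can be fit to a displacement time series with a standard nonlinear least-squares routine. The following is an illustrative sketch with synthetic values only loosely patterned on the numbers quoted in the abstract, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def postseismic_model(t, amp, tau, rate, offset):
    """Displacement = exponential relaxation + linear (secular) trend."""
    return amp * (1.0 - np.exp(-t / tau)) + rate * t + offset

# Synthetic survey epochs (years after the earthquake) and displacements (mm).
t_yr = np.array([0.034, 0.048, 0.381, 1.27, 1.88, 2.60, 3.42])
disp_mm = postseismic_model(t_yr, 40.0, 84.0 / 365.25, 20.0, 0.0)
disp_mm += np.random.default_rng(3).normal(0, 1.0, t_yr.size)

params, cov = curve_fit(postseismic_model, t_yr, disp_mm,
                        p0=[30.0, 0.2, 15.0, 0.0])
print(np.round(params, 2))  # amp (mm), tau (yr), rate (mm/yr), offset (mm)
```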

  19. Earthquake precursors: activation or quiescence?

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Holliday, James R.; Yoder, Mark; Sachs, Michael K.; Donnellan, Andrea; Turcotte, Donald L.; Tiampo, Kristy F.; Klein, William; Kellogg, Louise H.

    2011-10-01

    We discuss the long-standing question of whether the probability for large earthquake occurrence (magnitudes m > 6.0) is highest during time periods of smaller event activation, or highest during time periods of smaller event quiescence. The physics of the activation model are based on an idea from the theory of nucleation, that a small magnitude earthquake has a finite probability of growing into a large earthquake. The physics of the quiescence model is based on the idea that the occurrence of smaller earthquakes (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast models illustrating, respectively, activation and quiescence. The activation model assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no choice as to which mechanism is preferred. Both of these models are in fact a means of filtering the seismicity time-series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of earthquakes, we show that of the two models, activation and quiescence, the latter appears to be the better model, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic earthquake model for California seismicity, Virtual California. This model includes not only earthquakes produced from increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast models to the simulated data, we come to the opposite conclusion. Here, the activation forecast model is preferred to the quiescence model, presumably due to the fact that the BASS component of the model is essentially a model for activated seismicity. These results lead to the (weak) conclusion that California seismicity may be characterized more by quiescence than by activation, and that BASS-ETAS models may not be robustly applicable to the real data.

  20. History of earthquakes and tsunamis along the eastern Aleutian-Alaska megathrust, with implications for tsunami hazards in the California Continental Borderland

    USGS Publications Warehouse

    Ryan, Holly F.; von Huene, Roland; Wells, Ray E.; Scholl, David W.; Kirby, Stephen; Draut, Amy E.

    2012-01-01

    During the past several years, devastating tsunamis were generated along subduction zones in Indonesia, Chile, and most recently Japan. Both the Chile and Japan tsunamis traveled across the Pacific Ocean and caused localized damage at several coastal areas in California. The question remains whether coastal California, in particular the California Continental Borderland, is vulnerable to more extensive damage from a far-field tsunami sourced along a Pacific subduction zone. Assuming that the coast of California is at risk from a far-field tsunami, its coastline is most exposed to a trans-Pacific tsunami generated along the eastern Aleutian-Alaska subduction zone. We present the background geologic constraints that could control a possible giant (Mw ~9) earthquake sourced along the eastern Aleutian-Alaska megathrust. Previous great earthquakes (Mw ~8) in 1788, 1938, and 1946 ruptured single segments of the eastern Aleutian-Alaska megathrust. However, in order to generate a giant earthquake, it is necessary to rupture through multiple segments of the megathrust. Potential barriers to a throughgoing rupture, such as high-relief fracture zones or ridges, are absent on the subducting Pacific Plate between the Fox and Semidi Islands. Possible asperities (areas on the megathrust that are locked and therefore subject to infrequent but large slip) are identified by patches of high moment release observed in the historical earthquake record, geodetic studies, and the location of forearc basin gravity lows. Global Positioning System (GPS) data indicate that some areas of the eastern Aleutian-Alaska megathrust, such as that beneath Sanak Island, are weakly coupled. We suggest that although these areas will have reduced slip during a giant earthquake, they are probably not large enough to form a barrier to rupture. A key aspect in defining an earthquake source for tsunami generation is determining the possibility of significant slip on the updip end of the megathrust near the trench. Large slip on the updip part of the eastern Aleutian-Alaska megathrust is a viable possibility owing to the small frontal accretionary prism and the presence of arc basement relatively close to the trench along most of the megathrust.

  1. Analysis of Injection-Induced Micro-Earthquakes in a Geothermal Steam Reservoir, The Geysers Geothermal Field, California

    SciTech Connect

    Rutqvist, Jonny; Oldenburg, C.M.

    2008-05-15

    In this study we analyze relative contributions to the cause and mechanism of injection-induced micro-earthquakes (MEQs) at The Geysers geothermal field, California. We estimated the potential for inducing seismicity using coupled thermal-hydrological-mechanical analysis of geothermal steam production and cold-water injection, calculating changes in stress in time and space and investigating whether those changes could induce rock mechanical failure and associated MEQs. An important aspect of the analysis is the concept of a rock mass that is critically stressed for shear failure. This means that shear stress in the region is near the rock-mass frictional strength, and therefore very small perturbations of the stress field can trigger an MEQ. Our analysis shows that the most important cause for injection-induced MEQs at The Geysers is cooling and associated thermal-elastic shrinkage of the rock around the injected fluid, which changes the stress state in such a way that mechanical failure and seismicity can be induced. Specifically, the cooling shrinkage results in unloading and associated loss of shear strength in critically shear-stressed fractures, which are then reactivated. Thus, our analysis shows that cooling-induced shear slip along fractures is the dominant mechanism of injection-induced MEQs at The Geysers.
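
    As a rough illustration of the mechanism described, cooling by ΔT imposes a thermoelastic stress change of order EαΔT/(1−ν), which unclamps fractures that are already near their frictional strength. The numbers below are assumed, order-of-magnitude values, not parameters from the coupled model.

```python
# Back-of-the-envelope sketch with assumed parameters (not the study's model):
# cooling reduces the normal stress clamping a critically shear-stressed fracture,
# so the Mohr-Coulomb condition tau > mu * sigma_n_effective can be met.
E = 40e9       # Young's modulus, Pa (assumed)
nu = 0.25      # Poisson's ratio (assumed)
alpha = 1e-5   # linear thermal expansion coefficient, 1/K (assumed)
dT = -50.0     # cooling near the injected fluid, K (assumed)
mu_f = 0.7     # fracture friction coefficient (assumed)

d_sigma_thermal = E * alpha * dT / (1.0 - nu)   # thermoelastic stress change, Pa (negative = unloading)

sigma_n = 30e6                  # effective normal stress on the fracture, Pa (assumed)
tau = 0.99 * mu_f * sigma_n     # near-critical shear stress, Pa

sigma_n_cooled = sigma_n + d_sigma_thermal
slips = tau > mu_f * sigma_n_cooled
print(f"thermal unloading = {d_sigma_thermal / 1e6:.1f} MPa, shear slip triggered: {slips}")
```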

  2. Rupture directivity and slip distribution of the M 4.3 foreshock to the 1992 Joshua Tree earthquake, Southern California

    USGS Publications Warehouse

    Mori, J.

    1996-01-01

    Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
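
    The empirical Green's function step amounts to a stabilized spectral division: the small event's record captures the shared path and site response, so dividing it out of the mainshock record leaves an estimate of the relative source time function. The sketch below uses synthetic waveforms and an assumed water level; it is not the inversion used in the study.

```python
# Water-level spectral deconvolution sketch of the empirical Green's function (EGF)
# idea, applied to synthetic waveforms (not network data).
import numpy as np

def egf_deconvolve(target, egf, water_level=0.01):
    """Deconvolve `egf` from `target` in the frequency domain with a water level."""
    n = len(target)
    T = np.fft.rfft(target, n)
    G = np.fft.rfft(egf, n)
    floor = water_level * np.max(np.abs(G))
    G_stab = np.where(np.abs(G) < floor, floor * np.exp(1j * np.angle(G)), G)
    return np.fft.irfft(T / G_stab, n)

# Synthetic test: the "EGF" is a narrow pulse; the "mainshock" is that pulse
# convolved with a boxcar source time function, plus a little noise.
egf = np.exp(-((np.arange(256) - 30.0) ** 2) / 20.0)
stf = np.zeros(256); stf[30:50] = 1.0
target = np.convolve(egf, stf)[:256] + 0.01 * np.random.default_rng(1).normal(size=256)
rstf = egf_deconvolve(target, egf)   # approximate relative source time function
```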

  3. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (≳5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes ≳5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of a large eruption at 1.4×10⁻⁵.
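
    For the exponential model, the annual probability follows directly from the mean repose interval: P(eruption in Δt) = 1 − exp(−Δt/μ) ≈ Δt/μ. The sketch below shows that arithmetic with placeholder eruption ages, not the chronologies compiled in the paper.

```python
# Sketch of the exponential-distribution calculation with hypothetical eruption ages.
import numpy as np

ages_ka = np.array([825.0, 614.0, 315.0, 190.0, 90.0, 27.0, 1.1])   # placeholder ages, ka
intervals_yr = -np.diff(ages_ka) * 1000.0    # repose intervals in years
mean_interval = intervals_yr.mean()

annual_p = 1.0 - np.exp(-1.0 / mean_interval)   # ~ 1 / mean_interval when rates are small
print(f"mean repose = {mean_interval:.0f} yr, annual probability ≈ {annual_p:.1e}")
```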

  4. Dynamic triggering of creep events in the Salton Trough, Southern California by regional M ≥ 5.4 earthquakes constrained by geodetic observations and numerical simulations

    NASA Astrophysics Data System (ADS)

    Wei, Meng; Liu, Yajing; Kaneko, Yoshihiro; McGuire, Jeffrey J.; Bilham, Roger

    2015-10-01

    Since a regional earthquake in 1951, shallow creep events on strike-slip faults within the Salton Trough, Southern California have been triggered at least 10 times by M ≥ 5.4 earthquakes within 200 km. The high earthquake and creep activity and the long history of digital recording within the Salton Trough region provide a unique opportunity to study the mechanism of creep event triggering by nearby earthquakes. Here, we document the history of fault creep events on the Superstition Hills Fault based on data from creepmeters, InSAR, and field surveys since 1988. We focus on a subset of these creep events that were triggered by significant nearby earthquakes. We model these events by adding realistic static and dynamic perturbations to a theoretical fault model based on rate- and state-dependent friction. We find that the static stress changes from the causal earthquakes are less than 0.1 MPa and too small to instantaneously trigger creep events. In contrast, we can reproduce the characteristics of triggered slip with dynamic perturbations alone. The instantaneous triggering of creep events depends on the peak and the time-integrated amplitudes of the dynamic Coulomb stress change. Based on observations and simulations, the stress change amplitude required to trigger a creep event with 0.01 mm of surface slip is about 0.6 MPa. This threshold is at least an order of magnitude larger than the reported triggering threshold of non-volcanic tremors (2-60 kPa) and earthquakes in geothermal fields (5 kPa) and near shale gas production sites (0.2-0.4 kPa), which may result from differences in effective normal stress, fault friction, the density of nucleation sites in these systems, or triggering mechanisms. We conclude that shallow frictional heterogeneity can explain both the spontaneous and dynamically triggered creep events on the Superstition Hills Fault.
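
    A rough conversion often used in dynamic-triggering studies relates peak dynamic shear stress to peak ground velocity as Δτ ≈ G·PGV/Vs. The sketch below applies that approximation with assumed elastic parameters to compare candidate PGVs against the ~0.6 MPa threshold quoted above; it is not the rate-and-state simulation used in the study.

```python
# Rough peak-dynamic-stress estimate from peak ground velocity (PGV): dtau ~ G * PGV / Vs.
# Elastic parameters and PGV values are assumed for illustration.
G = 30e9           # shear modulus, Pa (assumed)
Vs = 3000.0        # shear-wave speed, m/s (assumed)
threshold = 0.6e6  # creep-event triggering threshold from the study, Pa

for pgv_cm_s in (1.0, 5.0, 10.0):
    dtau = G * (pgv_cm_s / 100.0) / Vs    # peak dynamic stress, Pa
    print(f"PGV = {pgv_cm_s:4.1f} cm/s -> dtau ≈ {dtau / 1e6:.2f} MPa, "
          f"exceeds threshold: {dtau >= threshold}")
```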

  5. The 1987 Whittier Narrows earthquake sequence in Los Angeles, southern California: Seismological and tectonic analysis

    NASA Astrophysics Data System (ADS)

    Hauksson, Egill; Jones, Lucile M.

    1989-07-01

    The October 1, 1987, Whittier Narrows earthquake (ML = 5.9) was located at 34°2.96'N, 118°4.86'W, at a depth of 14.6±0.5 km in the northeastern Los Angeles basin. The focal mechanism of the mainshock derived from first motion polarities shows pure thrust motion on west striking nodal planes with dips of 25°±5° and 65°±5°, respectively. The aftershocks define an approximately circular surface that dips gently to the north, centered at the hypocenter of the mainshock with a diameter of 4-6 km. Hence the spatial distribution of the mainshock and aftershocks, as well as the focal mechanism of the mainshock, indicates that the causative fault is a west striking thrust fault dipping 25° to the north, confined to depths of 10 to 16 km. Although most of the 59 aftershock focal mechanisms presented here document a complex sequence of faulting, they are consistent with deformation of the hanging wall caused by the thrust faulting observed in the mainshock. A cluster of reverse faulting events on north striking planes occurred within hours after the mainshock, 2 km to the west of the mainshock. The largest aftershock (ML = 5.3) occurred on October 4 and showed mostly right-lateral faulting on the same north-northwest striking plane within the hanging wall. Similarly, several left-lateral focal mechanisms are observed near the eastern edge of the mainshock rupture. The earthquake and calibration blast arrival time data were inverted to obtain two refined crustal velocity models and a set of station delays. When the blast is relocated using the new models and delays, the absolute hypocentral location bias is less than 0.5 km. The mainshock was followed by nearly 500 locatable aftershocks, which is a small number of aftershocks for this magnitude mainshock. The decay rate of aftershock occurrences with time was fast, while the b value was low (0.67±0.05) for a Los Angeles basin sequence.
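
    The quoted b value can be reproduced with the standard maximum-likelihood estimator of Aki (1965), b = log10(e) / (⟨M⟩ − (Mc − ΔM/2)), with uncertainty roughly b/√N. The sketch below applies it to a synthetic magnitude sample, not the Whittier Narrows catalogue.

```python
# Maximum-likelihood b-value estimate (Aki, 1965; Utsu binning correction),
# demonstrated on synthetic magnitudes rather than the actual aftershock catalogue.
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Return the maximum-likelihood b value and its ~1-sigma uncertainty."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    return b, b / np.sqrt(len(m))

rng = np.random.default_rng(2)
mags = 2.0 + rng.exponential(scale=1.0 / (0.67 * np.log(10.0)), size=500)  # target b ~ 0.67
b, b_err = b_value_mle(mags, m_c=2.0, dm=0.0)   # dm=0 for continuous synthetic magnitudes
print(f"b = {b:.2f} ± {b_err:.2f}")
```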

  6. Moving Mountains and Deep Crustal Earthquakes: Evidence for Deep Magma Injection Beneath Lake Tahoe, Nevada-California

    NASA Astrophysics Data System (ADS)