Science.gov

Sample records for earthquake loss estimation

  1. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessments produced in emergency mode by worldwide systems following strong earthquakes. Timely and correct action just after an event can yield significant benefits in saving lives; in this case, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of the disaster. The uncertainties on the parameters used in the estimation process are numerous and large: limited knowledge of the physical phenomena and of the parameters used to describe them; the overall adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very moment of shaking (with respect to the immediate threat, i.e. buildings and the like); knowledge of the source of the shaking; and so on. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not out of the reach of engineers for a large portion of the building stock); while the response of a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. How the population inside the buildings at the time of shaking is affected by the physical damage to those buildings is, by far, not precisely known.
    The paper analyzes the influence of uncertainties in the determination of strong-event parameters by alert seismological surveys, and of the simulation models used at all stages, from estimating shaking intensity

  2. Loss estimation of the Mamberamo earthquake

    NASA Astrophysics Data System (ADS)

    Damanik, R.; Sedayo, H.

    2016-05-01

    Papua tectonics are dominated by the oblique collision of the Pacific plate along the north side of the island. The very high relative plate motion (about 120 mm/year) between the Pacific and Papua-Australian plates gives this region a very high earthquake production rate, about twice that of Sumatra, the western margin of Indonesia. Most of the seismicity beneath the island of New Guinea is clustered near the Huon Peninsula, the Mamberamo region, and the Bird's Neck. At 04:41 local time (GMT+9) on July 28th, 2015, a large earthquake of Mw = 7.0 occurred on the West Mamberamo Fault System. The focal mechanisms are dominated by northwest-trending thrust mechanisms. A GMPE and an ATC vulnerability curve were used to estimate the distribution of damage. The mean loss estimated for this earthquake is IDR 78.6 billion. We estimate that the insured loss will be only a small portion of the total, largely due to deductibles.
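The GMPE-plus-vulnerability-curve workflow the abstract describes can be sketched as follows; the attenuation coefficients and the logistic damage curve below are illustrative placeholders, not the relations used in the study:

```python
import math

def pga_attenuation(magnitude, distance_km):
    # Illustrative GMPE: ln(PGA in g) linear in magnitude, decaying with
    # log distance. Coefficients are placeholders, not the study's GMPE.
    ln_pga = -1.0 + 0.5 * magnitude - 1.1 * math.log(distance_km + 10.0)
    return math.exp(ln_pga)

def mean_damage_ratio(pga_g):
    # Illustrative ATC-style vulnerability curve: a logistic placeholder,
    # not the actual ATC relation.
    return 1.0 / (1.0 + math.exp(-8.0 * (pga_g - 0.35)))

def estimate_loss(magnitude, sites):
    # sites: (distance_km, exposed_value_idr) pairs; loss = exposure x ratio.
    return sum(value * mean_damage_ratio(pga_attenuation(magnitude, d))
               for d, value in sites)
```

Summing exposure times a shaking-dependent mean damage ratio over locations is the common core of such scenario loss estimates.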

  3. Estimating economic losses from earthquakes using an empirical approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
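The GDP-based exposure scaling can be sketched roughly as follows; the country factor `alpha` and the intensity-dependent loss ratios are hypothetical stand-ins for PAGER's calibrated values:

```python
def expected_economic_loss(pop_by_mmi, loss_ratio_by_mmi, gdp_per_capita, alpha):
    # Economic exposure is approximated as population x per-capita GDP,
    # scaled by a country-specific multiplicative factor alpha (the paper's
    # strategy); the loss sums an intensity-dependent loss ratio over the
    # exposed value. alpha and the loss ratios here are invented examples.
    return sum(pop * gdp_per_capita * alpha * loss_ratio_by_mmi[mmi]
               for mmi, pop in pop_by_mmi.items())
```

Calibrating `alpha` and the loss ratios against a historical loss catalog is what makes such an indirect model usable for hindcasting.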

  4. Status of developing Earthquake Loss Estimation in Korea Using HAZUS

    NASA Astrophysics Data System (ADS)

    Kang, S. Y.; Kim, K. H.

    2015-12-01

    HAZUS, a loss estimation tool due to natural hazards, has been used in Korea. In the earlier development of earthquake loss estimation system in Korea, a ShakeMap due to magnitude 6.7 scenario earthquake in the southeastern Korea prepared by USGS was used. Attenuation relation proposed by Boore et al. (1997) is assumed to simulate the strong ground motion with distance. During the initial stage, details of local site characteristics and attenuation relations were not properly accounted. Later, the attenuation relation proposed by Sadigh et al. (1997) for site classes B, C, and D were reviewed and applied to the Korean Peninsula. Loss estimations were improved using the attenuation relation and the deterministic methods available in HAZUS. Most recently, a site classification map has been derived using geologic and geomorphologic data, which are readily available from the geologic and topographic maps of Korea. Loss estimations using the site classification map differ from earlier ones. For example, earthquake loss using ShakeMap overestimates house damages. 43% of houses are estimated to experience moderate or severe damage in the results using ShakeMap, while 23 % is estimated in those using the site classification map. The number of people seeking emergency shelters is also different from previous estimates. It is considered revised estimates are more realistic since the ground motions ensuing from earthquakes are better represented. In the next application, landslide, liquefaction and fault information are planned to be implemented in HAZUS. The result is expected to better represent any loss under the emergency situation, thus help the planning disaster response and hazard mitigations.

  5. Estimating annualized earthquake losses for the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope

    2015-01-01

    We make use of the most recent National Seismic Hazard Maps (the 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of the general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps on annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012$), roughly 80% of which can be attributed to the States of California, Oregon, and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic States of the Western United States, whereas the reduction is even more significant for the Central and Eastern United States.
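Annualized earthquake loss is, in essence, a rate-weighted sum of scenario losses; a minimal sketch (the actual Hazus AEL computation aggregates over many hazard levels, occupancy classes, and building types):

```python
def annualized_loss(event_set):
    # AEL as the rate-weighted sum of scenario losses.
    # event_set: iterable of (annual_rate, loss_usd) pairs.
    return sum(rate * loss for rate, loss in event_set)

def ael_change_percent(ael_old, ael_new):
    # Percent change when swapping hazard maps (e.g. 2008 -> 2014 cycle).
    return 100.0 * (ael_new - ael_old) / ael_old
```

Holding the exposure fixed while swapping the hazard input, as in the second helper, isolates the effect of the hazard-map update.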

  6. Global Building Inventory for Earthquake Loss Estimation and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David; Porter, Keith

    2010-01-01

    We develop a global database of building inventories using taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat’s demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature.

  7. Rapid estimation of earthquake loss based on instrumental seismic intensity: design and realization

    NASA Astrophysics Data System (ADS)

    Huang, Hongsheng; Chen, Lin; Zhu, Gengqing; Wang, Lin; Lin, Yanzhao; Wang, Huishan

    2013-11-01

    As a result of our ability to acquire large volumes of real-time earthquake observation data, coupled with increased computer performance, near-real-time instrumental seismic intensity can be obtained from ground motion data observed by instruments, using appropriate spatial interpolation methods. By combining vulnerability results from earthquake disaster research with earthquake disaster assessment models, we can estimate the losses caused by devastating earthquakes, in an attempt to provide more reliable information for earthquake emergency response and decision support. This paper analyzes the latest progress in rapid earthquake loss estimation methods in China and abroad. A new method for estimating earthquake losses based on rapid reporting of instrumental seismic intensity is proposed, and the corresponding software is developed. Finally, a case study of the ML 4.9 earthquake that occurred in Shunchang County, Fujian Province on March 13, 2007 illustrates the proposed method.
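One common choice for the "appropriate spatial interpolation methods" mentioned above is inverse-distance weighting; a minimal sketch, not necessarily the scheme used by the authors:

```python
def idw_intensity(stations, x, y, power=2.0):
    # Inverse-distance-weighted interpolation of instrumental intensity at
    # (x, y) from station observations given as (xi, yi, intensity) tuples.
    num = den = 0.0
    for xi, yi, obs in stations:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return obs  # query point coincides with a station
        weight = d2 ** (-power / 2.0)
        num += weight * obs
        den += weight
    return num / den
```

Operational systems such as ShakeMap use more elaborate interpolation conditioned on GMPEs, but the station-weighting idea is the same.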

  8. A global building inventory for earthquake loss estimation and risk management

    USGS Publications Warehouse

    Jaiswal, K.; Wald, D.; Porter, K.

    2010-01-01

    We develop a global database of building inventories using taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature. © 2010, Earthquake Engineering Research Institute.

  9. Comparing population exposure to multiple Washington earthquake scenarios for prioritizing loss estimation studies

    USGS Publications Warehouse

    Wood, Nathan J.; Ratliff, Jamie L.; Schelling, John; Weaver, Craig S.

    2014-01-01

    Scenario-based, loss-estimation studies are useful for gauging potential societal impacts from earthquakes but can be challenging to undertake in areas with multiple scenarios and jurisdictions. We present a geospatial approach using various population data for comparing earthquake scenarios and jurisdictions to help emergency managers prioritize where to focus limited resources on data development and loss-estimation studies. Using 20 earthquake scenarios developed for the State of Washington (USA), we demonstrate how a population-exposure analysis across multiple jurisdictions based on Modified Mercalli Intensity (MMI) classes helps emergency managers understand and communicate where potential loss of life may be concentrated and where impacts may be more related to quality of life. Results indicate that certain well-known scenarios may directly impact the greatest number of people, whereas other, potentially lesser-known, scenarios impact fewer people but consequences could be more severe. The use of economic data to profile each jurisdiction’s workforce in earthquake hazard zones also provides additional insight on at-risk populations. This approach can serve as a first step in understanding societal impacts of earthquakes and helping practitioners to efficiently use their limited risk-reduction resources.
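The population-exposure tabulation by jurisdiction and MMI class can be sketched as follows, assuming a hypothetical input of shake-map/census overlay cells:

```python
from collections import defaultdict

def exposure_by_mmi(cells):
    # Tally population per jurisdiction per MMI class from overlay cells,
    # given as (jurisdiction, mmi_class, population) tuples -- a
    # hypothetical representation of a shake-map/census-block intersection.
    table = defaultdict(lambda: defaultdict(int))
    for jurisdiction, mmi, population in cells:
        table[jurisdiction][mmi] += population
    return table

def strong_shaking_population(table, jurisdiction, min_mmi=8):
    # Population exposed to MMI >= min_mmi, where loss of life concentrates;
    # lower classes relate more to quality-of-life impacts.
    return sum(p for mmi, p in table[jurisdiction].items() if mmi >= min_mmi)
```

Comparing such tables across the 20 scenarios is what lets managers rank jurisdictions before committing to full loss-estimation studies.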

  10. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    USGS Publications Warehouse

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going development of PAGER’s loss estimation models, and discuss value-added web content that can be generated on exposure, damage, and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in the underlying building stock and vulnerability data at a global scale. Existing efforts with GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  11. A new Tool for Estimating Losses due to Earthquakes: QUAKELOSS2

    NASA Astrophysics Data System (ADS)

    Kaestli, P.; Wyss, M.; Bonjour, C.; Wiemer, S.; Wyss, B. M.

    2007-12-01

    WAPMERR and the Swiss Seismological Service are developing new software for estimating mean damage to buildings and the numbers of injured and fatalities due to earthquakes worldwide. The focus of applications is real-time estimates of losses after earthquakes in countries without dense seismograph networks, with results that are easy for relief agencies to digest. Therefore, the standard version of the software addresses losses by settlement, by subdivisions of settlements, and for important pieces of infrastructure. However, a generic design, an open-source policy, and well-defined interfaces will allow the software to work on any gridded or discrete building-stock data, to perform Monte Carlo simulations for error assessment, and to plug in more elaborate source models than simple point and line sources, and thus to compute realistic loss scenarios as well as probabilistic risk maps. It will provide interfaces to SHAKEMAP and PAGER, so that innovations developed for those programs may be used in QUAKELOSS2, and vice versa. A client-server design will provide a front-end web interface through which the user may manage servers directly as well as run the software in one's own laboratory. The input-output and mapping features will be designed to allow the user to run QUAKELOSS2 remotely with basic functions, as well as in a laboratory setting with a full-featured GIS setup for additional analysis. In many cases, the input data (earthquake parameters as well as population and building stock data) are poorly known for developing countries. Calibration of the loss estimates, using past damaging earthquakes and WAPMERR's four years' experience estimating losses, will help produce approximately correct results in countries with strong earthquake activity. A worldwide standard dataset on population and building stock will be provided as open source together with the software. The dataset will be improved successively, based on input from satellite images
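The Monte-Carlo error assessment mentioned above can be sketched generically; here only the magnitude is perturbed, whereas a real assessment would also sample location, depth, and building-stock parameters:

```python
import random
import statistics

def monte_carlo_losses(loss_model, magnitude, sigma_m, n=2000, seed=7):
    # Propagate magnitude uncertainty through a loss model by sampling:
    # draw perturbed magnitudes, evaluate the loss model for each, and
    # summarize the resulting loss distribution.
    rng = random.Random(seed)
    losses = [loss_model(rng.gauss(magnitude, sigma_m)) for _ in range(n)]
    return statistics.mean(losses), statistics.stdev(losses)
```

The spread of the sampled losses, not just their mean, is what relief agencies need in order to judge how much to trust a rapid estimate.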

  12. Loss estimates for a Puente Hills blind-thrust earthquake in Los Angeles, California

    USGS Publications Warehouse

    Field, E.H.; Seligson, H.A.; Gupta, N.; Gupta, V.; Jordan, T.H.; Campbell, K.W.

    2005-01-01

    Based on OpenSHA and HAZUS-MH, we present loss estimates for an earthquake rupture on the recently identified Puente Hills blind-thrust fault beneath Los Angeles. Given a range of possible magnitudes and ground motion models, and presuming a full fault rupture, we estimate the total economic loss to be between $82 and $252 billion. This range is not only considerably higher than a previous estimate of $69 billion, but also implies the event would be the costliest disaster in U.S. history. The analysis has also provided the following predictions: 3,000-18,000 fatalities, 142,000-735,000 displaced households, 42,000-211,000 in need of short-term public shelter, and 30,000-99,000 tons of debris generated. Finally, we show that the choice of ground motion model can be more influential than the earthquake magnitude, and that reducing this epistemic uncertainty (e.g., via model improvement and/or rejection) could reduce the uncertainty of the loss estimates by up to a factor of two. We note that a full Puente Hills fault rupture is a rare event (once every ∼3,000 years), and that other seismic sources pose significant risk as well. © 2005, Earthquake Engineering Research Institute.
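The observation that the choice of ground motion model can matter more than the magnitude can be illustrated with a toy comparison of epistemic spreads (the loss function and parameter values below are invented):

```python
def epistemic_spreads(loss_fn, magnitudes, gmpes):
    # Spread of losses across GMPEs at the first magnitude, versus across
    # magnitudes with the first GMPE. loss_fn(magnitude, gmpe) -> loss.
    across_gmpes = [loss_fn(magnitudes[0], g) for g in gmpes]
    across_mags = [loss_fn(m, gmpes[0]) for m in magnitudes]
    return (max(across_gmpes) - min(across_gmpes),
            max(across_mags) - min(across_mags))
```

When the GMPE-induced spread dominates, rejecting or improving ground motion models shrinks the loss range more than refining the magnitude does.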

  13. Regional earthquake loss estimation in the Autonomous Province of Bolzano - South Tyrol (Italy)

    NASA Astrophysics Data System (ADS)

    Huttenlau, Matthias; Winter, Benjamin

    2013-04-01

    Besides storm events, geophysical events cause the majority of natural hazard losses on a global scale. However, in alpine regions with a moderate earthquake risk potential, such as the study area, and with correspondingly little presence in the collective memory, this source of risk is often neglected in favour of gravitational and hydrological hazard processes. In this context, the comparative analysis of potential disasters and emergencies at the national level in Switzerland (the Katarisk study) has shown that earthquakes are in general the most serious source of risk. The following study was conducted in the Autonomous Province of Bolzano - South Tyrol (Italy) in order to estimate the potential losses of earthquake events for different return periods and the loss dimensions of extreme events. The methodology follows the generally accepted risk concept based on the risk components hazard, elements at risk, and vulnerability, whereby risk is not defined holistically (direct, indirect, tangible and intangible) but with the risk category of losses to buildings and inventory as a general risk proxy. The hazard analysis is based on a regional macroseismic scenario approach: the settlement centre of each of the 116 communities is defined as a potential epicentre, and for each epicentre four epicentral scenarios (return periods of 98, 475, 975 and 2475 years) are calculated using the simple but approved and generally accepted attenuation law of Sponheuer (1960). The relevant input parameters for the epicentral scenarios are (i) the macroseismic intensity and (ii) the focal depth. The macroseismic intensities are based on a probabilistic seismic hazard analysis (PSHA) of the Italian earthquake catalogue at the community level (Dipartimento della Protezione Civile). The relevant focal depths are taken as a mean within a defined buffer of the focal depths of the harmonized earthquake catalogues of Italy and Switzerland as well as
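A Kövesligethy/Sponheuer-type macroseismic attenuation law, of the kind cited above, can be sketched as follows; the absorption coefficient used here is a placeholder, not the study's calibrated value:

```python
import math

def sponheuer_intensity(i0, epicentral_km, depth_km, alpha=0.001):
    # Koevesligethy/Sponheuer-style attenuation:
    #   I = I0 - 3*log10(r/h) - 3*alpha*(r - h)*log10(e),  r = sqrt(d^2 + h^2)
    # with epicentral intensity I0, focal depth h, and absorption alpha
    # (1/km, placeholder value).
    r = math.hypot(epicentral_km, depth_km)
    return (i0 - 3.0 * math.log10(r / depth_km)
            - 3.0 * alpha * (r - depth_km) * math.log10(math.e))
```

At the epicentre the hypocentral distance equals the focal depth, so the predicted intensity reduces to I0, which is why (i) intensity and (ii) focal depth suffice as scenario inputs.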

  14. Ways to increase the reliability of earthquake loss estimations in emergency mode

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander

    2016-04-01

    The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many other countries show that the authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can yield significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough, rapid information provided in emergency mode by "global systems" (i.e. systems operated without regard to where the earthquake has occurred) depends strongly on many factors related to the input data and simulation models used in such systems. The paper analyses the contribution of the different factors to the total "error" of fatality estimation in emergency mode. Examples of four strong events in Nepal, Italy, China and Italy lead to the conclusion that the reliability of loss estimates is influenced first of all by the uncertainties in the determination of the event parameters (coordinates, magnitude, source depth); this factor group's rating is the highest, with a degree of influence on the reliability of loss estimates of about 50%. Second comes the factor group responsible for simulating the macroseismic field, with a degree of influence of about 30%. Last comes the factor group describing the distribution of the built environment and the regional vulnerability functions, which contributes about 20% of the loss estimation error. Ways to minimize the influence of the different factors on the reliability of loss assessment in near real time are proposed. The first is to rate the seismological surveys for different zones, in an attempt to decrease the uncertainties in the input earthquake parameters determined in emergency mode.
The second one is to "calibrate" the "global systems" drawing advantage
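The 50/30/20 rating of factor groups amounts to apportioning the total error variance among independent groups; a minimal sketch with invented variance values:

```python
def factor_group_shares(group_variances):
    # Relative contribution of each (assumed independent) factor group to
    # the total error variance; independent variances add, so each share
    # is that group's variance divided by the total.
    total = sum(group_variances.values())
    return {name: v / total for name, v in group_variances.items()}
```

Under the independence assumption, halving the variance of the dominant group (event parameters) cuts the total error far more than the same effort spent on the vulnerability group.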

  15. Estimating earthquake potential

    USGS Publications Warehouse

    Page, R.A.

    1980-01-01

    The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential. 

  16. Estimation of damage and human losses due to earthquakes worldwide - QLARM strategy and experience

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Rosset, P.; Wyss, M.; Wiemer, S.; Bonjour, C.; Cua, G.

    2009-04-01

    Within the framework of the IMPROVE project, we are constructing our second-generation loss estimation tool, QLARM (earthQuake Loss Assessment for Response and Mitigation). At the same time, we are upgrading the input data to be used in real-time and scenario modes. The software and databases will be open to all scientific users. The estimates include: (1) the total number of fatalities and injured, (2) casualties by settlement, (3) the percentage of buildings in five damage grades in each settlement, (4) a map showing mean damage by settlement, and (5) the functionality of large medical facilities. We present here our strategy and progress so far in constructing and calibrating the new tool. The QLARM worldwide database of elements at risk consists of point and discrete city models with the following parameters: (1) soil amplification factors; (2) the distribution of the building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98); (3) the most recent population numbers by settlement or district; (4) information regarding medical facilities, where available. We calculate the seismic demand in terms of (a) macroseismic (seismic intensity) or (b) instrumental (PGA) parameters. Attenuation relationships predicting both parameters will be used for different regions worldwide, considering the tectonic regime and wave propagation characteristics. We estimate damage and losses using: (i) vulnerability models pertinent to EMS-98 vulnerability classes; (ii) building collapse rates pertinent to different regions worldwide; and (iii) casualty matrices pertinent to EMS-98 vulnerability classes. We also provide approximate estimates of the functionality of large medical facilities, considering their structural and non-structural damage and the loss of function of medical equipment and installations. We calibrate the QLARM database and the loss estimation tool using macroseismic observations and information on damage and human losses from past earthquakes

  17. A simulation of Earthquake Loss Estimation in Southeastern Korea using HAZUS and the local site classification Map

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.

    2013-12-01

    Regionally varying seismic hazards can be estimated using an earthquake loss estimation system (e.g. HAZUS-MH). Estimates for actual earthquakes help federal and local authorities develop rapid, effective recovery measures; estimates for scenario earthquakes help in designing a comprehensive earthquake hazard mitigation plan. Local site characteristics influence the ground motion. Although direct measurements are desirable for constructing a site-amplification map, such data are expensive and time-consuming to collect. We therefore derived a site classification map of the southern Korean Peninsula using geologic and geomorphologic data, which are readily available for the entire region. Class B sites (mainly rock) predominate in the area, although localized areas of softer soils are found along major rivers and seashores. The site classification map is compared with independent site classification studies to confirm that it effectively represents the local behavior of site amplification during an earthquake. We then estimated the losses due to a magnitude 6.7 scenario earthquake in Gyeongju, southeastern Korea, with and without the site classification map, and observed significant differences. The loss estimated without the site classification map decreased smoothly with increasing epicentral distance, while the loss estimated with the map varied from region to region, reflecting both epicentral distance and local site effects. The major cause of the large loss expected in Gyeongju is the short epicentral distance. Pohang Nam-Gu is located farther from the earthquake source region; nonetheless, its loss estimates are as large as those in Gyeongju, which is attributed to the site effect of the soft soil found widely in the area.
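A site classification map enters the loss chain as a site-class-dependent amplification of rock motion; a sketch with placeholder factors (real NEHRP-style Fa values also depend on the shaking level):

```python
# Placeholder short-period amplification factors by site class; actual
# code-based factors vary with input shaking level as well as class.
AMPLIFICATION = {"B": 1.0, "C": 1.2, "D": 1.6}

def surface_pga(rock_pga_g, site_class):
    # The mechanism by which a site classification map alters the loss
    # input: rock motion is scaled by a class-dependent factor before
    # damage is computed, so softer sites (class D) see stronger shaking.
    return rock_pga_g * AMPLIFICATION[site_class]
```

This is why losses computed with the map no longer decay smoothly with distance: pockets of class C/D soil raise the shaking, and hence the damage, locally.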

  18. Integration of Near-fault Earthquake Ground Motion Simulations in Damage and Loss Estimation Procedures.

    NASA Astrophysics Data System (ADS)

    Faccioli, E.; Lagomarsino, S.; Demartinos, K.; Smerzini, C.; Stuppazzini, M.; Vanini, M.; Villani, M.; Smolka, A.; Allmann, A.

    2010-05-01

    In this contribution we investigate the advantages and limitations of integrating standard damage and loss estimation procedures with synthetic data from large-scale 3D numerical simulations, capable of reproducing the coupling of near-fault conditions, including the focal mechanism of the source and directivity effects, with complex geological configurations such as deep alluvial basins or irregular topographic profiles. As a matter of fact, the largest portion of damage and losses during a major earthquake occurs in near-field conditions, where earthquake ground motion is typically poorly constrained by standard attenuation relationships, which may not rest on a sufficiently detailed description of the seismic source and of the local geological conditions. As a case study we use a scenario earthquake of Mw 6.4 occurring in the town of Sulmona, Italy, along the active Mount Morrone fault. The area, located only 40 km south of L'Aquila, was selected in the frame of the Italian Project S2 (DPC-INGV 2007-2009) thanks to the amount of geological and seismological information available, which allowed us, on the one hand, to perform near-fault 3D earthquake ground motion simulations and, on the other, a reliable quantification of the potential damage thanks to accurate data characterizing the building stock. The 3D simulations have been carried out with a high-performance spectral-element tool, GeoELSE (http://geoelse.stru.polimi.it), designed for linear, nonlinear viscoelastic, and viscoplastic wave propagation analyses in large-scale earth models, including the seismic source, the propagation path, the local near-surface geology and, if needed, the interaction with man-made structures. The parallel implementation of the code GeoELSE resolves tens of millions of degrees of freedom up to 2.5 Hz in reasonable computer time. Damage and loss evaluations based on the results of the numerical simulations are compared with

  19. Comparison of Loss Estimates for Greater Victoria, British Columbia, from Scenario Earthquakes using HAZUS - Implications for Risk, Response and Recovery

    NASA Astrophysics Data System (ADS)

    Zaleski, M. P.; Clague, J. J.

    2012-12-01

    Victoria, British Columbia, lies near the Cascadia subduction zone, where three distinct classes of earthquakes contribute to local seismic risk. The largest-magnitude events are subduction-interface earthquakes, which generate widespread shaking across the Pacific Northwest region from British Columbia to northern California. Interface-earthquake risk is mitigated somewhat by the low frequency of events and the distance from the source to populated areas. The largest contribution to the probabilistic hazard is from strong deep-focus earthquakes within the down-going Juan de Fuca slab. Intraslab quakes are frequent, but attenuation from depth results in smaller ground motions. The highest-loss scenarios are associated with major earthquakes on shallow west- to northwest-trending crustal faults that extend across Puget Sound and the southern Strait of Georgia. These faults are a result of compression in the North American plate associated with oblique subduction of the Juan de Fuca slab beneath southwestern British Columbia and northwestern Washington. Our understanding of frequency-magnitude relations for individual shallow-crustal faults is hampered by a widespread cover of Pleistocene glacial deposits, thus the risk is difficult to estimate. We have prepared shake maps for several scenario earthquakes that take into account local geologic conditions. We compare strong ground motions from local crustal fault sources with Cascadia plate-boundary, intraslab and probabilistic building code ground motions. Hazard maps from scenario events are combined with models of the built environment within the HAZUS platform to generate loss estimates. The results may be used to identify vulnerabilities, focus advance mitigation efforts, and guide response and recovery planning.

  20. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    SciTech Connect

    Franco, Guillermo; Shen-Tu, Bing Ming; Bazzurro, Paolo; Goretti, Agostino; Valensise, Gianluca

    2008-07-08

Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive usage of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature that encompass 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to estimate the impact of the uncertainty on input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million Euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and vulnerability functions of the building stock contributes the most to the uncertainty in loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.
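The combinatorial design described above (12 event scenarios x 2 ground motion prediction equations x 16 damage-function sets = 384 loss scenarios) can be sketched as follows. The `scenario_loss` function below is a hypothetical random stand-in for the study's full catastrophe-model run; only the enumeration and the coefficient-of-variation calculation reflect the abstract.

```python
import itertools
import random
import statistics

random.seed(1)

# Hypothetical stand-in for a catastrophe model: the study derives each
# scenario loss from a full model run, not from a random draw like this.
def scenario_loss(event, gmpe, damage_fn):
    return 9500.0 * random.lognormvariate(0, 0.45)  # million EUR, illustrative

events, gmpes, damage_fns = range(12), range(2), range(16)
losses = [scenario_loss(e, g, d)
          for e, g, d in itertools.product(events, gmpes, damage_fns)]
assert len(losses) == 384  # 12 events x 2 GMPEs x 16 damage-function sets

mean = statistics.mean(losses)
cv = statistics.stdev(losses) / mean  # coefficient of variation of the loss
print(f"mean loss ~ {mean:.0f} M EUR, CV ~ {cv:.2f}")
```

A sensitivity analysis then asks how much of that spread disappears when one input dimension (e.g. the GMPE) is held fixed.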

  1. Observed and estimated economic losses in Guadeloupe (French Antilles) after Les Saintes Earthquake (2004). Application to risk comparison

    NASA Astrophysics Data System (ADS)

    Monfort, Daniel; Reveillère, Arnaud; Lecacheux, Sophie; Muller, Héloise; Grisanti, Ludovic; Baills, Audrey; Bertil, Didier; Sedan, Olivier; Tinard, Pierre

    2013-04-01

The main objective of this work is to compare the potential direct economic losses from two different hazards in Guadeloupe (French Antilles), earthquakes and storm surges, for different return periods. To validate some of the hypotheses made concerning building typologies and their insured values, a comparison between real and estimated economic losses is carried out for a real event. In 2004, a Mw 6.3 earthquake struck Les Saintes, a small archipelago south of Guadeloupe. The heaviest intensities reached VIII in the municipalities of Les Saintes and decreased from VII to IV in the other municipalities of Guadeloupe. The CCR, the French reinsurance organisation, has provided data on the total insured economic losses estimated per municipality (as of 2011) and the insurance penetration ratio, i.e. the ratio of insured exposed elements per municipality. Other information on observed damaged structures is quite irregular across the archipelago, the only reliable dataset being the observed macroseismic intensity per municipality (from the field survey by BCSF). These data at Guadeloupe's scale have been compared with the results of a retrospective damage scenario for this earthquake, computed from the vulnerability of current buildings and the mean economic value of each building type, and taking into account local amplification effects on earthquake propagation. In general the results are quite similar, but with some significant differences. The scenario results correlate strongly with the spatial attenuation of the earthquake intensity; the heaviest economic losses are concentrated in the municipalities exposed to considerable, damaging intensities (VII to VIII). On the other side, CCR data show that heavy economic damages are not only located in the most impacted cities but also in the municipalities of the archipelago that are most important in terms of economic activity

  2. Planning a Preliminary program for Earthquake Loss Estimation and Emergency Operation by Three-dimensional Structural Model of Active Faults

    NASA Astrophysics Data System (ADS)

    Ke, M. C.

    2015-12-01

Large-scale earthquakes often cause serious economic losses and many deaths. Because the magnitude, time and location of earthquakes still cannot be predicted, pre-disaster risk modeling and post-disaster operations are essential for reducing earthquake damage. To understand earthquake disaster risk, earthquake-simulation technology is commonly used to build earthquake scenarios; point sources, fault-line sources and fault-plane sources are the models most often used as the seismic source of such scenarios. The assessment results that these different models yield for risk assessment and emergency operations are useful, but their accuracy can still be improved. This program invites experts and scholars from Taiwan University, National Central University, and National Cheng Kung University, and uses historical earthquake records, geological data and geophysical data to build three-dimensional structural planes of active faults at depth. The purpose is to replace projected fault planes with subsurface fault planes that are closer to reality. The accuracy of earthquake-prevention analyses can be upgraded with this database, and the three-dimensional data will then be applied to different stages of disaster prevention. Before a disaster, results of earthquake risk analysis obtained with the three-dimensional fault-plane data are closer to the real damage. During a disaster, the three-dimensional fault-plane data can help estimate the distribution of aftershocks and the most seriously damaged areas. In 2015 the program used 14 geological profiles to build the three-dimensional data of the Hsinchu and Hsincheng faults. Other active faults will be completed in 2018 and actually applied to earthquake disaster prevention.

  3. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 3. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 4. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Level 2 analysis of the ELER Software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macro-economic loss quantifiers) in urban areas.
The basic Shake Mapping is similar to the Level 0 and Level 1 analyses; however, options are available for more sophisticated treatment of site response through externally entered data and improvement of the shake map through incorporation
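The shake-mapping step can be illustrated with a toy intensity prediction equation. The coefficients below are generic placeholders for illustration only; ELER itself uses region-specific attenuation relationships and shear wave velocity distributions.

```python
import math

# Illustrative intensity prediction equation with generic, made-up
# coefficients (not the region-specific relationships used by ELER).
def predicted_intensity(mag, repi_km, depth_km=10.0):
    r = math.hypot(repi_km, depth_km)  # hypocentral distance in km
    return 1.5 * mag - 3.5 * math.log10(r) + 3.0

# Shake mapping in miniature: intensity along a transect of distances
transect = {d: predicted_intensity(6.5, d) for d in (5, 20, 50, 100)}
for d, i in transect.items():
    print(f"{d:4d} km  I ~ {i:.1f}")
```

A real shake map evaluates such a relationship on a 2-D grid and then corrects it with site amplification factors and observed station data.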

  4. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  5. A new method for the production of social fragility functions and the result of its use in worldwide fatality loss estimation for earthquakes

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

A review of over 200 fatality models for earthquake loss estimation from various authors over the past 50 years has identified key parameters that influence fatality estimation in each of these models. These are often very specific and cannot be readily adapted globally. In the doctoral dissertation of the author, a new method is used for regression of fatalities against intensity using loss functions based not only on fatalities, but also on population models and other socioeconomic parameters created through time for every country worldwide for the period 1900-2013. A calibration of functions was undertaken for 1900-2008, and each individual quake was analysed in real time from 2009-2013, in conjunction with www.earthquake-report.com. Using the CATDAT Damaging Earthquakes Database, which contains socioeconomic loss information for 7208 damaging earthquake events from 1900-2013 including disaggregation of secondary effects, fatality estimates for over 2035 events have been re-examined for 1900-2013. In addition, 99 of these events have detailed data for individual cities and towns or have been reconstructed to create a death rate as a percentage of population. Many historical isoseismal maps and macroseismic intensity datapoint surveys collected globally have been digitised and modelled, covering around 1353 of these 2035 fatal events, to include an estimate of population, occupancy and socioeconomic climate at the time of the event in each intensity bracket. In addition, 1651 events without fatalities but causing damage have also been examined in this way. The production of socioeconomic and engineering indices such as HDI and building vulnerability has been undertaken at country level and state/province level, leading to a dataset allowing regressions not only using a static view of risk, but also allowing for the change in the socioeconomic climate between the earthquake events. This means that a year 1920 event in a country will not simply be
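The core regression idea, fitting a fatality rate as a function of macroseismic intensity, can be sketched with a toy log-linear fit. Both the data points and the functional form below are purely illustrative; the dissertation fits country-specific functions on CATDAT data with socioeconomic covariates.

```python
import math

# Hypothetical (intensity, observed death rate) pairs for one country
data = [(6.0, 1e-6), (7.0, 1e-5), (8.0, 1e-4), (9.0, 1e-3)]

# Ordinary least squares fit of log10(rate) = a * I + b
n = len(data)
sx = sum(i for i, _ in data)
sy = sum(math.log10(r) for _, r in data)
sxx = sum(i * i for i, _ in data)
sxy = sum(i * math.log10(r) for i, r in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope of log10(rate) vs I
b = (sy - a * sx) / n

def death_rate(intensity):
    return 10.0 ** (a * intensity + b)

# Expected fatalities for a hypothetical exposed population at I = 8.5
expected_deaths = death_rate(8.5) * 200_000
```

Repeating such a fit per country, with population and socioeconomic indices re-evaluated for each event year, is what lets historical events be compared on a common footing.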

  6. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    USGS Publications Warehouse

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.
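The exposure figures quoted above (population at Modified Mercalli Intensity VIII or greater) come from intersecting a shaking grid with gridded population. A minimal sketch of that aggregation, using toy grid cells rather than real ShakeMap output:

```python
# PAGER-style exposure aggregation in miniature: sum gridded population
# per shaking-intensity band (toy grid cells, not real ShakeMap output).
grid = [  # (population, estimated MMI) per grid cell
    (120_000, 9.1), (300_000, 8.4), (800_000, 7.6),
    (2_000_000, 6.2), (5_000_000, 4.8),
]

exposure = {}
for pop, mmi in grid:
    band = int(mmi)  # band 8 covers MMI [8, 9)
    exposure[band] = exposure.get(band, 0) + pop

severe = sum(pop for pop, mmi in grid if mmi >= 8.0)  # MMI VIII or greater
print(f"population exposed to MMI VIII+: {severe:,}")
```

As the shaking estimate is revised (e.g. once the fault-rupture dimensions are known), the same aggregation is simply re-run on the updated grid, which is how the Wenchuan exposure estimate moved from 1.2 to 5.2 million.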

  7. Trends in global earthquake loss

    NASA Astrophysics Data System (ADS)

    Arnst, Isabel; Wenzel, Friedemann; Daniell, James

    2016-04-01

Based on the CATDAT damage and loss database, we analyse global trends of earthquake losses (in current values) and fatalities for the period between 1900 and 2015 from a statistical perspective. For this time period the data are complete for magnitudes above 6. First, we study the basic statistics of losses and find that losses below US$ 10 billion approximately satisfy a power law with an exponent of 1.7 for the cumulative distribution. Higher loss values are modelled with the Generalized Pareto Distribution (GPD). The 'transition' between power law and GPD is determined with the Mean Excess Function. We split the data set into pre-1955 and post-1955 loss data, as the exposure in those two periods is significantly different due to population growth. The Annual Average Loss (AAL) for direct damage from events below US$ 10 billion differs by a factor of 6, whereas the incorporation of the extreme loss events increases the AAL from US$ 25 billion/yr to US$ 30 billion/yr. Annual Average Deaths (AAD) show little (30%) difference for events below 6,000 fatalities, and AAD values of 19,000 and 26,000 deaths per year if extreme values are incorporated. With data on the global Gross Domestic Product (GDP), which reflects annual expenditures (consumption, investment, government spending), and on capital stock, we relate losses to the economic capacity of societies and find that GDP (in real terms) grows much faster than losses, so that losses play a decreasing role given the growing prosperity of mankind. This reasoning does not necessarily apply on a regional scale. The main conclusions of the analysis are that (a) a correct projection of historic loss values to present-day US$ values is critical; (b) extreme value analysis is mandatory; (c) growing exposure is reflected in the AAL and AAD results for the pre- and post-1955 periods; (d) scaling loss values with global GDP data indicates that the relative size of losses (from a global perspective) decreases rapidly over time.
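The tail diagnostics named above can be sketched with standard-library tools. The sample below is synthetic (a Pareto draw with exponent 1.7 standing in for the CATDAT losses); the empirical mean excess function locates the power-law-to-GPD transition, and a Hill-type estimator recovers the tail exponent.

```python
import math
import random

random.seed(0)
# Synthetic loss sample with a Pareto tail (exponent 1.7), standing in
# for the CATDAT loss data; units are notional billions of US$.
losses = [random.paretovariate(1.7) for _ in range(5000)]

def mean_excess(data, u):
    exc = [x - u for x in data if x > u]
    return sum(exc) / len(exc)

# An approximately linear mean excess function above a threshold u is the
# usual diagnostic for switching from the bulk model to a GPD tail.
for u in (2.0, 5.0, 10.0):
    print(f"u = {u:>4}  e(u) = {mean_excess(losses, u):.2f}")

# Hill estimator of the tail exponent from the k largest observations
k = 500
top = sorted(losses)[-k:]
alpha = k / sum(math.log(x / top[0]) for x in top)
print(f"estimated tail exponent ~ {alpha:.2f}")
```

On real loss data the same diagnostics would be applied separately to the pre-1955 and post-1955 subsets, since the exposure differs between them.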

  8. Origin of Human Losses due to the Emilia Romagna, Italy, M5.9 Earthquake of 20 May 2012 and their Estimate in Real Time

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2012-12-01

Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues about the influence of error sources. If final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the final location by the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss-estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the final epicenter released by INGV had been used. These four numbers, being small, do not differ statistically; thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV has reported intensities I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that the calculated intensities in the 17 instances with significant differences were too high by one unit on average. By assuming higher-than-average attenuation within standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations, the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher-than-average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported. Thus, adjusting the attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two. The source of the fatalities is

  9. Pan-European Seismic Risk Assessment: A proof of concept using the Earthquake Loss Estimation Routine (ELER)

    NASA Astrophysics Data System (ADS)

    Corbane, Christina; Hancilar, Ufuk; Silva, Vitor; Ehrlich, Daniele; De Groeve, Tom

    2016-04-01

One of the key objectives of the new EU civil protection mechanism is an enhanced understanding of the risks the EU is facing. Developing a European perspective may create significant opportunities for successfully combining resources toward the common objective of preventing and mitigating shared risks. Risk assessments and mapping represent the first step in these preventive efforts. The EU is facing an increasing number of natural disasters; among them, earthquakes are the second deadliest after extreme temperatures. A better shared understanding of where seismic risk lies in the EU helps identify which regions are most at risk and where more detailed seismic risk assessments are needed. In that scope, seismic risk assessment models at a pan-European level have great potential for providing an overview of the expected economic and human losses using a homogeneous quantitative approach and harmonized datasets. This study strives to demonstrate the feasibility of performing a probabilistic seismic risk assessment at a pan-European level with an open-access methodology and open datasets available across the EU. It also aims to highlight the challenges, data needs and information gaps for a consistent seismic risk assessment at the pan-European level. The study constitutes a "proof of concept" that can complement the information provided by Member States in their National Risk Assessments. Its main contribution lies in pooling open-access data from different sources into a homogeneous format, which could serve as baseline data for performing more in-depth risk assessments in Europe.

  10. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
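The empirical model described above expresses a country- or region-specific fatality rate as a function of shaking intensity. A common functional choice for such rate curves is a two-parameter lognormal CDF; the sketch below uses that form with illustrative placeholder parameters, not the calibrated country values.

```python
from math import erf, log, sqrt

# Hedged sketch of an empirical fatality-rate curve: a two-parameter
# lognormal CDF of shaking intensity. theta and beta below are
# illustrative placeholders, not calibrated country parameters.
def fatality_rate(mmi, theta=12.0, beta=0.2):
    # lognormal CDF: Phi(ln(mmi / theta) / beta)
    return 0.5 * (1.0 + erf(log(mmi / theta) / (beta * sqrt(2.0))))

# Expected deaths: fatality rate times population exposed per intensity bin
exposed = {7: 500_000, 8: 120_000, 9: 20_000}  # hypothetical exposure
expected_deaths = sum(fatality_rate(i) * pop for i, pop in exposed.items())
```

Fitting theta and beta per country against the catalog of past fatal earthquakes is what turns this curve into a country-specific vulnerability model that can be updated as new events occur.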

  11. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
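The two-phase scheme reduces co-earthquake computation to a table lookup: losses per grid cell are pre-computed for each intensity level, then summed once the theoretical isoseismal field is known. A minimal sketch with hypothetical cells and values:

```python
# Minimal sketch of the two-phase method; all cells and values are toy.

# Pre-earthquake phase: loss per 30'' x 30'' cell and seismic intensity,
# derived offline from exposure data and damage probability matrices.
precomputed = {
    "cell_a": {6: 0.10, 7: 1.20, 8: 8.00, 9: 30.00},  # million, hypothetical
    "cell_b": {6: 0.05, 7: 0.60, 8: 4.00, 9: 15.00},
}

# Co-earthquake phase: the theoretical isoseismal map assigns an
# intensity to each cell; losses are then just looked up and summed.
intensity_field = {"cell_a": 8, "cell_b": 7}

total_loss = sum(precomputed[c][i] for c, i in intensity_field.items())
print(f"estimated loss: {total_loss:.1f} million")
```

Because the expensive damage modeling happens entirely offline, the co-earthquake phase scales to millions of cells within the response-time budget.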

  12. ELER software - a new tool for urban earthquake loss assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.

    2010-12-01

    Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. 
SELENA v4.0 (Molina et al., 2008) and

  13. Extreme Earthquake Risk Estimation by Hybrid Modeling

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2012-12-01

The estimation of the hazard and of the economic consequences, i.e. the risk, associated with the occurrence of extreme-magnitude earthquakes near urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, represents a complex challenge: it involves the propagation of seismic waves through large volumes of the Earth's crust, from unusually large seismic source ruptures to the infrastructure location. The large number of casualties and the huge economic losses observed for such earthquakes, some of which have recurrence intervals of hundreds or thousands of years, call for new paradigms and methodologies in order to generate better estimates of both the seismic hazard and its consequences and, if possible, to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), so that technological and economic policies can be implemented to mitigate and reduce those consequences as much as possible. Here we propose a hybrid modeling approach that uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved with a 3D finite-difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK, combined with empirical Green's function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of plausible extreme-earthquake scenarios corresponding to synthetic seismic sources, and to enlarge those samples using feed-forward NNs. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application to the estimation of the hazard and the economic consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican

  14. The OPAL Project: Open source Procedure for Assessment of Loss using Global Earthquake Modelling software

    NASA Astrophysics Data System (ADS)

    Daniell, James

    2010-05-01

    This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure has been developed to provide a framework for optimisation of a Global Earthquake Modelling process through: 1) Overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost and technology); 2) Preliminary research, acquisition and familiarisation with all available ELE software packages; 3) Assessment of these 30+ software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4) Loss analysis for a deterministic earthquake (Mw7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment), a capacity spectrum based method HAZUS (HAZards United States) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach) software which was adapted for use in order to compare the different processes needed for the production of damage, economic and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data. Keywords: OPAL, displacement-based, DBELA, earthquake loss estimation, earthquake loss assessment, open source, HAZUS

  15. Losses from the Northridge earthquake: disruption to high-technology industries in the Los Angeles Basin.

    PubMed

    Suarez-Villa, L; Walrod, W

    1999-03-01

    This study explores the relationship between industrial location geography, metropolitan patterns and earthquake disasters. Production losses from the 1994 Northridge earthquake to the Los Angeles Basin's most important high-technology industrial sector are evaluated in the context of that area's polycentric metropolitan form. Locations for each one of the Los Angeles Basin's 1,126 advanced electronics manufacturing establishments were identified and mapped, providing an indication of the patterns and clusters of the industry. An extensive survey of those establishments gathered information on disruptions from the Northridge earthquake. Production losses were then estimated, based on the sampled plants' lost workdays and the earthquake's distance-decay effects. A conservative estimate of total production losses to establishments in seven four-digit SIC advanced electronics industrial groups placed their value at US$220.4 million. Based on this estimate of losses, it is concluded that the Northridge earthquake's economic losses were much higher than initially anticipated. PMID:10204286
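The per-plant loss estimate described above combines lost workdays with a distance-decay effect. A toy version of that calculation, with illustrative parameters rather than the study's calibrated values:

```python
import math

# Toy production-loss estimate: daily output times lost workdays, scaled
# by an exponential distance-decay factor. All parameter values are
# illustrative, not the study's calibrated ones.
def plant_loss(daily_output_musd, lost_workdays, dist_km, decay=0.02):
    return daily_output_musd * lost_workdays * math.exp(-decay * dist_km)

plants = [  # (daily output in US$ million, lost workdays, km from epicenter)
    (0.5, 10, 5), (0.3, 6, 20), (0.8, 3, 60),
]
total = sum(plant_loss(*p) for p in plants)
print(f"estimated production loss: US${total:.1f} million")
```

Summing such terms over the 1,126 surveyed establishments, with lost workdays taken from the survey responses, is the kind of aggregation that yields the US$220.4 million figure.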

  16. Ten Years of Real-Time Earthquake Loss Alerts

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2013-12-01

The most important parameters of an earthquake disaster, in order of priority, are: the number of fatalities, the number of injured, the mean damage as a function of settlement, and the expected intensity of shaking at critical facilities. The requirements for calculating these parameters in real time are: 1) availability of reliable earthquake source parameters within minutes; 2) capability of calculating expected intensities of strong ground shaking; 3) data sets on population distribution and on the condition of the building stock as a function of settlement; 4) data on the locations of critical facilities; 5) verified methods of calculating damage and losses; 6) personnel available on a 24/7 basis to perform and review these calculations. There are three services available that distribute information about the likely consequences of earthquakes within about half an hour of the event. Two of these calculate losses; one gives only general information. Although much progress has been made during the last ten years in improving the data sets and the calculation methods, much remains to be done. The data sets are only first-order approximations and the methods bear refinement. Nevertheless, the quantitative loss estimates issued in real time after damaging earthquakes are generally correct in the sense that they allow distinguishing disastrous from inconsequential events.

  17. Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software (OPAL)

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.

    2011-07-01

This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure was created to provide a framework for optimisation of a Global Earthquake Modelling process through: 1. overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost, and technology); 2. preliminary research, acquisition, and familiarisation for available ELE software packages; 3. assessment of these software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4. loss analysis for a deterministic earthquake (Mw = 7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment, Crowley et al., 2006), a capacity spectrum based method HAZUS (HAZards United States, FEMA, USA, 2003) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach, Lindholm et al., 2007) software which was adapted for use in order to compare the different processes needed for the production of damage, economic, and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied worldwide, given exposure data.

  18. Quantitative assessment of earthquake damages: approximate economic loss

    NASA Astrophysics Data System (ADS)

    Badal, J.; Vazquez-Prada, M.; Gonzalez, A.; Samardzhieva, E.

    2003-04-01

Prognostic estimates of the approximate direct economic cost of earthquake damage are made following a suitable methodology of wide-ranging application. For an advance evaluation of the economic cost derived from the damage, we take into account the local social wealth as a function of the gross domestic product of the country. We use a GIS-based tool, taking advantage of the possibilities of such a system for the treatment of spatially distributed data. The work is performed on the basis of the relationship between macroseismic intensity and earthquake economic loss as a percentage of the wealth. We have implemented interactive software that displays the information on screen and permits rapid visual evaluation of the performance of our method. Such an approach to earthquake casualties and damages is carried out for sites near important urban concentrations located in a seismically active zone of Spain, thus supporting decision-making in contemporary earthquake engineering, emergency preparedness planning and seismic risk prevention.
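The core calculation described above, scaling loss as an intensity-dependent fraction of local wealth derived from GDP, can be sketched as follows. The loss ratios, the wealth factor, and the function name are illustrative assumptions, not the authors' calibrated values:

```python
# Sketch: approximate direct economic loss from macroseismic intensity,
# with local wealth proxied by GDP. All numeric values are illustrative.

ILLUSTRATIVE_LOSS_RATIO = {  # fraction of local wealth lost, by intensity
    6: 0.001,
    7: 0.01,
    8: 0.05,
    9: 0.15,
    10: 0.30,
}

def economic_loss(intensity, population, gdp_per_capita, wealth_factor=3.0):
    """Approximate direct economic loss for one settlement.

    wealth_factor converts annual GDP into a rough stock-of-wealth proxy;
    it and the loss-ratio table are assumptions for illustration only.
    """
    local_wealth = population * gdp_per_capita * wealth_factor
    ratio = ILLUSTRATIVE_LOSS_RATIO.get(int(round(intensity)), 0.0)
    return local_wealth * ratio

# Example: a settlement of 100,000 people shaken at intensity VIII
loss = economic_loss(8, 100_000, 25_000)
```

In a GIS setting, this function would be evaluated per settlement polygon against an interpolated intensity field.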

  19. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J., Jr.; Petersen, M.D.

    2004-01-01

The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
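The first step of this approach, fitting a gamma distribution to loss data for one ZIP code, can be sketched with a method-of-moments fit (a simple stand-in for the study's fitting procedure; the data are synthetic and the regression of gamma parameters on ground motion is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for insured-loss fractions in one ZIP code.
losses = rng.gamma(shape=1.5, scale=0.02, size=5000)

# Method-of-moments fit of a two-parameter gamma distribution:
# shape k = mean^2 / variance, scale theta = variance / mean.
mean, var = losses.mean(), losses.var()
k = mean**2 / var
theta = var / mean

# Conditional probability that the loss fraction exceeds a threshold,
# estimated empirically here; with the fitted (k, theta) it could instead
# be evaluated analytically via the gamma survival function.
p_exceed = float((losses > 0.05).mean())
```

Repeating the fit across ZIP codes with differing shaking levels would yield the (ground motion, gamma parameter) pairs on which the paper's regressions are built.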

  20. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe windstorms.
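A loss model of the kind described, economic loss as a function of ground motion and socioeconomic covariates, is often posed as a log-linear regression. The sketch below calibrates such a form by ordinary least squares on synthetic data; the functional form, covariates, and coefficients are illustrative assumptions, not the authors' calibrated model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic calibration data: log-loss as a linear function of
# log-PGA and log GDP per capita (illustrative covariates).
n = 200
log_pga = rng.uniform(np.log(0.05), np.log(1.0), n)
log_gdp = rng.uniform(np.log(1e3), np.log(5e4), n)
true_beta = np.array([2.0, 1.5, 0.8])  # intercept, PGA slope, GDP slope
X = np.column_stack([np.ones(n), log_pga, log_gdp])
log_loss = X @ true_beta + 0.3 * rng.standard_normal(n)

# Ordinary least squares calibration of the loss model.
beta_hat, *_ = np.linalg.lstsq(X, log_loss, rcond=None)

def predict_loss(pga, gdp_per_capita):
    """Predicted economic loss (arbitrary units) for one location."""
    x = np.array([1.0, np.log(pga), np.log(gdp_per_capita)])
    return float(np.exp(x @ beta_hat))
```

Calibrating on a global catalog of (shaking, socioeconomic, loss) triples, rather than synthetic draws, would follow the same pattern.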

  1. Real-Time Loss Estimation Using HAZUS and ShakeMap Data

    NASA Astrophysics Data System (ADS)

    Kircher, C. A.

    2003-12-01

This paper describes real-time damage and loss estimation using the HAZUS earthquake loss estimation technology and ShakeMap data, and provides an example comparison of predicted and observed losses for the 1994 Northridge earthquake. HAZUS [NIBS, 1999, Kircher et al., 1997a/1997b, Whitman et al., 1997] is the standardized earthquake loss estimation methodology developed by the National Institute of Building Sciences (NIBS) for the United States Federal Emergency Management Agency (FEMA). HAZUS was originally developed to assist emergency response planners to "provide local, state and regional officials with the tools necessary to plan and stimulate efforts to reduce risk from earthquakes and to prepare for emergency response and recovery from an earthquake." HAZUS can also be used to make regional estimates of damage and loss following an earthquake using ground-motion (ShakeMap) data provided by the United States Geological Survey (USGS) as part of TriNet in Southern California [Wald et al., 1999] or by other regional strong-motion instrumentation networks.

  2. Development of fragility functions to estimate homelessness after an earthquake

    NASA Astrophysics Data System (ADS)

    Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

used to estimate homelessness as a function of information that is readily available immediately after an earthquake. These fragility functions could be used by relief agencies and governments to provide an initial assessment of the need for allocation of emergency shelter immediately after an earthquake.
References:
Daniell, J. E. (2014). The development of socio-economic fragility functions for use in worldwide rapid earthquake loss estimation procedures. Ph.D. Thesis (in publishing), Karlsruhe, Germany.
Daniell, J. E., Khazai, B., Wenzel, F., & Vervaeck, A. (2011). The CATDAT damaging earthquakes database. Natural Hazards and Earth System Science, 11(8), 2235-2251. doi:10.5194/nhess-11-2235-2011
Daniell, J. E., Wenzel, F., & Vervaeck, A. (2012). The normalisation of socio-economic losses from historic worldwide earthquakes from 1900 to 2012. 15th WCEE, Lisbon, Portugal, Paper No. 2027.
Jaiswal, K., & Wald, D. (2010). An empirical model for global earthquake fatality estimation. Earthquake Spectra, 26(4), 1017-1037. doi:10.1193/1.3480331
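Socio-economic fragility functions of the kind described are commonly expressed as lognormal cumulative distribution functions of a shaking measure. The sketch below assumes that form; the median intensity and dispersion are placeholder values, not the fitted parameters of the cited thesis:

```python
import math

def homelessness_fraction(intensity, median=8.5, beta=0.6):
    """Lognormal fragility curve: expected fraction of the exposed
    population made homeless at a given macroseismic intensity.
    median and beta are illustrative, not calibrated, values."""
    if intensity <= 0:
        return 0.0
    z = math.log(intensity / median) / beta
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The predicted homeless fraction rises monotonically with shaking
low, high = homelessness_fraction(6.0), homelessness_fraction(9.0)
```

Multiplying such a fraction by the exposed population per settlement would give the rapid shelter-demand estimate described in the abstract.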

  3. Creating a Global Building Inventory for Earthquake Loss Assessment and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2008-01-01

Earthquakes have claimed approximately 8 million lives over the last 2,000 years (Dunbar, Lockridge and others, 1992) and fatality rates are likely to continue to rise with increased population and urbanization of global settlements, especially in developing countries. More than 75% of earthquake-related human casualties are caused by the collapse of buildings or structures (Coburn and Spence, 2002). It is disheartening to note that large fractions of the world's population still reside in informal, poorly constructed and non-engineered dwellings that are highly susceptible to collapse during earthquakes. Moreover, with increasing urbanization half of the world's population now lives in urban areas (United Nations, 2001), and half of these urban centers are located in earthquake-prone regions (Bilham, 2004). The poor performance of most building stocks during earthquakes remains a primary societal concern. However, despite this dark history and bleaker future trends, there are no comprehensive global building inventories of sufficient quality and coverage to adequately address and characterize future earthquake losses. Such an inventory is vital both for earthquake loss mitigation and for earthquake disaster response purposes. While the latter purpose is the motivation of this work, we hope that the global building inventory database described herein will find widespread use for other mitigation efforts as well. For a real-time earthquake impact alert system, such as U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER), (Wald, Earle and others, 2006), we seek to rapidly evaluate potential casualties associated with earthquake ground shaking for any region of the world. The casualty estimation is based primarily on (1) rapid estimation of the ground shaking hazard, (2) aggregating the population exposure within different building types, and (3) estimating the casualties from the collapse of vulnerable buildings. Thus, the

  4. Earthquakes trigger the loss of groundwater biodiversity.

    PubMed

    Galassi, Diana M P; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-01-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and "ecosystem engineers", we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems. PMID:25182013

  5. Earthquakes trigger the loss of groundwater biodiversity

    NASA Astrophysics Data System (ADS)

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; di Cioccio, Alessia; di Lorenzo, Tiziana; Petitta, Marco; di Carlo, Piero

    2014-09-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and ``ecosystem engineers'', we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  6. Earthquakes trigger the loss of groundwater biodiversity

    PubMed Central

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-01-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and “ecosystem engineers”, we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems. PMID:25182013

  7. Social vulnerability analysis of earthquake risk using HAZUS-MH losses from a M7.8 scenario earthquake on the San Andreas fault

    NASA Astrophysics Data System (ADS)

    Noriega, G. R.; Grant Ludwig, L.

    2010-12-01

Natural hazards research indicates earthquake risk is not equitably distributed. Demographic differences are significant in determining the risks people encounter, whether and how they prepare for disasters, and how they fare when disasters occur. In this study, we analyze the distribution of economic and social losses in all 88 cities of Los Angeles County from the 2008 ShakeOut scenario earthquake. The ShakeOut scenario earthquake is a scientifically plausible M 7.8 scenario earthquake on the San Andreas fault that was developed and applied for regional earthquake preparedness planning and risk mitigation from a compilation of collaborative studies and findings by the 2007 Working Group on California Earthquake Probabilities (WGCEP). The scenario involved 1) developing a realistic scenario earthquake using the best available and most recent earthquake research findings, 2) estimation of physical damage, 3) estimation of the social impact of the earthquake, and 4) identifying changes that will help to prevent a catastrophe due to an earthquake. Estimated losses from this scenario earthquake include 1,800 deaths and $213 billion in economic losses. We use regression analysis to examine the relationship between potential city losses due to the ShakeOut scenario earthquake and the cities' demographic composition. The dependent variables are economic and social losses calculated with the HAZUS-MH methodology for the scenario earthquake. The independent variables (median household income, tenure, and race/ethnicity) have been identified as indicators of social vulnerability to natural disasters (Mileti, 1999; Cutter, 2006; Cutter & Finch, 2008). Preliminary Ordinary Least Squares (OLS) regression analysis of economic losses on race/ethnicity, income, and tenure indicates that cities with higher Hispanic populations are associated with higher economic losses, though this relationship is

  8. The Enormous Challenge faced by China to Reduce Earthquake Losses

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Mooney, W. D.; Wang, B.

    2014-12-01

In the past six years, several large earthquakes in continental China have caused enormous economic losses and casualties. These include the 2008 Mw=7.9 Wenchuan, 2010 Mw=6.9 Yushu, 2013 Mw=6.6 Lushan, and 2013 Mw=5.9 Minxian events. On August 4, 2014, the Mw=6.1 earthquake struck Ludian in Yunnan province. Although it was a moderate-size earthquake, the casualties reached at least 589 people. In fact, more than 50% of Chinese cities, and more than 70% of large to medium-size cities, are located in areas where the seismic intensity may reach Ⅶ or higher. Collapsing buildings are the main cause of Chinese earthquake casualties; the secondary causes are induced geological disasters such as landslides and barrier lakes. Several enormous challenges must be overcome to reduce hazards from earthquakes and secondary disasters. (1) Much of the infrastructure in China does not meet the engineering standard for adequate seismic protection. In particular, some buildings are not strong enough to survive potential strong ground shaking, and some are not set back a safe distance from active faults. It will be very costly to reinforce or rebuild such buildings. (2) There is a lack of rigorous legislation on earthquake disaster protection. (3) Both the government and citizens appear to rely too much on earthquake prediction to avoid earthquake casualties. (4) Geologic conditions are very complicated and need additional study, especially in southwest China; detailed surveys of potential geologic hazards, such as landslides, are still lacking. Although we still cannot predict earthquakes, it is possible to greatly reduce earthquake hazards. For example, some Chinese scientists have begun studies with the aim of identifying active faults under large cities and proposing higher building standards. It will be very difficult to improve the quality and scope of earthquake disaster protection dramatically in

  9. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

The almost-real-time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infra-structures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real-time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships.

  10. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    NASA Astrophysics Data System (ADS)

    Allen, R. M.

    2014-12-01

Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet, the same inter-connected infrastructure also allows us to rapidly detect earthquakes as they begin, and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Angelenos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts but also to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever-more automated response to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US. We will illustrate how smartphones can contribute to the system. Finally, we will review applications of the information to reduce future losses.

  11. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
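Two operations in this abstract reduce to simple array arithmetic: binning population by shaking intensity, and taking the cell-wise maximum over a stack of ShakeMaps to form a "composite ShakeMap". A minimal NumPy sketch with synthetic, co-registered grids (grid sizes and intensity range are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic co-registered grids: MMI shaking and population counts.
mmi = rng.uniform(4.0, 9.5, size=(50, 50))
population = rng.integers(0, 1000, size=(50, 50))

# Population exposed at each discrete intensity level (MMI IV..IX).
levels = np.arange(4, 10)
exposure = {
    int(level): int(population[(mmi >= level) & (mmi < level + 1)].sum())
    for level in levels
}

# "Composite ShakeMap": cell-wise maximum over a stack of maps,
# e.g. 35 years of peak-ground-acceleration grids.
stack = rng.uniform(0.0, 1.0, size=(35, 50, 50))
composite = stack.max(axis=0)
```

Real ShakeMap and population grids would first need to be resampled onto a common georeferenced grid, a step omitted here.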

  12. Estimation of earthquake risk curves of physical building damage

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias; Janouschkowetz, Silke; Fischer, Thomas; Simon, Christian

    2014-05-01

In this study, a new approach to quantify seismic risks is presented. Here, the earthquake risk curves for the number of buildings with a defined physical damage state are estimated for South Africa. Therein, we define the physical damage states according to the current European macroseismic intensity scale (EMS-98). The advantage of this kind of risk curve is that its plausibility can be checked more easily than for other types. The earthquake risk curve for physical building damage can be compared with historical damage and the corresponding empirical return periods. The number of damaged buildings from historical events is generally explored and documented in more detail than the corresponding monetary losses. The latter are also influenced by different economic conditions, such as inflation and price hikes. Furthermore, the monetary risk curve can be derived from the developed risk curve of physical building damage. The earthquake risk curve can also be used for the validation of underlying sub-models such as the hazard and vulnerability modules.
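The comparison the abstract describes, checking a risk curve against historical damage counts and their empirical return periods, can be sketched as an empirical exceedance-frequency calculation. The event list and observation window below are synthetic, not South African data:

```python
# Sketch: empirical risk curve for the number of buildings reaching a
# damage state, from a synthetic catalog of historical events.

observation_years = 100
# Buildings damaged (e.g. EMS-98 grade >= 3) in each historical event.
damaged_per_event = [12, 450, 30, 2100, 75, 8, 300, 60]

def exceedance_frequency(threshold):
    """Annual frequency of events damaging at least `threshold` buildings."""
    n_exceed = sum(1 for d in damaged_per_event if d >= threshold)
    return n_exceed / observation_years

# Empirical return period for events damaging >= 100 buildings:
freq = exceedance_frequency(100)
return_period = 1.0 / freq
```

Evaluating `exceedance_frequency` over a range of thresholds traces out the empirical risk curve against which a modelled curve can be checked for plausibility.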

  13. Application of the loss estimation tool QLARM in Algeria

    NASA Astrophysics Data System (ADS)

    Rosset, P.; Trendafiloski, G.; Yelles, K.; Semmane, F.; Wyss, M.

    2009-04-01

During the last six years, WAPMERR has used Quakeloss for real-time loss estimation for more than 440 earthquakes worldwide. Loss reports, posted with an average delay of 30 minutes, include a map showing the average degree of damage in settlements near the epicenter, the total number of fatalities, the total number of injured, and a detailed list of casualties and damage rates in these settlements. After the M6.7 Boumerdes earthquake in 2003, we reported 1690-3660 fatalities. The official death toll was around 2270. Since the El Asnam earthquake, seismic events in Algeria have killed about 6,000 people, injured more than 20,000 and left more than 300,000 homeless. On average, one earthquake with the potential to kill people (M>5.4) happens every three years in Algeria. In the framework of a collaborative project between WAPMERR and CRAAG, we propose to calibrate our new loss estimation tool QLARM (qlarm.ethz.ch) and estimate human losses for future likely earthquakes in Algeria. The parameters needed for this calculation are the following: (1) ground motion relations and soil amplification factors; (2) the distribution of building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98), as given in the PAGER database; and (3) population by settlement. Considering the resolution of the available data, we construct (1) point city models for cases where only summary data for the city are available and (2) discrete city models where data regarding city districts are available. Damage and losses are calculated using: (a) vulnerability models pertinent to EMS-98 vulnerability classes, previously validated against the existing ones in Algeria (Tipaza and Chlef); (b) building collapse models pertinent to Algeria as given in the World Housing Encyclopedia; and (c) casualty matrices pertinent to EMS-98 vulnerability classes assembled from HAZUS casualty rates.
As a first trial, we simulated the 2003 Boumerdes earthquake to check the validity of the proposed

  14. Modelling the Epistemic Uncertainty in the Vulnerability Assessment Component of an Earthquake Loss Model

    NASA Astrophysics Data System (ADS)

    Crowley, H.; Modica, A.

    2009-04-01

    Loss estimates have been shown in various studies to be highly sensitive to the methodology employed, the seismicity and ground-motion models, the vulnerability functions, and assumed replacement costs (e.g. Crowley et al., 2005; Molina and Lindholm, 2005; Grossi, 2000). It is clear that future loss models should explicitly account for these epistemic uncertainties. Indeed, a cause of frequent concern in the insurance and reinsurance industries is precisely the fact that for certain regions and perils, available commercial catastrophe models often yield significantly different loss estimates. Of equal relevance to many users is the fact that updates of the models sometimes lead to very significant changes in the losses compared to the previous version of the software. In order to model the epistemic uncertainties that are inherent in loss models, a number of different approaches for the hazard, vulnerability, exposure and loss components should be clearly and transparently applied, with the shortcomings and benefits of each method clearly exposed by the developers, such that the end-users can begin to compare the results and the uncertainty in these results from different models. This paper looks at an application of a logic-tree type methodology to model the epistemic uncertainty in the vulnerability component of a loss model for Tunisia. Unlike other countries which have been subjected to damaging earthquakes, there has not been a significant effort to undertake vulnerability studies for the building stock in Tunisia. Hence, when presented with the need to produce a loss model for a country like Tunisia, a number of different approaches can and should be applied to model the vulnerability. These include empirical procedures which utilise observed damage data, and mechanics-based methods where both the structural characteristics and response of the buildings are analytically modelled. Some preliminary applications of the methodology are presented and discussed

  15. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  16. Proceedings: Earthquake Ground-Motion Estimation in Eastern North America

    SciTech Connect

    1988-08-01

    Experts in seismology and earthquake engineering convened to evaluate state-of-the-art methods for estimating ground motion from earthquakes in eastern North America. Workshop results presented here will help focus research priorities in ground-motion studies to provide more-realistic design standards for critical facilities.

  17. Precise estimation of repeating earthquake moment: Example from parkfield, california

    USGS Publications Warehouse

    Rubinstein, J.L.; Ellsworth, W.L.

    2010-01-01

    We offer a new method for estimating the relative size of repeating earthquakes using the singular value decomposition (SVD). This method takes advantage of the highly coherent waveforms of repeating earthquakes and arrives at far more precise and accurate descriptions of earthquake size than standard catalog techniques allow. We demonstrate that uncertainty in relative moment estimates is reduced from ±75% for standard coda-duration techniques employed by the network to an uncertainty of ±6.6% when the SVD method is used. This implies that a single-station estimate of moment using the SVD method has far less uncertainty than the whole-network estimates of moment based on coda duration. The SVD method offers a significant improvement in our ability to describe the size of repeating earthquakes and thus an opportunity to better understand how they accommodate slip as a function of time.
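The SVD-based relative sizing described above can be sketched with synthetic data. This is a minimal illustration, not the authors' implementation: the function name, the toy wavelet, and the noise level are all assumptions, but the core idea (projecting aligned, highly coherent waveforms onto the first singular vector) follows the abstract.

```python
import numpy as np

def relative_moments(waveforms):
    """Estimate relative sizes of repeating earthquakes via the SVD.

    waveforms: (n_events, n_samples) array of aligned, highly coherent
    records of the same repeating source at one station. Returns
    amplitudes proportional to relative moment, normalized so the
    first event has relative size 1.
    """
    U, s, Vt = np.linalg.svd(waveforms, full_matrices=False)
    # For nearly identical waveforms, the first singular vector captures
    # the common pulse shape; each event's projection onto it scales
    # with that event's size.
    amps = U[:, 0] * s[0]
    return amps / amps[0]  # normalizing also removes the SVD sign ambiguity

# Toy demo: three "events" sharing one wavelet, scaled 1x, 2x, 0.5x
t = np.linspace(0, 1, 200)
wavelet = np.sin(2 * np.pi * 5 * t) * np.exp(-5 * t)
rng = np.random.default_rng(0)
X = np.vstack([c * wavelet for c in (1.0, 2.0, 0.5)])
X += 0.01 * rng.standard_normal(X.shape)  # small noise
print(relative_moments(X))  # ratios close to 1, 2, 0.5
```

With clean, well-aligned stacks the recovered ratios track the true scale factors far more tightly than a duration-based proxy would.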

  18. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  19. Seismic Risk Assessment and Loss Estimation for Tbilisi City

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Alania, Victor; Varazanashvili, Otar; Gugeshashvili, Tengiz; Arabidze, Vakhtang; Arevadze, Nika; Tsereteli, Emili; Gaphrindashvili, Giorgi; Gventcadze, Alexander; Goguadze, Nino; Vephkhvadze, Sophio

    2013-04-01

    The proper assessment of seismic risk is of crucial importance for the protection of society and the sustainable economic development of cities, as it is an essential part of seismic hazard reduction. Estimation of seismic risk and losses is a complicated task. There is always a knowledge deficit concerning the real seismic hazard, local site effects, the inventory of elements at risk, and infrastructure vulnerability, especially in developing countries. Lately, great efforts were made in the frame of the EMME (Earthquake Model for Middle East Region) project, whose work packages WP1, WP2, WP3 and WP4 addressed gaps related to seismic hazard assessment and vulnerability analysis. Finally, in the frame of work package WP5, "City Scenario", additional work in this direction and a detailed investigation of local site conditions and the active fault (3D) beneath Tbilisi were carried out. For estimating economic losses, an algorithm was prepared taking into account the obtained inventory. The long-term usage of buildings is very complex. It relates to the reliability and durability of buildings, which are captured by the concept of depreciation. Depreciation of an entire building is calculated by summing the products of individual construction units' depreciation rates and the corresponding value of these units within the building. This method of calculation is based on the assumption that depreciation is proportional to the building's (construction's) useful life. We used this methodology to create a matrix which provides a way to evaluate the depreciation rates of buildings of different types and construction periods and to determine their corresponding value. Finally, the loss resulting from shaking with 10%, 5% and 2% exceedance probability in 50 years was estimated. The loss resulting from a scenario earthquake (an earthquake with the maximum possible magnitude) was also estimated.
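The value-weighted depreciation sum described in the abstract can be sketched in a few lines. The unit names, value shares, and rates below are entirely hypothetical illustrations, not values from the Tbilisi study.

```python
# Hypothetical breakdown of a building into construction units:
# unit -> (share of total building value, annual depreciation rate)
units = {
    "foundation": (0.20, 0.005),
    "walls":      (0.35, 0.008),
    "roof":       (0.15, 0.020),
    "finishes":   (0.20, 0.030),
    "services":   (0.10, 0.040),
}

def building_depreciation(units, age_years):
    """Whole-building depreciation as the value-weighted sum of the
    units' depreciation, assuming depreciation proportional to age."""
    return sum(share * rate * age_years for share, rate in units.values())

print(building_depreciation(units, age_years=40))
```

Repeating this for each building type and construction period yields the depreciation matrix the abstract refers to.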

  20. Earthquake Loss Assessment for Post-2000 Buildings in Istanbul

    NASA Astrophysics Data System (ADS)

    Hancilar, Ufuk; Cakti, Eser; Sesetyan, Karin

    2016-04-01

    Current building inventory of Istanbul city, which was compiled by street surveys in 2008, consists of more than 1.2 million buildings. The inventory provides information on lateral-load carrying system, number of floors and construction year, where almost 200,000 buildings are reinforced concrete frame type structures built after 2000. These buildings are assumed to be designed based on the provisions of Turkish Earthquake Resistant Design Code (1998) and are tagged as high-code buildings. However, there are no empirical or analytical fragility functions associated with these types of buildings. In this study we perform a damage and economic loss assessment exercise focusing on the post-2000 building stock of Istanbul. Three M7.4 scenario earthquakes near the city represent the input ground motion. As for the fragility functions, those provided by Hancilar and Cakti (2015) for code complying reinforced concrete frames are used. The results are compared with the number of damaged buildings given in the loss assessment studies available in the literature wherein expert judgment based fragilities for post-2000 buildings were used.

  1. Building losses assessment for Lushan earthquake utilization multisource remote sensing data and GIS

    NASA Astrophysics Data System (ADS)

    Nie, Juan; Yang, Siquan; Fan, Yida; Wen, Qi; Xu, Feng; Li, Lingling

    2015-12-01

    On 20 April 2013, a catastrophic earthquake of magnitude 7.0 struck Lushan County, northwestern Sichuan Province, China. This event was named the Lushan earthquake in China. The Lushan earthquake damaged many buildings, and the extent of building loss is one basis for emergency relief and reconstruction. Thus, the building losses of the Lushan earthquake must be assessed. Remote sensing data and geographic information systems (GIS) can be employed to assess such losses. The building loss assessment results for the Lushan earthquake disaster, obtained using multisource remote sensing data and GIS, are reported in this paper. The assessment results indicated that 3.2% of buildings in the affected areas completely collapsed, while 12% and 12.5% of buildings were heavily and slightly damaged, respectively. The completely collapsed, heavily damaged, and slightly damaged buildings were mainly located in Danling County, Hongya County, Lushan County, Mingshan County, Qionglai County, Tianquan County, and Yingjing County.

  2. Estimates of radiated energy from global shallow subduction zone earthquakes

    NASA Astrophysics Data System (ADS)

    Bilek, S. L.; Lay, T.; Ruff, L.

    2002-12-01

    Previous studies used seismic energy to moment ratios for datasets of large earthquakes as a useful discriminant for tsunami earthquakes. We extend this idea of a "slowness" discriminant to a large dataset of subduction zone underthrusting earthquakes. We determined estimates of energy release in these shallow earthquakes using a large dataset of source time functions. This dataset contains source time functions for 418 shallow (< 70 km depth) earthquakes ranging from Mw 5.5 - 8.0 from 14 circum-Pacific subduction zones. Also included are tsunami earthquakes for which source time functions are available. We calculate energy using two methods: substitution of a simplified triangle, and integration of the original source time function. In the first method, we use a triangle substitution of peak moment and duration to find a minimum estimate of energy. The other method incorporates more of the source time function information and can be influenced by source time function complexity. We examine patterns in source time function complexity with respect to the energy estimates. For comparison with other earthquake parameters, it is useful to remove the effect of seismic moment on the energy estimates. We use the seismic energy to moment ratio (E/Mo) to highlight variations with depth, moment, and subduction zone. There is significant scatter in this ratio using both methods of energy calculation. We observe a slight increase in E/Mo with increasing Mw. There is not much variation in E/Mo with depth in the entire dataset. However, a slight increase in E/Mo with depth is apparent in a few subduction zones such as Alaska, Central America, and Peru. An average E/Mo of 5×10^-6 roughly characterizes this shallow earthquake dataset, although with a factor of 10 scatter. This value is within about a factor of 2 of E/Mo ratios determined by Choy and Boatwright (1995). Tsunami earthquakes suggest an average E/Mo of 2×10^-7, significantly lower than the average for the shallow
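A minimal sketch of the "slowness" discriminant, using only the two average E/Mo values quoted in the abstract. The log-midpoint threshold and the example event are illustrative assumptions, not part of the study.

```python
import math

# Averages quoted above: ordinary shallow events cluster near
# E/Mo ~ 5e-6, tsunami earthquakes near ~ 2e-7 (both with large scatter).
REGULAR_E_MO = 5e-6
TSUNAMI_E_MO = 2e-7

# A simple threshold at the logarithmic midpoint of the two clusters.
THRESHOLD = 10 ** ((math.log10(REGULAR_E_MO) + math.log10(TSUNAMI_E_MO)) / 2)

def is_slow(energy, moment):
    """Flag an event as anomalously slow (tsunami-earthquake-like)."""
    return energy / moment < THRESHOLD

# Hypothetical event: E = 4e14 J, Mo = 1e21 N·m, so E/Mo = 4e-7
print(is_slow(energy=4e14, moment=1e21))
```

Given the quoted factor-of-10 scatter, any such single threshold would misclassify some events; the point is only to show how the ratio acts as a discriminant.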

  3. Application of linear statistical models of earthquake magnitude versus fault length in estimating maximum expectable earthquakes

    USGS Publications Warehouse

    Mark, Robert K.

    1977-01-01

    Correlation or linear regression estimates of earthquake magnitude from data on historical magnitude and length of surface rupture should be based upon the correct regression. For example, the regression of magnitude on the logarithm of the length of surface rupture L can be used to estimate magnitude, but the regression of log L on magnitude cannot. Regression estimates are most probable values, and estimates of maximum values require consideration of one-sided confidence limits.
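The asymmetry noted above (regress M on log L to predict M, never invert the log-L-on-M fit) can be demonstrated with synthetic data. The relation and scatter below are invented for illustration and are not the paper's dataset.

```python
import numpy as np

# Synthetic magnitude vs. log rupture-length data with scatter.
rng = np.random.default_rng(1)
logL = rng.uniform(0.5, 2.5, 500)                 # log10 rupture length (km)
M = 5.0 + 1.2 * logL + rng.normal(0, 0.3, 500)    # hypothetical relation

# Correct for predicting M: regress M on log L.
b1, a1 = np.polyfit(logL, M, 1)

# Wrong for predicting M: regress log L on M, then invert the slope.
b2, a2 = np.polyfit(M, logL, 1)
inv_slope = 1.0 / b2

# Regression attenuation makes the inverted slope systematically steeper,
# so it does not recover the M-on-logL fit.
print(round(b1, 2), round(inv_slope, 2))
```

The direct fit recovers a slope near the true 1.2, while the inverted fit overshoots; with real scatter the two regressions always disagree, which is exactly why the choice of regression matters.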

  4. A Multidisciplinary Approach for Estimation of Seismic Losses: A Case Study in Turkey

    NASA Astrophysics Data System (ADS)

    Askan, A.; Erberik, M.; Un, E.

    2012-12-01

    Estimation of seismic losses, including physical, economic and social losses as well as casualties, concerns a wide range of authorities, from geophysical and earthquake engineers and physical and economic planners to insurance companies. Due to the inherent uncertainties involved in each component, a probabilistic framework is required to estimate seismic losses. This study aims to propose an integrated method for predicting the potential seismic loss for a selected urban region. The main components of the proposed loss model are the seismic hazard estimation tool, building vulnerability functions, and human and economic losses as functions of the damage states of buildings. The input data for risk calculations involve regional seismicity and building fragility information. The casualty model for a given damage level considers the occupancy type, population of the building, occupancy at the time of earthquake occurrence, number of trapped occupants in the collapse, injury distribution at collapse, and mortality post collapse. The economic loss module involves direct economic loss to buildings in terms of replacement, structural repair and non-structural repair costs, and contents losses. Finally, the proposed loss model combines the input components within a conditional probability approach. The results are expressed in terms of expected loss. We calibrate the method with loss data from the 12 November 1999 Düzce earthquake and then predict losses for another city in Turkey (Bursa) with high seismic hazard.
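The conditional-probability chain of the casualty model above can be sketched as a product of factors. Every number and the occupancy assumption below is hypothetical; the sketch only shows how the conditioning composes.

```python
def expected_deaths(population, p_collapse, p_trapped, p_mortality,
                    occupancy=0.8):
    """Expected fatalities as a chain of conditional probabilities:
    exposed occupants * P(collapse) * P(trapped | collapse)
    * P(death | trapped). All inputs are illustrative."""
    return population * occupancy * p_collapse * p_trapped * p_mortality

# Hypothetical district of 10,000 residents at the time of shaking.
print(expected_deaths(10_000, p_collapse=0.05, p_trapped=0.3,
                      p_mortality=0.4))
```

In a full model each factor would itself be a distribution conditioned on building type, damage state, and time of day, and the expectation would be summed over those states.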

  5. Rapid Ice Mass Loss: Does It Have an Influence on Earthquake Occurrence in Southern Alaska?

    NASA Technical Reports Server (NTRS)

    Sauber, Jeanne M.

    2008-01-01

    The glaciers of southern Alaska are extensive, and many of them have undergone gigatons of ice wastage on time scales on the order of the seismic cycle. Since the ice loss occurs directly above a shallow main thrust zone associated with subduction of the Pacific-Yakutat plate beneath continental Alaska, the region between the Malaspina and Bering Glaciers is an excellent test site for evaluating the importance of recent ice wastage on earthquake faulting potential. We demonstrate the influence of cumulative glacial mass loss following the 1899 Yakataga earthquake (M=8.1) by using a two-dimensional finite element model with a simple representation of ice fluctuations to calculate the incremental stresses and change in the fault stability margin (FSM) along the main thrust zone (MTZ) and on the surface. Along the MTZ, our results indicate a decrease in FSM between 1899 and the 1979 St. Elias earthquake (M=7.4) of 0.2 - 1.2 MPa over an 80 km region between the coast and the 1979 aftershock zone; at the surface, the estimated FSM change was larger but more localized to the lower reaches of glacial ablation zones. The ice-induced stresses were large enough, in theory, to promote the occurrence of shallow thrust earthquakes. To empirically test the influence of short-term ice fluctuations on fault stability, we compared the seismic rate from a reference background time period (1988-1992) against other time periods (1993-2006) with variable ice or tectonic change characteristics. We found that the frequency of small tectonic events in the Icy Bay region increased in 2002-2006 relative to the background seismic rate. We hypothesize that this was due to a significant increase in the rate of ice wastage in 2002-2006 rather than to the M=7.9 2002 Denali earthquake, located more than 100 km away.

  6. Probabilistic assessment of decoupling loss-of-coolant accident and earthquake in nuclear power plant design

    SciTech Connect

    Lu, S.C.; Harris, D.O.

    1981-01-01

    This paper describes a research project conducted at Lawrence Livermore National Laboratory to establish a technical basis for reassessing the requirement of combining large loss-of-coolant-accident (LOCA) and earthquake loads in nuclear power plant design. A large LOCA is defined herein as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). A systematic probability approach has been employed to estimate the probability of a large LOCA directly and indirectly induced by earthquakes. The probability of a LOCA directly induced by earthquakes was assessed by a numerical simulation of pipe rupture of a reactor coolant system. The simulation employed a deterministic fracture mechanics model which dictates the fatigue growth of pre-existing cracks in the pipe. The simulation accounts for the stochastic nature of input elements such as the initial crack size distribution, the crack occurrence rate, crack and leak detection probabilities as functions of crack size, plant transient occurrence rates, the seismic hazard, stress histories, and crack growth model parameters. Effects on final results due to variation and uncertainty of input elements were assessed by a limited sensitivity study. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete break.

  7. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. DOI: 10.1193/1.3480331
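The two-parameter lognormal fatality-rate model described above can be sketched as follows. The parameter values and the exposure table are invented for illustration; they are not the published PAGER coefficients for any country.

```python
import math

def fatality_rate(intensity, theta, beta):
    """Lognormal CDF fatality rate at a given shaking intensity.

    theta (median intensity) and beta (spread) are the two
    country/region-specific parameters; the values used below
    are purely illustrative."""
    z = math.log(intensity / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_fatalities(exposure, theta, beta):
    """exposure: dict mapping shaking intensity level -> population
    exposed. Sums rate * exposed over all intensity levels."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure.items())

# Hypothetical exposure by intensity level for one event.
exposure = {6: 500_000, 7: 120_000, 8: 30_000, 9: 4_000}
print(expected_fatalities(exposure, theta=12.0, beta=0.25))
```

Calibration then amounts to choosing theta and beta per country/region so that hindcast totals match the 1973-2007 recorded deaths.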

  8. Estimating pore fluid pressures during the Youngstown, Ohio earthquakes

    NASA Astrophysics Data System (ADS)

    Hsieh, P. A.

    2014-12-01

    Several months after fluid injection began in December 2010 at the Northstar 1 well in Youngstown, Ohio, low-magnitude earthquakes were detected in the Youngstown area, where no prior earthquakes had been detected. Concerns that the injection might have triggered the earthquakes led to the shutdown of the well in December 2011. Earthquake relocation analysis by Kim (2013, J. Geophy. Res., v 118, p. 3506-3518) showed that, from March 2011 to January 2012, 12 earthquakes with moment magnitudes of 1.8 to 3.9 occurred at depths of 3.5 to 4 km in the Precambrian basement along a previously unmapped vertical fault. The 2.8 km deep Northstar 1 well, which penetrated the top 60 m of the basement, appeared to have been drilled into the same fault. The earthquakes occurred at lateral distances of 0 to 1 km from the well. The present study aims to estimate the fluid pressure increase due to injection. The groundwater flow model MODFLOW is used to simulate fluid pressure propagation from the well injection interval into the basement fault and two permeable sandstone layers above the basement. The basement rock away from the fault is assumed impermeable. Reservoir properties (permeability and compressibility) of the fault and sandstone layers are estimated by calibrating the model to match injection history and wellhead pressure recorded daily during the operational period. Although the available data are not sufficient to uniquely determine reservoir properties, it is possible to determine reasonable ranges. Simulated fluid pressure increases at the locations and times of the earthquakes range from less than 0.01 MPa to about 1 MPa. Pressure measurements in the well after shut-in might enhance the estimation of reservoir properties. Such data could also improve the estimation of pore fluid pressure increase due to injection.

  9. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, it is difficult to accurately estimate earthquake magnitude in the early nucleation stage of an earthquake because only a few stations have been triggered and the recorded seismic waveforms are short. One feasible way to measure the size of earthquakes is to extract amplitude parameters from the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the whole rupture of the causative fault. Instead of adopting amplitude contents in a fixed-length time window, which may underestimate the magnitude of large events, we propose a fast, robust and unsaturated approach to estimate earthquake magnitudes. In this new method, the EEW system can initially give a lower-bound magnitude within a time window of a few seconds and then update the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for measuring amplitudes: a pure P-wave time window (PTW), and a whole-wave time window after the P-wave arrival (WTW). The peak displacement amplitude in the vertical component was measured for 1- to 10-s PTW and WTW lengths, respectively. Linear regression analyses were implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude using earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation, and that large uncertainties exist in the 1-s time window. Therefore, for magnitude estimation we suggest that the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTWs.
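The regression step described above can be sketched on synthetic data: relate magnitude to the logarithm of peak displacement (Pd) and hypocentral distance. The attenuation form, coefficients, and noise level below are assumptions made for illustration, not the study's fitted values.

```python
import numpy as np

# Synthetic catalog: magnitudes and hypocentral distances.
rng = np.random.default_rng(2)
n = 300
M_true = rng.uniform(5.5, 7.5, n)
R = rng.uniform(20, 150, n)  # hypocentral distance, km

# Assumed attenuation form for peak displacement in a given window:
# log10(Pd) = p*M + q*log10(R) + r + noise
log_pd = 0.9 * M_true - 1.5 * np.log10(R) - 4.0 + rng.normal(0, 0.2, n)

# Invert empirically for magnitude: M = a*log10(Pd) + b*log10(R) + c
A = np.column_stack([log_pd, np.log10(R), np.ones(n)])
coef, *_ = np.linalg.lstsq(A, M_true, rcond=None)
M_est = A @ coef

# The standard deviation of the residuals is the quantity the study
# uses to compare PTW against WTW window choices.
print("residual std:", float(np.std(M_true - M_est)))
```

Refitting this regression for each window length (1- to 10-s PTW or WTW) and comparing the residual standard deviations reproduces the kind of comparison the abstract describes.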

  10. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
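A back-of-envelope companion to the Gutenberg-Richter clustering figures above: under a G-R magnitude distribution, the relative rate of M >= 7 events versus M >= 4.8 events is 10^(-b*dM). The b-value of 1 and the generic few-percent foreshock fraction mentioned in the comment are illustrative assumptions, not the paper's calibration.

```python
# Relative rate of M >= 7 versus M >= 4.8 under Gutenberg-Richter
# with an assumed b-value of 1.
b = 1.0
p_ratio = 10 ** (-b * (7.0 - 4.8))
print(p_ratio)  # about 0.0063

# Multiplied by a generic foreshock fraction of a few percent, this
# lands at a few times 1e-4, the same order as the Gutenberg-Richter
# clustering probability quoted in the abstract.
print(0.05 * p_ratio)
```

Swapping the G-R distribution for a characteristic-earthquake magnitude-frequency distribution changes this ratio by over an order of magnitude, which is the first-order effect the abstract emphasizes.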

  11. An Account of Preliminary Landslide Damage and Losses Resulting from the February 28, 2001, Nisqually, Washington, Earthquake

    USGS Publications Warehouse

    Highland, Lynn M.

    2003-01-01

    The February 28, 2001, Nisqually, Washington, earthquake (Mw = 6.8) damaged an area of the northwestern United States that previously experienced two major historical earthquakes, in 1949 and in 1965. Preliminary estimates of direct monetary losses from damage due to earthquake-induced landslides are approximately $34.3 million. However, this figure does not include costs from damage to the elevated portion of the Alaskan Way Viaduct, a major highway through downtown Seattle, Washington, that will be repaired or rebuilt, depending on the future decision of local and state authorities. There is much debate as to the cause of the damage to this viaduct, with evaluations of cause ranging from earthquake shaking and liquefaction to lateral spreading to a combination of these effects. If the viaduct is included in the costs, the losses increase to $500+ million (if it is repaired) or to more than $1+ billion (if it is replaced). A preliminary estimate of losses due to all causes of earthquake damage is approximately $2 billion, which includes temporary repairs to the Alaskan Way Viaduct. These preliminary dollar figures will no doubt increase when plans and decisions regarding the Viaduct are completed.

  12. Monitoring road losses for Lushan 7.0 earthquake disaster utilization multisource remote sensing images

    NASA Astrophysics Data System (ADS)

    Huang, He; Yang, Siquan; Li, Suju; He, Haixia; Liu, Ming; Xu, Feng; Lin, Yueguan

    2015-12-01

    Earthquakes are among the major natural disasters in the world. At 8:02 on 20 April 2013, a catastrophic earthquake of surface wave magnitude Ms 7.0 occurred in Sichuan Province, China. The epicenter of this earthquake was located in the administrative region of Lushan County, and the event was named the Lushan earthquake. The Lushan earthquake caused heavy casualties and property losses in Sichuan Province. After the earthquake, various emergency relief supplies had to be transported to the affected areas. The transportation network is the basis for transporting and allocating emergency relief supplies, so the road losses of the Lushan earthquake had to be monitored. The road loss monitoring results for the Lushan earthquake disaster, obtained using multisource remote sensing images, are reported in this paper. The results indicated that 166 m of national roads, 3707 m of provincial roads, 3396 m of county roads, 7254 m of township roads, and 3943 m of village roads were damaged during the Lushan earthquake disaster. The damaged roads were mainly located in Lushan County, Baoxing County, Tianquan County, Yucheng County, Mingshan County, and Qionglai County. The results can also be used as a decision-making information source by the disaster management government in China.

  13. Development of a Global Slope Dataset for Estimation of Landslide Occurrence Resulting from Earthquakes

    USGS Publications Warehouse

    Verdin, Kristine L.; Godt, Jonathan W.; Funk, Christopher C.; Pedreros, Diego; Worstell, Bruce; Verdin, James

    2007-01-01

    Landslides resulting from earthquakes can cause widespread loss of life and damage to critical infrastructure. The U.S. Geological Survey (USGS) has developed an alarm system, PAGER (Prompt Assessment of Global Earthquakes for Response), that aims to provide timely information to emergency relief organizations on the impact of earthquakes. Landslides are responsible for many of the damaging effects following large earthquakes in mountainous regions, and thus data defining the topographic relief and slope are critical to the PAGER system. A new global topographic dataset was developed to aid in rapidly estimating landslide potential following large earthquakes. We used the remotely-sensed elevation data collected as part of the Shuttle Radar Topography Mission (SRTM) to generate a slope dataset with nearly global coverage. Slopes from the SRTM data, computed at 3-arc-second resolution, were summarized at 30-arc-second resolution, along with statistics developed to describe the distribution of slope within each 30-arc-second pixel. Because there are many small areas lacking SRTM data and the northern limit of the SRTM mission was lat 60°N., statistical methods referencing other elevation data were used to fill the voids within the dataset and to extrapolate the data north of 60°N. The dataset will be used in the PAGER system to rapidly assess the susceptibility of areas to landsliding following large earthquakes.

  14. Associations between economic loss, financial strain and the psychological status of Wenchuan earthquake survivors.

    PubMed

    Huang, Yunong; Wong, Hung; Tan, Ngoh Tiong

    2015-10-01

    This study examines the effects of economic loss on the life satisfaction and mental health of Wenchuan earthquake survivors. Economic loss is measured by earthquake impacts on the income and houses of the survivors. The correlation analysis shows that earthquake impact on income is significantly correlated with life satisfaction and depression. The regression analyses indicate that earthquake impact on income is indirectly associated with life satisfaction and depression through its effect on financial strain. The research highlights the importance of coping strategies in maintaining a balance between economic status and living demands for disaster survivors. PMID:25754768

  15. Estimation of the magnitudes and epicenters of Philippine historical earthquakes

    NASA Astrophysics Data System (ADS)

    Bautista, Maria Leonila P.; Oike, Kazuo

    2000-02-01

    The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. Data used were the earthquake reports of 86 recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by Intensities III to IX [A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 15th to 19th centuries. More than 3000 events are catalogued and interpreted, and their intensities determined by considering the possible effects of local site conditions, type of construction, and the number and locations of existing towns to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts or with at least a minimum report of damage are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps, augmented by information on recent seismicity and the location of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few lie along structures that have shown little activity during the instrumented period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of

  16. Global Earthquake Casualties due to Secondary Effects: A Quantitative Analysis for Improving PAGER Losses

    USGS Publications Warehouse

    Wald, David J.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey’s (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant losses due to secondary effects are, under what conditions, and in which regions, and thus which of these effects should receive higher-priority research efforts in order to enhance PAGER’s overall assessment of earthquake losses and alerting for the likelihood of secondary impacts. We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra–Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address the potential for each hazard (Earle et al., Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability.

  17. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    USGS Publications Warehouse

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant losses due to secondary effects are, under what conditions, and in which regions, and thus which of these effects should receive higher-priority research efforts in order to enhance PAGER's overall assessment of earthquake losses and alerting for the likelihood of secondary impacts. We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address the potential for each hazard (Earle et al., Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  18. Rapid estimate of earthquake source duration: application to tsunami warning.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier

    2016-04-01

    We present a method for estimating the source duration of the fault rupture, based on the high-frequency envelope of teleseismic P waves, inspired by the original work of Ni et al. (2005). The main interest of knowing this seismic parameter is to detect abnormally low-velocity ruptures that are characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated by comparison with two other independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process that determines the source time function (Vallée et al., 2011). The estimated source duration is also compared to the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, numerical simulations of tsunami depend heavily on the source estimation: the better the source estimation, the better the tsunami forecast. The source duration is not directly injected into the numerical simulations of tsunami, because the kinematics of the source is presently totally ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake that occurs in the shallower part of the subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will decrease while the slip increases, like a 'compact' source (Okal and Hébert, 2007). Conversely, a rapid 'snappy' earthquake that has poor tsunami excitation power will be characterized by a higher rigidity modulus, and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J

  19. Blood loss estimation in epistaxis scenarios.

    PubMed

    Beer, H L; Duvvi, S; Webb, C J; Tandon, S

    2005-01-01

    Thirty-two members of staff from the Ear, Nose and Throat Department at Warrington General Hospital were asked to estimate blood loss in commonly encountered epistaxis scenarios. Results showed that once the measured volume was above 100 ml, visual estimation became grossly inaccurate. Comparison of medical and non-medical staff showed that underestimation was more marked in the non-medical group. Comparison of doctors versus nurses showed no difference in estimation, and no difference was found between grades of staff. PMID:15807956

  20. Centralized web-based loss estimation tool: INLET for disaster response

    NASA Astrophysics Data System (ADS)

    Huyck, C. K.; Chung, H.-C.; Cho, S.; Mio, M. Z.; Ghosh, S.; Eguchi, R. T.; Mehrotra, S.

    2006-03-01

    In the years following the 1994 Northridge earthquake, many researchers in the earthquake community focused on the development of GIS-based loss estimation tools such as HAZUS. Because these highly customizable programs have many users, divergent results after an event can be problematic. Online IMS (Internet Map Servers) offer a centralized system in which data, model updates and results cascade to all users. INLET (Internet-based Loss Estimation Tool) is the first online real-time loss estimation system available to the emergency management and response community within Southern California. In the event of a significant earthquake, Perl scripts written to respond to USGS ShakeCast notifications call INLET routines that use USGS ShakeMaps to estimate losses within minutes of an event. INLET incorporates extensive publicly available GIS databases and uses damage functions simplified from FEMA's HAZUS software. INLET currently estimates building damage, transportation impacts, and casualties. The online model simulates the effects of earthquakes, in the context of the larger RESCUE project, in order to test the integration of IT in evacuation routing. The simulation tool provides a "testbed" environment for researchers to model the effect that disaster awareness and route familiarity can have on traffic congestion and evacuation time.

  1. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila earthquake highlighted the significance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulates a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in case of an earthquake is the probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast seismic risk in the short term, we translate aftershock probabilities to time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data, which allow spatial forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41,000 times higher than the long-term average, but the absolute value remains a minor 0.04%. The final cost-benefit analysis adds value beyond a pure statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but do in certain districts afterwards. The ability to forecast the short-term seismic risk at any time, and with sufficient data anywhere, is the first step of personal decision-making and raising risk
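
    The cost-benefit rule referenced above (van Stiphout et al., 2010) can be sketched as a simple per-person comparison. Everything numeric below is hypothetical: the background probability, the statistical value of a life, and the evacuation cost are chosen only so that the quoted 41,000-fold gain reproduces the ~0.04% figure.

```python
def should_evacuate(p_death_24h, value_per_life_saved, evac_cost_per_person):
    """Schematic cost-benefit rule: evacuation is justified when the
    expected monetary value of lives saved exceeds the evacuation cost.
    (All figures fed to this function are hypothetical.)"""
    return p_death_24h * value_per_life_saved > evac_cost_per_person

# Hypothetical long-term daily probability of dying in an earthquake:
p_background = 1e-8
# One minute after the mainshock the probability is ~41,000x higher
# (the gain quoted in the abstract), i.e. about 0.04% over 24 hours:
p_mainshock = 41_000 * p_background

justified_before = should_evacuate(p_background, 5e6, 100.0)  # pre-mainshock
justified_after = should_evacuate(p_mainshock, 5e6, 100.0)    # post-mainshock
```

    Consistent with the abstract's conclusion, the rule rejects evacuation at background risk levels but can justify it once the short-term probability is strongly elevated.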

  2. Blood Loss Estimation Using Gauze Visual Analogue

    PubMed Central

    Ali Algadiem, Emran; Aleisa, Abdulmohsen Ali; Alsubaie, Huda Ibrahim; Buhlaiqah, Noora Radhi; Algadeeb, Jihad Bagir; Alsneini, Hussain Ali

    2016-01-01

    Background Estimating intraoperative blood loss can be a difficult task, especially when blood is mostly absorbed by gauze. In this study, we have provided an improved method for estimating blood absorbed by gauze. Objectives To develop a guide to estimate blood absorbed by surgical gauze. Materials and Methods A clinical experiment was conducted using aspirated blood and common surgical gauze to create a realistic amount of absorbed blood in the gauze. Different percentages of staining were photographed to create an analogue for the amount of blood absorbed by the gauze. Results A visual analogue scale was created to aid the estimation of blood absorbed by the gauze. The absorptive capacity of different gauze sizes was determined when the gauze was dripping with blood. The amount of reduction in absorption was also determined when the gauze was wetted with normal saline before use. Conclusions The use of a visual analogue may increase the accuracy of blood loss estimation and decrease the consequences related to over- or underestimation of blood loss. PMID:27626017

  3. The ratio of injured to fatalities in earthquakes, estimated from intensity and building properties

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Trendafiloski, G.

    2009-04-01

    a city with poorly constructed buildings. The overall ratio for Bam was R=0.33 and for three districts it was R=0.2. In the only other city in the epicentral area, Baravat, located within about four kilometers of the epicenter, R=0.55. Our contention that R is a function of I is further supported by analyzing R(I) for earthquakes where R is known for several settlements. The uncertainties in input parameters like earthquake source properties and Fat are moderate; those in Inj are large. Nevertheless, our results are robust because the difference between R in the developed and developing world is enormous and the dependence on I is obvious. We conclude that R in most earthquakes results from a mixture of low values near the epicenter and high values farther away, where intensities decrease to VI. The range between settlements in one single earthquake can be approximately 0.2 < R < 100, due to varying distance and hence varying I. Further, R(developed) = 25 R(developing), approximately. We also simulated several past earthquakes in Algeria, Peru and Iran to compare the values of estimated R(I) resulting from the use of ATC-13 and HAZUS casualty matrices with observations. We evaluated these matrices because they are supposed to apply worldwide and they consider all damage states as possible causes of casualties. Our initial conclusion is that the latter matrices fit the observations better, in particular for the intensity range VII-IX. However, to improve the estimates for all intensity values, we propose that casualty matrices for estimating human losses due to earthquakes should account for differences in I and in building quality in different parts of the world.

  4. Real Time Seismic Loss Estimation in Italy

    NASA Astrophysics Data System (ADS)

    Goretti, A.; Sabetta, F.

    2009-04-01

    For more than 15 years the Seismic Risk Office has been able to perform a real-time evaluation of the potential earthquake loss in any part of Italy. Once the epicentre and the magnitude of the earthquake are made available by the National Institute for Geophysics and Volcanology, the model, based on Italian Geographic Information Systems, evaluates the extent of the damaged area and the consequences for the built environment. In recent years the model has been significantly improved with new methodologies able to condition the uncertainties using observations coming from the field during the first days after the event. However, it is recognized that the main challenges in loss analysis are related to the input data more than to the methodologies. Unlike the urban scenario, where missing data can be collected with sufficient accuracy, country-wide analysis requires the use of existing databases, often collected for purposes other than seismic scenario evaluation, and hence in some ways lacking completeness and homogeneity. Soil properties, the building inventory and the population distribution are the main input data that must be known for any site in the whole Italian territory. To this end the National Census on Population and Dwellings has provided information on the residential building types and the population that lives in those building types. Critical buildings, such as hospitals, fire brigade stations and schools, are not included in the inventory, since the national plan for seismic risk assessment of critical buildings is still under way. The choice of a proper ground motion parameter, its attenuation with distance and the building type fragility are important ingredients of the model as well.
The presentation will focus on the above-mentioned issues, highlighting the different data sets used and their accuracy, and comparing the model, input data and results when geographical areas of different extent are considered: from the urban scenarios

  5. Ground motion estimates for a Cascadia earthquake from liquefaction evidence

    USGS Publications Warehouse

    Dickenson, S.E.; Obermeier, S.F.

    1998-01-01

    Paleoseismic studies conducted in the coastal regions of the Pacific Northwest in the past decade have revealed evidence of crustal downdropping and subsequent tsunami inundation, attributable to a large earthquake along the Cascadia subduction zone which occurred approximately 300 years ago, most likely in 1700 AD. In order to characterize the severity of ground motions from this earthquake, we report on the results of a field search for seismically induced liquefaction features. The search was made chiefly along the coastal portions of several river valleys in Washington, along rivers on the central Oregon coast, and on islands in the Columbia River of Oregon and Washington. In this paper we focus only on the results of the Columbia River investigation. Numerous liquefaction features were found in some regions, but not in others. The regional distribution of liquefaction features is evaluated as a function of the geologic and geotechnical factors at each site in order to estimate the intensity of ground shaking.

  6. Estimating the confidence of earthquake damage scenarios: examples from a logic tree approach

    NASA Astrophysics Data System (ADS)

    Molina, S.; Lindholm, C. D.

    2007-07-01

    Earthquake loss estimation is now becoming an important tool in mitigation planning, where the loss modeling is usually based on a parameterized mathematical representation of the damage problem. In parallel with the development and improvement of such models, the question of sensitivity to parameters that carry uncertainties becomes increasingly important. To this end, we have applied the capacity spectrum method (CSM) as described in FEMA's HAZUS-MH Multi-hazard Loss Estimation Methodology (Earthquake Model, Advanced Engineering Building Module; Federal Emergency Management Agency, United States, 2003), and investigated the effects of selected parameters. The results demonstrate that loss scenarios may easily vary by as much as a factor of two because of simple parameter variations. Of particular importance for the uncertainty is the construction quality of the structure. These results represent a warning against simple acceptance of unbounded damage scenarios and strongly support the development of computational methods in which parameter uncertainties are propagated through the computations to facilitate confidence bounds for the damage scenarios.

  7. Locating earthquakes with surface waves and centroid moment tensor estimation

    NASA Astrophysics Data System (ADS)

    Wei, Shengji; Zhan, Zhongwen; Tan, Ying; Ni, Sidao; Helmberger, Don

    2012-04-01

    Traditionally, P-wave arrival times have been used to locate regional earthquakes. In contrast, the travel times of surface waves depend on source excitation, and the source parameters and depth must be determined independently. Thus surface wave path delays need to be known before such data can be used for location. These delays can be estimated from previous earthquakes using the cut-and-paste technique, ambient seismic noise tomography, and 3D models. Taking the Chino Hills event as an example, we show consistency of path corrections for (>10 s) Love and Rayleigh waves to within about 1 s obtained from these methods. We then use these empirically derived delay maps to determine centroid locations of 138 Southern California moderate-sized (3.5 < Mw < 5.7) earthquakes using surface waves alone. It appears that these methods are capable of locating the main zone of rupture to within a few (~3) km relative to Southern California Seismic Network locations with 5 stations that are well distributed in azimuth. We also address the timing accuracy required to resolve non-double-couple source parameters, which trades off with location, with less than a km of error required for a 10% Compensated Linear Vector Dipole resolution.

  8. Earthquake Loss Assessment for the Evaluation of the Sovereign Risk and Financial Sustainability of Countries and Cities

    NASA Astrophysics Data System (ADS)

    Cardona, O. D.

    2013-05-01

    Recently earthquakes have struck cities in both developing and developed countries, revealing significant knowledge gaps and the need to improve the quality of input data and of the assumptions of the risk models. The earthquake and tsunami in Japan (2011) and the disasters due to earthquakes in Haiti (2010), Chile (2010), New Zealand (2011) and Spain (2011), to mention only some unexpected impacts in different regions, have left several concerns regarding hazard assessment as well as the uncertainties associated with the estimation of future losses. Understanding probable losses and reconstruction costs due to earthquakes creates powerful incentives for countries to develop planning options and tools to cope with sovereign risk, including allocating the sustained budgetary resources necessary to reduce those potential damages and safeguard development. Therefore robust risk models are needed to assess the future economic impacts, the country's fiscal responsibilities and the contingent liabilities for governments, and to formulate, justify and implement risk reduction measures and optimal financial strategies of risk retention and transfer. Special attention should be paid to understanding risk metrics such as the Loss Exceedance Curve (empirical and analytical) and the Expected Annual Loss in the context of conjoint and cascading hazards.
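
    The two risk metrics named above are directly related: the Expected Annual Loss is the area under the loss exceedance curve. A minimal sketch with a hypothetical curve, using trapezoidal integration:

```python
def expected_annual_loss(exceedance_curve):
    """Integrate a loss exceedance curve to obtain the Expected Annual Loss.

    `exceedance_curve` is a list of (loss, annual_exceedance_rate) points
    sorted by increasing loss; the EAL is the area under rate vs. loss,
    computed here with the trapezoidal rule.
    """
    eal = 0.0
    for (l0, r0), (l1, r1) in zip(exceedance_curve, exceedance_curve[1:]):
        eal += 0.5 * (r0 + r1) * (l1 - l0)
    return eal

# Hypothetical curve: loss (million USD) vs. annual exceedance rate.
curve = [(0, 0.20), (10, 0.10), (100, 0.01), (1000, 0.001)]
eal = expected_annual_loss(curve)  # million USD per year
```

    In practice the curve itself comes from the probabilistic risk model (empirical from a catalogue, or analytical), and the EAL is a natural basis for premiums and budget provisioning.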

  9. Soil amplification maps for estimating earthquake ground motions in the Central US

    USGS Publications Warehouse

    Bauer, R.A.; Kiefer, J.; Hester, N.

    2001-01-01

    The State Geologists of the Central United States Earthquake Consortium (CUSEC) are developing maps to assist state and local emergency managers and community officials in evaluating the earthquake hazards for the CUSEC region. The state geological surveys have worked together to produce a series of maps that show seismic shaking potential for eleven 1° × 2° (scale 1:250,000, or 1 in. ≈ 3.9 miles) quadrangles that cover the high-risk area of the New Madrid Seismic Zone in eight states. Shear wave velocity values for the surficial materials were gathered and used to classify the soils according to their potential to amplify earthquake ground motions. Geologic base maps of surficial materials or 3-D material maps, either existing or produced for this project, were used in conjunction with shear wave velocities to classify the soils for the upper 15-30 m. These maps are available in an electronic form suitable for inclusion in the Federal Emergency Management Agency's earthquake loss estimation program (HAZUS). © 2001 Elsevier Science B.V. All rights reserved.
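
    A shear-wave-velocity classification of the kind described above can be sketched with the standard NEHRP site classes (Vs30 thresholds in m/s). Whether the CUSEC maps use exactly these class boundaries is an assumption here; the thresholds below are the standard NEHRP ones.

```python
def nehrp_site_class(vs30_m_s):
    """Classify surficial materials by Vs30 (m/s) using the standard
    NEHRP scheme; softer classes amplify ground motion more strongly."""
    if vs30_m_s > 1500:
        return "A"   # hard rock
    if vs30_m_s > 760:
        return "B"   # rock
    if vs30_m_s > 360:
        return "C"   # very dense soil / soft rock
    if vs30_m_s > 180:
        return "D"   # stiff soil
    return "E"       # soft soil, strongest amplification

# Hypothetical Vs30 measurements for five map units:
classes = [nehrp_site_class(v) for v in (2000, 900, 500, 250, 120)]
```

    Maps of such classes are exactly the form of input HAZUS consumes for its site-amplification step.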

  10. The global historical and future economic loss and cost of earthquakes during the production of adaptive worldwide economic fragility functions

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers at country and sub-country level have been produced for population, GDP per capita, and net and gross capital stock (depreciated and non-depreciated) using studies, census information and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results and reproductions of earthquakes as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground motion effects and secondary effects costs. The shaking costs were further divided into gross-capital-stock-related and GDP-related costs for each administrative-unit and intensity-bound couplet. c) Costs were then estimated by optimising and regressing the functions of costs vs. gross capital stock and costs vs. GDP. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database. The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of
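
    The index-based conversion of historical losses into today's terms amounts to scaling a nominal loss by the ratio of index values; the index figures below are hypothetical, chosen only to illustrate the arithmetic.

```python
def loss_in_todays_terms(nominal_loss, index_event_year, index_today):
    """Convert a historical nominal loss to present-day terms with an
    index ratio (CPI, construction cost, or wage index), as done for
    CATDAT entries. Index values here are hypothetical."""
    return nominal_loss * index_today / index_event_year

# E.g. a $10M nominal loss in a year when the CPI stood at 12.5,
# converted with a hypothetical present-day CPI of 250:
adjusted = loss_in_todays_terms(10e6, 12.5, 250.0)
```

    In practice several indices (CPI, construction cost, wages, GDP deflator) give different answers, which is why CATDAT carries long-term series for each.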

  11. Estimating the Threat of Tsunamigenic Earthquakes and Earthquake Induced-Landslide Tsunami in the Caribbean

    NASA Astrophysics Data System (ADS)

    McCann, W. R.

    2007-05-01

    more likely to produce slow earthquakes. Subduction of rough seafloor may activate thrust faults within the accretionary prism above the main decollement, causing indentation of the prism toe. Later reactivation of a dormant decollement would enhance the possibility of slow earthquakes. Subduction of significant seafloor relief and corresponding indentation of the accretionary prism toe would then be another parameter to estimate the likelihood of slow earthquakes. Using these criteria, several regions of the Northeastern Caribbean stand out as more likely sources for slow earthquakes.

  12. Estimation of earthquake effects associated with a great earthquake in the New Madrid seismic zone

    USGS Publications Warehouse

    Hopper, Margaret G.; Algermissen, Sylvester Theodore; Dobrovolny, Ernest E.

    1983-01-01

    Estimates have been made of the effects of a large Ms = 8.6, Io = XI earthquake hypothesized to occur anywhere in the New Madrid seismic zone. The estimates are based on the distributions of intensities associated with the earthquakes of 1811-12, 1843 and 1895, although the effects of other historical shocks are also considered. The resulting composite-type intensity map for a maximum intensity of XI is believed to represent the upper level of shaking likely to occur. Specific intensity maps have been developed for six cities near the epicentral region, taking into account the most likely distribution of site response in each city. The intensities found are: IX for Carbondale, IL; VIII and IX for Evansville, IN; VI and VIII for Little Rock, AR; IX and X for Memphis, TN; VIII, IX, and X for Paducah, KY; and VIII and X for Poplar Bluff, MO. On a regional scale, intensities are found to attenuate from the New Madrid seismic zone most rapidly on the west and southwest sides of the zone, and most slowly to the northwest along the Mississippi River, to the northeast along the Ohio River, and to the southeast toward Georgia and South Carolina. Intensities attenuate toward the north, east, and south in a more normal fashion. Known liquefaction effects are documented, but much more research is needed to define the liquefaction potential.

  13. Combining earthquakes and GPS data to estimate the probability of future earthquakes with magnitude Mw ≥ 6.0

    NASA Astrophysics Data System (ADS)

    Chen, K.-P.; Tsai, Y.-B.; Chang, W.-Y.

    2013-10-01

    Wyss et al. (2000) indicate that future main earthquakes can be expected along zones characterized by low b values. In this study we combine Benioff strain with global positioning system (GPS) data to estimate the probability of future Mw ≥ 6.0 earthquakes for a grid covering Taiwan. An approach similar to the maximum likelihood method was used to estimate the Gutenberg-Richter parameters a and b. The two parameters were then used to estimate the probability of future earthquakes of Mw ≥ 6.0 for each of the 391 grid cells (grid interval = 0.1°) covering Taiwan. The method shows a high probability of earthquakes in western Taiwan along a zone that extends from Taichung southward to Nantou, Chiayi, Tainan and Kaohsiung. In eastern Taiwan, there also exists a high-probability zone from Ilan southward to Hualien and Taitung. These zones are characterized by high earthquake entropy, high maximum shear strain rates, and paths of low b values. A relation between entropy and maximum shear strain rate is also obtained; it indicates that the maximum shear strain rate is about 4.0 times the entropy. The results of this study should be of interest to city planners, especially those concerned with earthquake preparedness, and to earthquake insurers drawing up basic premiums.
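
    The Gutenberg-Richter step described above can be sketched with Aki's maximum-likelihood b-value estimator and a Poisson rate-to-probability conversion. The grid-cell catalogue and the a value below are hypothetical; the abstract does not give the actual per-cell parameters.

```python
import math

def gutenberg_richter_b(magnitudes, m_min, dm=0.1):
    """Aki's maximum-likelihood b-value for events with M >= m_min,
    with the standard half-bin correction for binned magnitudes."""
    ms = [m for m in magnitudes if m >= m_min]
    mean_m = sum(ms) / len(ms)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2))

def prob_at_least_one(annual_rate, years):
    """Poisson probability of at least one event in `years`."""
    return 1.0 - math.exp(-annual_rate * years)

# Hypothetical grid-cell catalogue of magnitudes:
mags = [4.0, 4.1, 4.3, 4.6, 4.2, 5.0, 4.4, 4.8, 4.1, 4.5]
b = gutenberg_richter_b(mags, m_min=4.0)
# With a hypothetical a = 4.0 (annual normalisation), the annual rate
# of Mw >= 6.0 follows from log10 N = a - b*M:
rate_m6 = 10 ** (4.0 - b * 6.0)
p_30yr = prob_at_least_one(rate_m6, 30)
```

    Low-b cells yield higher rates at Mw ≥ 6.0 for the same a, which is why low-b zones mark out the high-probability areas in the study.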

  14. Earthquake!

    ERIC Educational Resources Information Center

    Markle, Sandra

    1987-01-01

    A learning unit about earthquakes includes activities for primary grade students, including making inferences and defining operationally. Task cards are included for independent study on earthquake maps and earthquake measuring. (CB)

  15. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a ...

  16. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  17. Physics-based estimates of maximum magnitude of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin

    2016-04-01

    In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based relation between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations representative of the induced-seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).
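
    For comparison, the empirical relation of McGarr (2014) mentioned above bounds the total seismic moment of induced events by the product of shear modulus and injected volume, converted to moment magnitude with the Hanks-Kanamori relation. A minimal sketch; the shear modulus is a typical crustal value, not taken from the paper.

```python
import math

def mcgarr_mmax(injected_volume_m3, shear_modulus_pa=3e10):
    """Upper bound on induced-earthquake magnitude after McGarr (2014):
    the cumulative seismic moment is bounded by M0 <= G * dV (N*m),
    converted here with Mw = (2/3) * (log10(M0) - 9.1)."""
    m0_max = shear_modulus_pa * injected_volume_m3
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# E.g. a hypothetical 10,000 m^3 of injected fluid:
mmax = mcgarr_mmax(1e4)
```

    The study's point is precisely that a fracture-mechanics treatment predicts a different dependence on injected volume than this linear-moment bound.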

  18. Strong Ground Motion Estimation During the Kutch, India Earthquake

    NASA Astrophysics Data System (ADS)

    Iyengar, R. N.; Kanth, S. T. G. Raghu

    2006-01-01

    In the absence of strong motion records, ground motion during the 26 January 2001 Kutch, India earthquake has been estimated by analytical methods. A contour map of peak ground acceleration (PGA) values in the near-source region is provided. These results are validated by comparing them with spectral response recorder data and field observations. It is found that very near the epicenter, PGA would have exceeded 0.6 g. A set of three aftershock records has been used as empirical Green's functions to simulate the ground acceleration time history and 5% damped response spectrum at Bhuj City. It is found that at Bhuj, PGA would have been between 0.31 g and 0.37 g. It is demonstrated that source mechanism models can be effectively used to understand the spatial variability of large-scale ground movements near urban areas due to the rupture of active faults.

  19. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  20. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  1. Likely Human Losses in Future Earthquakes in Central Myanmar, Beyond the Northern end of the M9.3 Sumatra Rupture of 2004

    NASA Astrophysics Data System (ADS)

    Wyss, B. M.; Wyss, M.

    2007-12-01

    We estimate that the city of Rangoon and adjacent provinces (Rangoon, Rakhine, Ayeyarwady, Bago) represent an earthquake risk similar in severity to that of Istanbul and the Marmara Sea region. After the M9.3 Sumatra earthquake of December 2004, which ruptured to a point north of the Andaman Islands, the likelihood of additional ruptures in the direction of Myanmar and within Myanmar is increased. This assumption is especially plausible since M8.2 and M7.9 earthquakes in September 2007 extended the 2005 ruptures to the south. Given the dense population of the aforementioned provinces, and the fact that earthquakes of the M7.5 class have occurred there historically (in 1858, 1895 and three in 1930), it would not be surprising if similar-sized earthquakes occurred in the coming decades. Considering that we predicted the extent of human losses in the M7.6 Kashmir earthquake of October 2005 approximately correctly six months before it occurred, it seems reasonable to attempt to estimate losses in future large to great earthquakes in central Myanmar and along its coast of the Bay of Bengal. We have calculated the expected number of fatalities for two classes of events: (1) M8 ruptures offshore (between the Andaman Islands and the Myanmar coast, and along Myanmar's coast of the Bay of Bengal); (2) M7.5 repeats of the historic earthquakes that occurred in the aforementioned years. These calculations are only order-of-magnitude estimates because all necessary input parameters are poorly known. The population numbers, the condition of the building stock, the regional attenuation law, the local site amplification and, of course, the parameters of future earthquakes can only be estimated within wide ranges. For this reason, we give minimum and maximum estimates, both within approximate error limits. 
We conclude that the M8 earthquakes located offshore are expected to be less harmful than the M7.5 events on land: For M8 events offshore, the minimum number of fatalities is estimated

  2. Quantitative Estimates of the Numbers of Casualties to be Expected due to Major Earthquakes Near Megacities

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Wenzel, F.

    2004-12-01

    Defining casualties as the sum of fatalities plus injured, we use their mean number, as calculated by QUAKELOSS (developed by the Extreme Situations Research Center, Moscow), as a measure of the extent of possible disasters due to earthquakes. Examples of cities we examined include Algiers, Cairo, Istanbul, Mumbai and Teheran, with populations ranging from about 3 to 20 million. With the assumption that the properties of the building stock have not changed since 1950, we find that the number of expected casualties will have increased about 5- to 10-fold by the year 2015. This increase is directly proportional to the increase of the population. For the assumed magnitudes, we used M7 and M6.5 because shallow earthquakes in this range can occur in the seismogenic layer without rupturing the surface. This means they could occur anywhere in a seismically active area, not only along known faults. As a function of epicentral distance, the fraction of the population that become casualties decreases from about 6% at 20 km, to 3% at 30 km and 0.5% at 50 km, for an earthquake of M7. At 30 km distance, the assumed variation of the properties of the building stock from country to country gives rise to variations of 1% to 5% in the estimate of the percentage of the population that become casualties. As a function of earthquake size, the expected number of casualties drops by approximately an order of magnitude for an M6.5, compared to an M7, at 30 km distance. Because the computer code and database in QUAKELOSS are calibrated on about 1,000 earthquakes with fatalities, and verified by real-time loss estimates for about 60 cases, these results are probably of the correct order of magnitude. However, the results should not be taken as overly reliable, because (1) the probability calculations of the losses result in uncertainties of about a factor of two, (2) the method has been tested for medium-size cities, not for megacities, and (3) many assumptions were made. 
Nevertheless, it is

  3. Global Earthquake and Volcanic Eruption Economic losses and costs from 1900-2014: 115 years of the CATDAT database - Trends, Normalisation and Visualisation

    NASA Astrophysics Data System (ADS)

    Daniell, James; Skapski, Jens-Udo; Vervaeck, Armand; Wenzel, Friedemann; Schaefer, Andreas

    2015-04-01

    Over the past 12 years, an in-depth database has been constructed for socio-economic losses from earthquakes and volcanoes. The effects of earthquakes and volcanic eruptions have been documented in many databases; however, many errors and incorrect details are often encountered. To combat this, the database was formed with socioeconomic checks of GDP, capital stock, population and other elements, as well as providing upper and lower bounds to each available event loss. The definition of economic losses within the CATDAT Damaging Earthquakes Database (Daniell et al., 2011a), as of v6.1, has been redefined to provide three options of natural disaster loss pricing (reconstruction cost, replacement cost and actual loss), in order to better define the impact of historical disasters. For volcanoes, as for earthquakes, a reassessment has been undertaken of the historical net and gross capital stock and GDP at the time of each event, including the depreciated stock, in order to calculate the actual loss. A normalisation has then been undertaken using updated population, GDP and capital stock. The difference between depreciated and gross capital can be removed from the historical loss estimates, which have all been calculated without taking depreciation of the building stock into account. The combination of 1900-2014 time series of net and gross capital stock, GDP and direct economic loss data, together with detailed studies of infrastructure age and existing damage surveys, has allowed the first estimate of this nature. The death tolls in earthquakes from 1900-2014 are presented in various forms, showing around 2.32 million deaths due to earthquakes (with a range of 2.18 to 2.63 million), around 59% of them due to masonry buildings and 28% due to secondary effects. The volcanic eruption database shows around 98,000 deaths from 1900-2014, with a range from around 83,000 to 107,000. The application of VSL life costing from death and injury

  4. A comparison of socio-economic loss analysis from the 2013 Haiyan Typhoon and Bohol Earthquake events in the Philippines in near real-time

    NASA Astrophysics Data System (ADS)

    Daniell, James; Mühr, Bernhard; Kunz-Plapp, Tina; Brink, Susan A.; Kunz, Michael; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    In the aftermath of a disaster, the extent of the socioeconomic loss (fatalities, homelessness and economic losses) is often not known, and it may take days before a reasonable estimate is available. Using the technique of socio-economic fragility functions developed by Daniell (2014), based on a regression of socio-economic indicators through time against historical empirical loss-versus-intensity data, a first estimate can be established. With more information from the region as the disaster unfolds, a more detailed estimate can be provided via a calibration of the initial loss estimate parameters. In 2013, two main disasters hit the Philippines: the Bohol earthquake in October and Typhoon Haiyan in November. Although the two disasters were contrasting and hit different regions, the same generalised methodology was used for the initial rapid estimates and the subsequent updating of the disaster loss estimate through time. The CEDIM Forensic Disaster Analysis Group of KIT and GFZ produced six reports for Bohol and two reports for Haiyan detailing various aspects of the disasters, from the losses to building damage, the socioeconomic profile, and the social networking and disaster response. This study focuses on the loss analysis undertaken. The following technique was used: 1. A regression of historical earthquake and typhoon losses for the Philippines was examined using the CATDAT Damaging Earthquakes Database and various Philippine databases, respectively. 2. The historical intensity impact of the examined events was placed in a GIS environment to allow correlation with the population and capital stock database from 1900-2013 to create a loss function. The modified human development index from 1900-2013 was also used to calibrate events through time. 3. The earthquake intensity and the wind speed intensity from the 2013 events, together with the 2013 capital stock and population, were used to calculate the number of fatalities (except in Haiyan), homeless and

  5. Earthquake recurrence rate estimates for eastern Washington and the Hanford Site

    SciTech Connect

    Rohay, A.C.

    1989-08-01

    The historical and instrumental records of earthquakes were used to estimate earthquake recurrence rates for input to a new seismic hazard analysis at the Hanford Site in eastern Washington. Two areas were evaluated: the eastern Washington region and the smaller Yakima Fold Belt, in which the Hanford Site is located. The completeness of a catalog of earthquakes was evaluated for earthquakes with Modified Mercalli Intensity (MMI) IV through VII. Only one MMI VII earthquake was reported in the last 100 years in eastern Washington. The reporting of MMI VI earthquakes appears to be complete for the last 80 years, and the reporting of MMI V earthquakes appears to be complete for the last 65 years. However, MMI IV earthquakes are consistently under-reported. For a limited set of earthquakes, both MMI and local magnitude (ML) have been reported. A plot of these data indicated that the Gutenberg-Richter relationship could be used to estimate earthquake magnitudes from intensities. A recurrence curve for the historical earthquake data was calculated using the maximum likelihood method, including corrections for the width of the magnitude conversion. The slope of the recurrence curve (i.e., the b-value) was found to be -1.15. Another catalog, listing instrumentally detected earthquakes from 1969 to the present, was used to supplement the historical earthquake data. Magnitudes were determined using a coda-length method (Mc) that had been approximately calibrated to local magnitude ML. For earthquakes whose Mc was between 3 and 5, the b-value ranged from -1.07 to -1.12. 12 refs., 9 figs., 9 tabs.
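The maximum-likelihood b-value fit mentioned above is conventionally computed with the Aki/Utsu estimator, including a correction for the magnitude bin width. A sketch of that standard estimator (the catalog values below are invented for illustration; this is not the report's code):

```python
import math

def b_value_mle(mags, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value with binning correction:
    b = log10(e) / (mean(M) - (m_min - dm/2)), where dm is the magnitude
    bin width. The slope of the recurrence curve is then -b.
    """
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Tiny synthetic catalog complete above m_min = 3.0
print(round(b_value_mle([3.0, 3.2, 3.5, 3.8, 4.0], 3.0), 2))
```

A real analysis would first establish the completeness magnitude, as the abstract does via MMI reporting completeness, before applying the estimator.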

  6. Errors in Expected Human Losses Due to Incorrect Seismic Hazard Estimates

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Nekrasova, A.; Kossobokov, V. G.

    2011-12-01

    The probability of strong ground motion is presented in seismic hazard maps, in which peak ground accelerations (PGA) with 10% probability of exceedance in 50 years are shown by color codes. It has become evident that these maps do not correctly give the seismic hazard. On the seismic hazard map of Japan, the epicenters of recent large earthquakes are located in regions of relatively low mapped hazard. The errors of the GSHAP maps have been measured by the difference between observed and expected intensities due to large earthquakes. Here, we estimate how the errors in seismic hazard estimates propagate into errors in estimating potential fatalities and affected population. We calculated the numbers of fatalities that would have to be expected in the regions of the nine earthquakes with more than 1,000 fatalities during the last 10 years with relatively reliable fatality estimates, assuming a magnitude that generates, as a maximum intensity, the one given by the GSHAP maps. This value is the number of fatalities to be exceeded with 10% probability during 50 years. In most regions of devastating earthquakes, there are no instruments to measure ground accelerations. Therefore, we converted the PGA expected as a likely maximum based on the GSHAP maps to intensity. The magnitude of the earthquake that would cause the intensity expected by GSHAP as a likely maximum was calculated by M(GSHAP) = (I0 + 1.5)/1.5. The numbers of fatalities expected for earthquakes with M(GSHAP) were calculated using the loss-estimating program QLARM. We calibrated this tool for each case by calculating the theoretical damage and numbers of fatalities (Festim) for the disastrous test earthquakes, generating a match with the observed numbers of fatalities (Fobs = Festim) by adjusting the attenuation relationship within the bounds of commonly observed laws. Calculating the numbers of fatalities expected for the earthquakes with M(GSHAP) will thus yield results that
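The intensity-to-magnitude conversion quoted in the abstract is a single linear relation and can be expressed directly; a minimal sketch (the example intensity is arbitrary):

```python
def m_gshap(i0):
    """Magnitude of the earthquake whose maximum (epicentral) intensity
    is i0, per the abstract's relation M(GSHAP) = (I0 + 1.5) / 1.5."""
    return (i0 + 1.5) / 1.5

# An epicentral intensity of VIII (8) corresponds to M ~ 6.33
print(round(m_gshap(8), 2))
```

The preceding PGA-to-intensity conversion step is not specified in this record, so it is not reproduced here.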

  7. Estimating the extent of stress influence by using earthquake triggering groundwater level variations in Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Shih-Jung; Hsu, Kuo-Chin; Lai, Wen-Chi; Wang, Chein-Lee

    2015-11-01

    Groundwater level variations associated with earthquake events may reveal useful information. This study estimates the extent of stress influence, defined as the distance over which an earthquake can induce a step change of the groundwater level, using earthquake-triggered groundwater level variations in Taiwan. Groundwater variations were first characterized based on the dynamics of groundwater level changes dominantly triggered by earthquakes. The step-change data in co-seismic groundwater level variations were then used to analyze the extent of stress influence. From the data analysis, the maximum extent of stress influence is 250 km around Taiwan. A two-dimensional approach was adopted to develop two models for estimating the maximum extent of stress influence. In the developed models, the extent of stress influence is proportional to the earthquake magnitude and inversely proportional to the groundwater level change. The model equations can be used to calculate the influence radius of stress from an earthquake using the observed change of groundwater level and the earthquake magnitude. The models were applied to estimate the area of anomalous stress, defined as the possible area where strain energy is accumulated, using the cross-areas method. The results show that the estimated area of anomalous stress is close to, but does not coincide exactly with, the epicenter. Complex geological structures, material heterogeneity and anisotropy may explain this disagreement. More data collection and model refinement can improve the proposed model. This study shows the potential of using groundwater level variations for capturing seismic information. The proposed concept of the extent of stress influence can be used to estimate earthquake effects in hydraulic engineering, mining engineering, carbon dioxide sequestration, etc. This study provides a concept for estimating the possible areas of anomalous stress for a forthcoming earthquake.
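The abstract gives only the form of the model (influence radius proportional to magnitude and inversely proportional to the groundwater level change), not its fitted coefficients. A purely hypothetical sketch of that form, with an invented proportionality constant k that stands in for the paper's calibrated values:

```python
def influence_radius_km(magnitude, gw_change_m, k=10.0):
    """Influence radius of the stated form R = k * M / dh, where M is
    earthquake magnitude and dh the co-seismic groundwater step (m).
    The constant k (and hence the units of R) is hypothetical; the
    study's fitted coefficients are not given in this record."""
    return k * magnitude / gw_change_m

# M 6.0 event observed as a 0.5 m co-seismic groundwater step
print(influence_radius_km(6.0, 0.5))
```

With calibrated coefficients, solving the same relation for R from an observed (M, dh) pair is how the abstract's "influence radius" calculation would proceed.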

  8. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  9. Multicomponent seismic loss estimation on the North Anatolian Fault Zone (Turkey)

    NASA Astrophysics Data System (ADS)

    karimzadeh Naghshineh, S.; Askan, A.; Erberik, M. A.; Yakut, A.

    2015-12-01

    Seismic loss estimation is essential for incorporating the seismic risk of structures into an efficient decision-making framework. Evaluation of seismic damage to structures requires a multidisciplinary approach including earthquake source characterization, seismological prediction of earthquake-induced ground motions, prediction of structural responses under ground shaking, and finally estimation of the induced damage to structures. As the study region, Erzincan, a city in the eastern part of Turkey, is selected; it is located at the junction of three active strike-slip faults: the North Anatolian Fault, the Northeast Anatolian Fault and the Ovacik Fault. The Erzincan city center lies in a pull-apart basin underlain by soft sediments that has experienced devastating earthquakes, such as the 27 December 1939 (Ms=8.0) and the 13 March 1992 (Mw=6.6) events, resulting in extensive physical as well as economic losses. These losses are attributed not only to the high seismicity of the area but also to the seismic vulnerability of the built environment. This study focuses on the seismic damage estimation of Erzincan using both regional seismicity and local building information. For this purpose, first, ground motion records are selected from a set of scenario events simulated with the stochastic finite-fault methodology using regional seismicity parameters. Then, the existing building stock is classified into specified groups represented by equivalent single-degree-of-freedom systems. Through these models, the inelastic dynamic structural responses are investigated with non-linear time history analysis. To assess the potential seismic damage in the study area, fragility curves for the classified structural types are derived. Finally, the estimated damage is compared with the damage observed during the 1992 Erzincan earthquake. The results show a reasonable match, indicating the adequacy of the ground motion simulations and building analyses.

  10. Estimation of strong ground motions from hypothetical earthquakes on the Cascadia subduction zone, Pacific Northwest

    USGS Publications Warehouse

    Heaton, T.H.; Hartzell, S.H.

    1989-01-01

    Strong ground motions are estimated for the Pacific Northwest assuming that large shallow earthquakes, similar to those experienced in southern Chile, southwestern Japan, and Colombia, may also occur on the Cascadia subduction zone. Fifty-six strong motion recordings from twenty-five subduction earthquakes of Ms ≥ 7.0 are used to estimate the response spectra that may result from earthquakes of Mw < 8.25. Large variations in observed ground motion levels are noted for a given site distance and earthquake magnitude. When compared with motions that have been observed in the western United States, large subduction zone earthquakes produce relatively large ground motions at surprisingly large distances. An earthquake similar to the 22 May 1960 Chilean earthquake (Mw 9.5) is the largest event considered plausible for the Cascadia subduction zone. This event has a moment two orders of magnitude larger than the largest earthquake for which we have strong motion records. The empirical Green's function technique is used to synthesize strong ground motions for such giant earthquakes. Observed teleseismic P-waveforms from giant earthquakes are also modeled using the empirical Green's function technique in order to constrain model parameters. The teleseismic modeling in the period range of 1.0 to 50 sec strongly suggests that fewer Green's functions should be randomly summed than is required to match the long-period moments of giant earthquakes. It appears that a large portion of the moment associated with giant earthquakes occurs at very long periods that are outside the frequency band of interest for strong ground motions. Nevertheless, the occurrence of a giant earthquake in the Pacific Northwest may produce quite strong shaking over a very large region. © 1989 Birkhäuser Verlag.
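The random summation at the heart of the empirical Green's function technique can be sketched simply: copies of a small-event record are stacked with random time delays to synthesize the longer, larger target event. The record, the count and the delay window below are synthetic placeholders, and moment-conserving amplitude scaling (part of any real implementation) is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

def egf_random_sum(green, n, max_delay_samples):
    """Randomly sum n delayed copies of an empirical Green's function
    (a small-event record) to synthesize a larger event's time history.
    Real applications also scale each copy to match the target moment;
    that scaling is omitted in this sketch."""
    out = np.zeros(len(green) + max_delay_samples)
    for _ in range(n):
        k = int(rng.integers(0, max_delay_samples + 1))
        out[k:k + len(green)] += green
    return out

# Synthetic 100-sample "aftershock" record summed 25 times over a
# 400-sample rupture-duration window
small = rng.standard_normal(100)
big = egf_random_sum(small, 25, 400)
```

The abstract's point is about choosing n: matching long-period moment suggests a larger n than the teleseismic waveform modeling supports, because much of a giant earthquake's moment is released at periods outside the strong-motion band.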

  11. Mathematical models for estimating earthquake casualties and damage cost through regression analysis using matrices

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Bautista, L. A.; Baccay, E. B.

    2014-04-01

    The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, number of injured persons and affected families, as well as the total cost of damage. To quantify the direct damage from earthquakes to human beings and properties, given the magnitude, intensity, depth of focus, location of epicentre and time duration, regression models were constructed. The researchers formulated the models through regression analysis using matrices, with α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines in the inclusive years 1968 to 2012. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology. Data on damage and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. The mathematical models made are as follows: This study will be of great value in emergency planning and in initiating and updating programs for earthquake hazard reduction in the Philippines, which is an earthquake-prone country.
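The specific fitted models are not reproduced in this record, but "regression analysis using matrices" refers to the normal-equations formulation of least squares. A generic sketch of that formulation; the predictor and the toy data are hypothetical, not the study's:

```python
import numpy as np

def fit_regression(X, y):
    """Ordinary least squares via the normal equations,
    beta = (X'X)^(-1) X'y, with an intercept column prepended to X."""
    Xb = np.column_stack([np.ones(len(y)), X])
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# Toy data constructed so casualties = 2 + 3 * magnitude exactly
X = np.array([[5.0], [6.0], [7.0]])
y = np.array([17.0, 20.0, 23.0])
print(fit_regression(X, y))  # approximately [2., 3.]
```

In practice `np.linalg.lstsq` is preferred over explicitly solving the normal equations for numerical stability, but the matrix form above is the one the abstract names.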

  12. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid post-event casualty estimation for humanitarian response. Both of these events resulted in surprisingly high death tolls, numbers of casualties and numbers of survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people suffering serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores the data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.

  13. Improving Estimates of Coseismic Subsidence from southern Cascadia Subduction Zone Earthquakes at northern Humboldt Bay, California

    NASA Astrophysics Data System (ADS)

    Padgett, J. S.; Engelhart, S. E.; Hemphill-Haley, E.; Kelsey, H. M.; Witter, R. C.

    2015-12-01

    Geological estimates of subsidence from past earthquakes help to constrain Cascadia subduction zone (CSZ) earthquake rupture models. To improve subsidence estimates for past earthquakes along the southern CSZ, we apply transfer function analysis to microfossils from three intertidal marshes in northern Humboldt Bay, California, ~60 km north of the Mendocino Triple Junction. The transfer function method uses elevation-dependent intertidal foraminiferal and diatom assemblages to reconstruct the relative sea-level (RSL) change indicated by shifts in microfossil assemblages. We interpret stratigraphic evidence associated with sudden shifts in microfossils to reflect sudden RSL rise due to subsidence during past CSZ earthquakes. Laterally extensive (>5 km) and sharp mud-over-peat contacts beneath marshes at Jacoby Creek, Mad River Slough, and McDaniel Slough demonstrate widespread earthquake subsidence in northern Humboldt Bay. C-14 ages of plant macrofossils taken from above and below three contacts that correlate across all three sites provide estimates of the times of subsidence at ~250 yr BP, ~1300 yr BP and ~1700 yr BP. Two further contacts, observed at only two sites, provide evidence for subsidence during possible CSZ earthquakes at ~900 yr BP and ~1100 yr BP. Our study contributes 20 AMS radiocarbon ages of identifiable plant macrofossils that improve estimates of the timing of past earthquakes along the southern CSZ. We anticipate that our results will provide more accurate and precise reconstructions of the RSL change induced by southern CSZ earthquakes. Prior to our work, studies in northern Humboldt Bay provided subsidence estimates with vertical uncertainties >±0.5 m, too imprecise to adequately constrain earthquake rupture models. Our method, applied recently in coastal Oregon, has shown that subsidence during past CSZ earthquakes can be reconstructed with a precision of ±0.3 m, substantially improving constraints on the rupture models used for seismic hazard

  14. Uncertainty of earthquake losses due to model uncertainty of input ground motions in the Los Angeles area

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.

    2006-01-01

    In a recent study, we used the Monte Carlo simulation method to evaluate the ground-motion uncertainty of the 2002 update of the California probabilistic seismic hazard model. The resulting ground-motion distribution is used in this article to evaluate the contribution of the hazard model to the uncertainty in the earthquake loss ratio, the ratio of the expected loss to the total value of a structure. We use the Hazards U.S. (HAZUS) methodology for loss estimation because it is a widely used, publicly available risk model intended for regional studies by public agencies and for use by governmental decision makers. We found that the loss ratio uncertainty depends not only on the ground-motion uncertainty but also on the mean ground-motion level. The ground-motion uncertainty, as measured by the coefficient of variation (COV), is amplified when converting to the loss ratio uncertainty because loss increases concavely with ground motion. By comparing the ground-motion uncertainty with the corresponding loss ratio uncertainty for the structural damage of light wood-frame buildings in the Los Angeles area, we show that the COV of the loss ratio is almost twice the COV of ground motion with a return period of 475 years around the San Andreas fault and other major faults in the area. The loss ratio for the 2475-year ground-motion maps is about a factor of three higher than for the 475-year maps. However, the uncertainties in ground motion and loss ratio for the longer return periods are lower than for the shorter return periods, because the uncertainty parameters in the hazard logic tree are independent of the return period while the mean ground motion increases with return period.
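The amplification of the coefficient of variation when mapping ground motion to loss can be illustrated with a small Monte Carlo sketch, assuming a lognormal ground-motion distribution and a hypothetical power-law vulnerability curve; neither the numbers nor the curve come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Lognormal PGA samples around 0.4 g (illustrative values only)
pga = rng.lognormal(mean=np.log(0.4), sigma=0.3, size=100_000)

# Hypothetical vulnerability curve: loss ratio grows as PGA^2,
# capped at total loss (ratio 1.0)
loss_ratio = np.minimum(1.0, 0.5 * pga**2)

def cov(x):
    return x.std() / x.mean()

# The loss-ratio COV comes out roughly twice the ground-motion COV
print(round(cov(pga), 2), round(cov(loss_ratio), 2))
```

The roughly factor-of-two amplification seen here depends entirely on the assumed curvature of the vulnerability function; the study derives its amplification from the HAZUS loss functions instead.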

  15. Estimating surface faulting impacts from the ShakeOut scenario earthquake

    USGS Publications Warehouse

    Treiman, J.A.; Ponti, D.J.

    2011-01-01

    An earthquake scenario, based on a kinematic rupture model, has been prepared for a Mw 7.8 earthquake on the southern San Andreas Fault. The rupture distribution, in the context of other historic large earthquakes, is judged reasonable for the purposes of this scenario. This model is used as the basis for generating a surface rupture map and for assessing potential direct impacts on lifelines and other infrastructure. Modeling the surface rupture involves identifying fault traces on which to place the rupture, assigning slip values to the fault traces, and characterizing the specific displacements that would occur to each lifeline impacted by the rupture. Different approaches were required to address the variable slip distribution in response to a variety of fault patterns. Our results, involving judgment and experience, represent one plausible outcome and are not predictive because of the variable nature of surface rupture. © 2011, Earthquake Engineering Research Institute.

  16. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes with magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated probability time series for the whole Taiwan region and for three subareas (the western, eastern, and northeastern Taiwan regions) using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually obtained for clustered events, such as events with foreshocks and sequences of events occurring within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values appear around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the maps are not random forecasts, but they also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and the probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, while the probability obtained from the activation model increases as the large earthquakes occur. These results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  17. Bayesian estimation of system reliability under asymmetric loss

    NASA Astrophysics Data System (ADS)

    Thompson, Ronald David

    This research is concerned with estimating the reliability of a k-out-of-p system when the lifetimes of its p components are iid, when subjective beliefs about the behavior of the system's individual components are available, and when losses corresponding to overestimation and underestimation errors can be approximated by a suitable family of asymmetric loss functions. Point estimates for such systems are discussed in the context of Bayes estimation with respect to loss functions. A set of properties is proposed as being minimal properties that all loss functions appropriate to reliability estimation might satisfy. Several families of asymmetric loss functions that satisfy these minimal properties are discussed, and their corresponding posterior Bayes estimators are derived. One of these families, squarex loss functions, is a generalization of linex loss functions. The concept of loss robustness is discussed in the context of parametric families of asymmetric loss functions. As an application, the reliability of O-rings critical to the 1986 catastrophic failure of the Space Shuttle Challenger is estimated. Point estimation of negative exponential stress-strength k-out-of-p systems with respect to reference priors is discussed in this context of asymmetric loss functions.

  18. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ≈ 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
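
    The BPT distribution is the inverse Gaussian with mean μ and shape λ = μ/α², so the hazard function can be evaluated directly. A sketch using that standard parameterization (an assumption consistent with, but not copied from, the paper) reproduces the ≈ 2/μ hazard quoted for α = 0.5:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_pdf(t, mu, alpha):
    """Brownian passage time density with mean mu and aperiodicity alpha."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha):
    """Inverse-Gaussian CDF with shape lambda = mu / alpha^2."""
    lam = mu / alpha**2
    u = math.sqrt(lam / t)
    return norm_cdf(u * (t / mu - 1.0)) + \
        math.exp(2.0 * lam / mu) * norm_cdf(-u * (t / mu + 1.0))

def hazard(t, mu, alpha):
    """Instantaneous failure rate of survivors, f(t) / (1 - F(t))."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))

mu, alpha = 1.0, 0.5           # time measured in units of the mean interval
print(hazard(1.0, mu, alpha))  # close to 2/mu, as stated for t > mu
```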

  19. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    The April 20, 2013 Ms 7.0 earthquake in Lushan, Sichuan province, China, occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Boxing and Lushan, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circle model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion, with the associated fault rupture properties, at Boxing and Lushan, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake field. The simulated maximum intensity is IX, and the region of intensity VII and above covers almost 16,000 km², consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. Moreover, the estimation methods, based on empirical relationships and the numerical modeling developed in this study, have wide application in strong ground motion prediction and intensity estimation for earthquake rescue purposes. Keywords: Lushan Ms7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
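
    The Brune circle-model step can be illustrated with the standard relations r = 2.34β/(2πfc) for source radius and Δσ = 7M0/(16r³) for static stress drop; the numbers below are illustrative only, not the paper's inverted Lushan parameters:

```python
import math

def brune_source_radius(fc_hz, beta_m_s):
    """Source radius from corner frequency (Brune, 1970): r = 2.34*beta/(2*pi*fc)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop(m0_nm, radius_m):
    """Static stress drop of a circular crack: 7*M0/(16*r^3), in Pa."""
    return 7.0 * m0_nm / (16.0 * radius_m**3)

# Illustrative values, not inverted from the Lushan records.
beta = 3500.0   # shear-wave speed, m/s
fc = 0.2        # corner frequency, Hz
m0 = 1.4e19     # seismic moment, N*m (roughly Mw 6.7)

r = brune_source_radius(fc, beta)
print(r / 1000.0, stress_drop(m0, r) / 1e6)  # radius in km, stress drop in MPa
```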

  20. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.
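
    A toy version of the genetic-algorithm idea, minimizing the misfit between observed and predicted displacements for a made-up linear Green's-function matrix (this is a sketch of the general technique, not the paper's Genetic Algorithm Slip Estimator):

```python
import random

random.seed(1)

# Toy linear forward problem d = G . s: three "coral" observation points,
# four fault patches. G is a made-up Green's-function matrix.
G = [[0.8, 0.3, 0.1, 0.0],
     [0.2, 0.7, 0.4, 0.1],
     [0.0, 0.2, 0.6, 0.9]]
true_slip = [1.0, 2.0, 0.5, 1.5]
observed = [sum(g * s for g, s in zip(row, true_slip)) for row in G]

def misfit(slip):
    """Sum of squared residuals between predicted and observed displacements."""
    pred = [sum(g * s for g, s in zip(row, slip)) for row in G]
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

def crossover(a, b):
    """Pick each patch's slip from one of the two parents."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(s, scale=0.1):
    """Perturb each patch, keeping slip non-negative."""
    return [max(0.0, x + random.gauss(0.0, scale)) for x in s]

# Simple generational GA: keep the fittest half, refill with mutated crossovers.
pop = [[random.uniform(0.0, 3.0) for _ in range(4)] for _ in range(60)]
initial_best = min(misfit(p) for p in pop)
for _ in range(200):
    pop.sort(key=misfit)
    parents = pop[:30]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)]
    pop = parents + children
final_best = min(misfit(p) for p in pop)
print(initial_best, final_best)  # misfit shrinks as the population evolves
```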

  1. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  2. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of earthquakes of different magnitudes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and uses best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault, drawn from literature sources. Our study applies this model to the Taipei metropolitan area, with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77%, and within 300 years at 21.22%. These estimates increase significantly for a magnitude 6 earthquake: the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years 13.54%, and within 300 years 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probability of Taiwan residents experiencing heart disease or malignant neoplasm is 11.5% and 29%, respectively. The inference of this study is that the risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is comparable to the risk of suffering from heart disease or other health ailments.
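
    For reference, the textbook Poisson occurrence probability is P = 1 − exp(−t/T); with T = 543 years it gives roughly 3.6%, 13.5%, and 42.4% at 20, 79, and 300 years, close to the magnitude-6 figures quoted above (the revised model's magnitude-7 figures differ because of its additional probability weighting, which this sketch does not reproduce):

```python
import math

def poisson_occurrence_probability(t_years, return_period_years):
    """P(at least one event in t years) = 1 - exp(-t/T) for a Poisson process."""
    return 1.0 - math.exp(-t_years / return_period_years)

T = 543.0  # return period quoted in the abstract, years
for t in (20.0, 79.0, 300.0):
    print(t, poisson_occurrence_probability(t, T))
```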

  3. Coastal land loss and gain as potential earthquake trigger mechanism in SCRs

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2007-12-01

    In stable continental regions (SCRs), historic data show that earthquakes can be triggered by natural tectonic sources in the interior of the crust and also by sources at or near the Earth's surface. Building on this framework, this abstract discusses both as potential sources that might have triggered the 2007 ML4.2 Folkestone earthquake in Kent, England. Folkestone, located along the southeast coast of Kent, is a mature aseismic region. However, a shallow earthquake with a local magnitude of ML = 4.2 occurred on 28 April 2007 at 07:18 UTC about 1 km east of Folkestone (51.008° N, 1.206° E) between Dover and New Romney. The epicentral error is about ±5 km. While coastal land loss has major effects to the southwest and northeast of Folkestone, research observations suggest that erosion and landsliding are absent in the immediate Folkestone city area (<1 km). Furthermore, erosion removes rock material from the surface. This mass reduction decreases the gravitational stress component and would move a fault away from failure, given a tectonic normal and strike-slip fault regime. In contrast, land gain by geoengineering (e.g., shingle accumulation) in the harbor of Folkestone dates back to 1806. The accumulated mass of sand and gravel amounted to 2.8×10⁹ kg (2.8 Mt) in 2007. This concentrated mass change, less than 1 km from the epicenter of the mainshock, was able to change the tectonic stress in the strike-slip/normal stress regime. Since 1806, shear and normal stresses have increased most on oblique faults dipping 60±10°. The stresses reached values between 1.0 kPa and 30.0 kPa at depths of up to 2 km, which are critical for triggering earthquakes. Furthermore, the ratio between holding and driving forces continuously decreased for 200 years. In conclusion, coastal engineering at the surface most likely dominates as the potential trigger mechanism for the 2007 ML4.2 Folkestone earthquake. It can be anticipated that
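
    An order-of-magnitude check on the quoted kPa-level stresses can be made with the Boussinesq solution for vertical stress beneath a surface point load, Δσ_z = 3Fz³/(2πR⁵); idealizing the distributed shingle mass as a point load is an assumption of this sketch, not the abstract's model:

```python
import math

def boussinesq_vertical_stress(force_n, depth_m, r_m=0.0):
    """Vertical stress change at depth z and horizontal offset r under a point
    load F on an elastic half-space: 3*F*z^3 / (2*pi*R^5), R = sqrt(r^2+z^2)."""
    R = math.hypot(r_m, depth_m)
    return 3.0 * force_n * depth_m**3 / (2.0 * math.pi * R**5)

mass_kg = 2.8e9        # accumulated shingle mass quoted in the abstract
F = mass_kg * 9.81     # equivalent surface load, N
for depth in (1000.0, 2000.0):
    # Directly beneath the load, values come out at the kPa scale,
    # consistent with the 1.0-30.0 kPa range quoted above.
    print(depth, boussinesq_vertical_stress(F, depth))
```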

  4. Estimating locations and magnitudes of earthquakes in eastern North America from Modified Mercalli intensities

    USGS Publications Warehouse

    Bakun, W.H.; Johnston, A.C.; Hopper, M.G.

    2003-01-01

    We use 28 calibration events (3.7 ≤ M ≤ 7.3) from Texas to the Grand Banks, Newfoundland, to develop a Modified Mercalli intensity (MMI) model and associated site corrections for estimating source parameters of historical earthquakes in eastern North America. The model, MMI = 1.41 + 1.68 M − 0.00345 Δ − 2.08 log(Δ), where Δ is the distance in kilometers from the epicenter and M is moment magnitude, provides unbiased estimates of M and its uncertainty, and, if site corrections are used, of source location. The model can be used for the analysis of historical earthquakes with only a few MMI assignments. We use this model, MMI site corrections, and Bakun and Wentworth's (1997) technique to estimate M and the epicenter for three important historical earthquakes. The intensity magnitude MI is 6.1 for the 18 November 1755 earthquake near Cape Ann, Massachusetts; 6.0 for the 5 January 1843 earthquake near Marked Tree, Arkansas; and 6.0 for the 31 October 1895 earthquake. The 1895 event probably occurred in southern Illinois, about 100 km north of the site of significant ground failure effects near Charleston, Missouri.
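
    The relation MMI = 1.41 + 1.68 M − 0.00345 Δ − 2.08 log(Δ) is linear in M, so a single MMI assignment at a known distance yields a magnitude estimate directly. A sketch, assuming (as is conventional for intensity attenuation) that the logarithm is base 10:

```python
import math

def predicted_mmi(m, delta_km):
    """Bakun-Johnston-Hopper (2003) eastern North America intensity relation."""
    return 1.41 + 1.68 * m - 0.00345 * delta_km - 2.08 * math.log10(delta_km)

def magnitude_from_mmi(mmi_obs, delta_km):
    """Invert the relation for M (it is linear in M)."""
    return (mmi_obs - 1.41 + 0.00345 * delta_km
            + 2.08 * math.log10(delta_km)) / 1.68

print(predicted_mmi(6.0, 100.0))  # about MMI 7 at 100 km from an M 6 event
print(magnitude_from_mmi(predicted_mmi(6.0, 100.0), 100.0))  # recovers 6.0
```

    In practice the paper averages over many site-corrected MMI assignments rather than a single one; this sketch just shows the forward relation and its inversion.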

  5. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground-motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  6. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    SciTech Connect

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  7. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-01-01

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  8. A discussion of the socio-economic losses and shelter impacts from the Van, Turkey Earthquakes of October and November 2011

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Kunz-Plapp, T.; Vervaeck, A.; Muehr, B.; Markus, M.

    2012-04-01

    The Van earthquake in 2011 hit at 10:41 GMT (13:41 local) on Sunday, October 23rd, 2011. It was a Mw7.1-7.3 event located at a depth of around 10 km, with the epicentre located directly between Ercis (pop. 75,000) and Van (pop. 370,000). Since then, the CEDIM Forensic Analysis Group (a team of seismologists, engineers, sociologists and meteorologists) and www.earthquake-report.com have reported on and analysed the Van event. In addition, many damaging aftershocks occurring after the main event were analysed, including a major aftershock centered in Van-Edremit on November 9th, 2011, which caused significant additional losses. The province of Van has around 1.035 million people as of the last census. The Van province is one of the poorest in Turkey and has much inequality between the rural and urban centers, with an average HDI (Human Development Index) around that of Bhutan or Congo. The earthquakes are estimated to have caused 604 deaths (23 October) and 40 deaths (9 November), mostly due to falling debris and house collapse. In addition, between 1 billion TRY and 4 billion TRY (approx. 555 million USD - 2.2 billion USD) is estimated as total economic losses. This represents around 17 to 66% of the provincial GDP of the Van Province (approx. 3.3 billion USD) as of 2011. From the CATDAT Damaging Earthquakes Database, major earthquakes such as this one have occurred before: in the year 1111, an earthquake with a magnitude around 6.5-7 caused major damage. In 1646 or 1648, Van was again struck by a M6.7 quake killing around 2000 people. In 1881, a M6.3 earthquake near Van killed 95 people. Again, in 1941, a M5.9 earthquake affected Ercis and Van, killing between 190 and 430 people. 1945-1946 as well as 1972 again brought damaging and casualty-bearing earthquakes to the Van province. In 1976, the Van-Muradiye earthquake struck the border region with a M7, killing around 3840 people and leaving around 51,000 people homeless. Key immediate lessons from similar historic

  9. Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models

    NASA Astrophysics Data System (ADS)

    Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.

    2012-04-01

    The present paper applies hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. The models generalize hidden Markov models by allowing the hidden process to form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well catalogued, seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and a confidence interval for the proposed estimator is given. We assume the state spaces of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed, and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval calculated.
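
    An HSMM generalizes the hidden Markov model by adding explicit state-duration distributions; the underlying filtering idea can be shown with a plain HMM forward filter over two hypothetical stress states (all transition and emission numbers below are made up for illustration):

```python
# Minimal hidden-Markov forward filter over two hypothetical stress states.
# An HSMM would additionally model state-duration distributions; this sketch
# keeps the plain Markov case to show the filtering recursion only.

transition = {"low":  {"low": 0.9, "high": 0.1},
              "high": {"low": 0.3, "high": 0.7}}
# Emission: probability of observing each magnitude class in each state.
emission = {"low":  {"M6": 0.8, "M7": 0.2},
            "high": {"M6": 0.4, "M7": 0.6}}
states = ["low", "high"]

def forward_filter(observations, prior):
    belief = dict(prior)
    for obs in observations:
        # Predict: propagate the belief through the transition matrix.
        predicted = {s: sum(belief[r] * transition[r][s] for r in states)
                     for s in states}
        # Update: weight by the emission likelihood and renormalize.
        unnorm = {s: predicted[s] * emission[s][obs] for s in states}
        z = sum(unnorm.values())
        belief = {s: unnorm[s] / z for s in states}
    return belief

belief = forward_filter(["M6", "M7", "M7"], {"low": 0.5, "high": 0.5})
print(belief)  # repeated strong events shift belief toward the high-stress state
```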

  10. Using Modified Mercalli Intensities to estimate acceleration response spectra for the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Boatwright, J.; Bundock, H.; Seekins, L.C.

    2006-01-01

    We derive and test relations between the Modified Mercalli Intensity (MMI) and the pseudo-acceleration response spectra at 1.0 and 0.3 s - SA(1.0 s) and SA(0.3 s) - in order to map response spectral ordinates for the 1906 San Francisco earthquake. Recent analyses of intensity have shown that MMI ≥ 6 correlates both with peak ground velocity and with response spectra for periods from 0.5 to 3.0 s. We use these recent results to derive a linear relation between MMI and log SA(1.0 s), and we refine this relation by comparing the SA(1.0 s) estimated from Boatwright and Bundock's (2005) MMI map for the 1906 earthquake to the SA(1.0 s) calculated from recordings of the 1989 Loma Prieta earthquake. South of San Jose, the intensity distributions for the 1906 and 1989 earthquakes are remarkably similar, despite the difference in magnitude and rupture extent between the two events. We use recent strong motion regressions to derive a relation between SA(1.0 s) and SA(0.3 s) for a M7.8 strike-slip earthquake that depends on soil type, acceleration level, and source distance. We test this relation by comparing SA(0.3 s) estimated for the 1906 earthquake to SA(0.3 s) calculated from recordings of both the 1989 Loma Prieta and 1994 Northridge earthquakes, as functions of distance from the fault. © 2006, Earthquake Engineering Research Institute.

  11. Estimating convective energy losses from solar central receivers

    SciTech Connect

    Siebers, D L; Kraabel, J S

    1984-04-01

    This report outlines a method for estimating the total convective energy loss from a receiver of a solar central receiver power plant. Two types of receivers are considered in detail: a cylindrical, external-type receiver and a cavity-type receiver. The method is intended to provide the designer with a tool for estimating the total convective energy loss that is based on current knowledge of convective heat transfer from receivers to the environment and that is adaptable to new information as it becomes available. The current knowledge consists of information from two recent large-scale experiments, as well as information already in the literature. Also outlined is a method for estimating the uncertainty in the convective loss estimates. Sample estimates of the total convective energy loss, and of its uncertainty, for the external receiver of the 10 MWe Solar Thermal Central Receiver Plant (Barstow, California) and the cavity receiver of the International Energy Agency Small Solar Power Systems Project (Almeria, Spain) are included in the appendices.

  12. Estimating the Probability of Earthquake-Induced Landslides

    NASA Astrophysics Data System (ADS)

    McRae, M. E.; Christman, M. C.; Soller, D. R.; Sutter, J. F.

    2001-12-01

    The development of a regionally applicable, predictive model for earthquake-triggered landslides is needed to improve mitigation decisions at the community level. The distribution of landslides triggered by the 1994 Northridge earthquake in the Oat Mountain and Simi Valley quadrangles of southern California provided an inventory of failures against which to evaluate the significance of a variety of physical variables in probabilistic models of static slope stability. Through a cooperative project, the California Division of Mines and Geology provided 10-meter resolution data on elevation, slope angle, coincidence of bedding plane and topographic slope, distribution of pre-Northridge landslides, internal friction angle and cohesive strength of individual geologic units. Hydrologic factors were not evaluated since failures in the study area were dominated by shallow, disrupted landslides in dry materials. Previous studies indicate that 10-meter digital elevation data is required to properly characterize the short, steep slopes on which many earthquake-induced landslides occur. However, to explore the robustness of the model at different spatial resolutions, models were developed at the 10, 50, and 100-meter resolution using classification and regression tree (CART) analysis and logistic regression techniques. Multiple resampling algorithms were tested for each variable in order to observe how resampling affects the statistical properties of each grid, and how relationships between variables within the model change with increasing resolution. Various transformations of the independent variables were used to see which had the strongest relationship with the probability of failure. These transformations were based on deterministic relationships in the factor of safety equation. Preliminary results were similar for all spatial scales. Topographic variables dominate the predictive capability of the models. 
The distribution of prior landslides and the coincidence of slope
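
    The logistic-regression component described above can be sketched on synthetic slope-angle data with a stdlib-only gradient-descent fit; the coefficients and data below are invented for illustration, not the study's:

```python
import math, random

random.seed(2)

# Synthetic training data: failure probability rises with slope angle.
def simulate(n=2000):
    data = []
    for _ in range(n):
        slope = random.uniform(0.0, 45.0)  # degrees
        p_fail = 1.0 / (1.0 + math.exp(-(0.15 * slope - 4.0)))
        data.append((slope, 1 if random.random() < p_fail else 0))
    return data

def fit_logistic(data, lr=0.001, epochs=300):
    """Fit P(failure) = sigmoid(w*slope + b) by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

data = simulate()
w, b = fit_logistic(data)
p = lambda slope: 1.0 / (1.0 + math.exp(-(w * slope + b)))
print(p(10.0), p(40.0))  # steeper slopes get higher failure probability
```

    A real model would add the other predictors named above (strength, bedding-plane coincidence, prior landslides); one continuous predictor keeps the sketch short.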

  13. LOSS ESTIMATE FOR ITER ECH TRANSMISSION LINE INCLUDING MULTIMODE PROPAGATION

    SciTech Connect

    Shapiro, Michael; Bigelow, Tim S; Caughman, John B; Rasmussen, David A

    2010-01-01

    The ITER electron cyclotron heating (ECH) transmission lines (TLs) are 63.5-mm-diam corrugated waveguides that will each carry 1 MW of power at 170 GHz. The TL is defined here as the corrugated waveguide system connecting the gyrotron mirror optics unit (MOU) to the entrance of the ECH launcher and includes miter bends and other corrugated waveguide components. The losses on the ITER TL have been calculated for four possible cases corresponding to HE(11) mode purity at the input of the TL of 100, 97, 90, and 80%. The losses due to coupling, ohmic, and mode conversion loss are evaluated in detail using a numerical code and analytical approaches. Estimates of the calorimetric loss on the line show that the output power is reduced by about 5 ± 1% because of ohmic loss in each of the four cases. Estimates of the mode conversion loss show that the fraction of output power in the HE(11) mode is ~3% smaller than the fraction of input power in the HE(11) mode. High output mode purity therefore can be achieved only with significantly higher input mode purity. Combining both ohmic and mode conversion loss, the efficiency of the TL from the gyrotron MOU to the ECH launcher can be roughly estimated in theory as 92% times the fraction of input power in the HE(11) mode.
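
    The closing rule of thumb reduces to a one-liner; the 0.92 factor is the abstract's quoted combined-loss value, and treating it as a simple multiplier across the four purity cases is this sketch's assumption:

```python
def tl_efficiency(he11_input_fraction, combined_loss_factor=0.92):
    """Rough MOU-to-launcher efficiency: ~92% times the input HE(11) purity."""
    return combined_loss_factor * he11_input_fraction

# The four input-purity cases considered in the paper.
for purity in (1.00, 0.97, 0.90, 0.80):
    print(purity, tl_efficiency(purity))
```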

  14. The importance of in-situ observations for rapid loss estimates in the Euro-Med region

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Mazet Roux, G.; Gilles, S.

    2009-04-01

    A major (M>7) earthquake occurring in a densely populated area will inevitably cause significant damage and, generally speaking, the poorer the country, the higher the number of fatalities. It was clear to any earthquake monitoring agency that the M7.8 Wenchuan earthquake in May 2008 was a disaster as soon as its magnitude and location had been estimated. However, loss estimation for moderate to strong earthquakes (M5 to M6) occurring close to an urban area is much trickier, because the losses result from the convolution of many parameters (location, magnitude, depth, directivity, seismic attenuation, site effects, building vulnerability, repartition of the population at the time of the event…) which are either affected by non-negligible uncertainties or poorly constrained, at least at a global scale. Consider just one of these parameters, the epicentral location: in this range of magnitude, the characteristic size of the potentially damaged area is comparable to the typical epicentral location uncertainty obtained in real time, i.e. 10 to 15 km. It is then not possible to discriminate in real time between an earthquake located right below a town, which could cause significant damage, and a location 15 km away, whose impact would be much lower. Clearly, even if the uncertainties affecting each of the parameters are properly taken into account, the resulting loss scenarios for such earthquakes will range from no impact to very significant impact, and the results will not be of much use. The way to reduce the uncertainties on the loss estimates in such cases is to collect in-situ information on the local shaking level and/or on the actual damage at a number of localities. In areas of low seismic hazard, the cost of installing dense accelerometric networks is, in practice, too high, and the only remaining solution is to rapidly collect observations of the damage. That is what the EMSC has been developing for the last few years by involving the Citizen in

  15. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional Mo or regional Mw. Conceptually, this method uses an earthquake source model along with an attenuation model and geometrical spreading, which accounts for the propagation, to utilize regional phase amplitudes of any phase and frequency. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined with sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional Mo to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller ones. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find the magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields the more meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
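
    The final recasting step, seismic moment to Mw, conventionally uses the IASPEI standard relation Mw = (2/3)(log10 M0 − 9.1) with M0 in N·m; this is the standard formula, not necessarily the exact constants used in the paper:

```python
import math

def moment_magnitude(m0_newton_meters):
    """IASPEI standard relation: Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(moment_magnitude(10 ** 19.1))  # -> about Mw 6.67
print(moment_magnitude(1.26e16))     # a much smaller event, about Mw 4.7
```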

  16. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  17. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  18. Modified Mercalli Intensity for scenario earthquakes in Evansville, Indiana

    USGS Publications Warehouse

    Cramer, Chris; Haase, Jennifer; Boyd, Oliver

    2012-01-01

    Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the fact that Evansville is close to the Wabash Valley and New Madrid seismic zones, there is concern about the hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake. Earthquake-hazard maps provide one way of conveying such estimates of strong ground shaking and will help the region prepare for future earthquakes and reduce earthquake-caused losses.

  19. Rapid Estimation of Macroseismic Intensity for On-site Earthquake Early Warning in Italy from Early Radiated Energy

    NASA Astrophysics Data System (ADS)

    Emolo, A.; Zollo, A.; Brondi, P.; Picozzi, M.; Mucciarelli, M.

    2015-12-01

    Earthquake Early Warning Systems (EEWS) are effective tools for risk mitigation in active seismic regions. Recently, a feasibility study of a nationwide earthquake early warning system was conducted for Italy, considering the RAN network and the EEW software platform PRESTo. This work showed that reliable estimates of magnitude and epicentral location would be available within 3-4 seconds after the first P-wave arrival. On the other hand, given the RAN's density, a regional EEWS approach would result in a Blind Zone (BZ) of 25-30 km on average. Such a BZ would provide lead times greater than zero only for events of magnitude larger than 6.5. Considering that in Italy smaller events are also capable of causing great losses in both human and economic terms, as dramatically experienced during the recent 2009 L'Aquila (ML 5.9) and 2012 Emilia (ML 5.9) earthquakes, it has become urgent to develop and test on-site approaches. The present study focuses on the development of a new on-site EEW methodology for estimating the macroseismic intensity at a target site or area. In this analysis we used a few thousand accelerometric traces recorded by the RAN for the largest earthquakes (ML>4) that occurred in Italy in the period 1997-2013. The work focuses on the integral EW parameter Squared Velocity Integral (IV2) and on its capability to predict the peak ground velocity (PGV) and the Housner Intensity (IH); from these we parameterized a new relation between IV2 and macroseismic intensity. To assess the performance of the developed on-site EEW relation, we used data from the largest events that occurred in Italy in the last 6 years, recorded by the Osservatorio Sismico delle Strutture, as well as recordings of moderate earthquakes reported by INGV Strong Motion Data. The results show that the macroseismic intensity values predicted by IV2 and those estimated from PGV and IH are in good agreement.
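    The IV2 parameter and a log-linear mapping to intensity can be sketched as follows; the relation coefficients a and b are hypothetical placeholders, not the calibrated values of the study.

```python
# Sketch of the integral EW parameter IV2 (squared velocity integral) and a
# hypothetical log-linear mapping to macroseismic intensity.
import numpy as np

def iv2(vel, dt):
    """Integral of squared ground velocity over the early waveform window."""
    return float(np.sum(np.asarray(vel) ** 2) * dt)

def intensity_from_iv2(iv2_val, a=1.0, b=8.0):
    """Hypothetical relation I = a*log10(IV2) + b (a, b are placeholders)."""
    return a * np.log10(iv2_val) + b

dt = 0.01                     # 100 Hz sampling
t = np.arange(0, 3, dt)       # 3 s early window
vel = 0.02 * np.sin(2 * np.pi * 5 * t) * np.exp(-t)  # toy velocity trace, m/s
val = iv2(vel, dt)
```

    In practice a and b would be regressed against PGV- and IH-derived intensities, as described above.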

  20. Strong earthquake motion estimates for three sites on the U.C. San Diego campus

    SciTech Connect

    Day, S; Doroudian, M; Elgamal, A; Gonzales, S; Heuze, F; Lai, T; Minster, B; Oglesby, D; Riemer, M; Vernon, F; Vucetic, M; Wagoner, J; Yang, Z

    2002-05-07

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill, sample, and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling

  1. Strong Earthquake Motion Estimates for Three Sites on the U.C. Riverside Campus

    SciTech Connect

    Archuleta, R.; Elgamal, A.; Heuze, F.; Lai, T.; Lavalle, D.; Lawrence, B.; Liu, P.C.; Matesic, L.; Park, S.; Riemar, M.; Steidl, J.; Vucetic, M.; Wagoner, J.; Yang, Z.

    2000-11-01

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling, geophysical

  2. Lessons on Seismic Hazard Estimation from the 2003 Bingol, Turkey Earthquake

    NASA Astrophysics Data System (ADS)

    Nalbant, S. S.; Steacy, S.; McCloskey, J.

    2003-12-01

    In a 2002 paper, the stress state along the East Anatolian Fault Zone (EAFZ) was estimated by adding long-term tectonic loading to the static stressing effect of a series of large historical earthquakes. The results clearly indicated two areas of particular concern: the first extended along the EAFZ between the cities of Kahraman Maras and Malatya, and the second along the trend of the EAFZ between the cities of Elazig and Bingol. The Bingol (M6.4, 1 May 2003) earthquake occurred within this second area with a focal mechanism consistent with left-lateral rupture of a buried segment of the EAFZ, prompting suggestions that this represented a success for the idea of using Coulomb stress modelling to assess seismic hazard. This success, however, depended on confirmation of the orientation of the earthquake fault; in the event, and in the absence of surface ruptures, aftershock distributions unambiguously showed that the event was a right-lateral failure on an unmapped structure conjugate to the EAFZ. The Bingol earthquake was, therefore, not promoted by the stress field modelled in the 2002 study. Here we reflect on the lessons learned from this case. We identify three possible reasons for the discrepancy between the calculations and the occurrence of the Bingol earthquake. Firstly, historical earthquakes used in the 2002 study may have been incorrectly modelled in either size or location. Secondly, earthquakes not included in the study, owing to either their size or occurrence time, may have had a significant effect on the stress field. Or, finally, the secular stress used to load the faults was inappropriate. We argue that an integrated seismic hazard program has the best chance of success through a combination of historical seismology guided and constrained by structural geology, directed paleoseismology, and stress modelling informed by detailed GPS data.

  3. A Hierarchical Bayesian Approach for Earthquake Location and Data Uncertainty Estimation in 3D Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Custodio, S.

    2014-12-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Yet those uncertainties are not always known precisely, and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth, and origin time) are typically inverted for using seismic phase arrival times, but quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters but also the P- and S-wave arrival time uncertainties are inverted for, allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver that can take interfaces into account, so direct, reflected, and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.
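    A minimal sketch of the hierarchical idea follows: the hypocenter and the arrival-time noise level are sampled jointly. A uniform-velocity, straight-ray travel time stands in for the study's FMM eikonal solver, so this illustrates the sampling scheme only, not the 3D forward model.

```python
# Hierarchical Bayesian location sketch: Metropolis sampling over
# (x, y, origin time, log sigma), with sigma the arrival-time noise level.
import numpy as np

rng = np.random.default_rng(1)
v = 6.0  # km/s, assumed uniform P velocity (stand-in for the FMM solver)
sta = rng.uniform(-50, 50, (8, 2))  # 8 stations, x/y in km
true_src, true_t0, true_sig = np.array([5.0, -3.0]), 2.0, 0.15
tt = true_t0 + np.linalg.norm(sta - true_src, axis=1) / v
obs = tt + rng.normal(0, true_sig, tt.size)  # synthetic picks

def log_like(src, t0, sig):
    """Gaussian log-likelihood; sigma itself is a sampled parameter."""
    pred = t0 + np.linalg.norm(sta - src, axis=1) / v
    r = obs - pred
    return -obs.size * np.log(sig) - 0.5 * np.sum(r ** 2) / sig ** 2

x = np.array([0.0, 0.0, 0.0, np.log(1.0)])  # start: origin, t0=0, sigma=1
ll = log_like(x[:2], x[2], np.exp(x[3]))
samples = []
for _ in range(20000):
    prop = x + rng.normal(0, [1.0, 1.0, 0.2, 0.2])
    llp = log_like(prop[:2], prop[2], np.exp(prop[3]))
    if np.log(rng.uniform()) < llp - ll:  # Metropolis accept/reject
        x, ll = prop, llp
    samples.append(x.copy())
post = np.array(samples[5000:])          # discard burn-in
sig_hat = np.exp(post[:, 3]).mean()      # posterior-mean noise level
```

    The posterior over sigma is what replaces the arbitrary quality-factor interpretation criticized in the abstract.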

  4. Toward reliable automated estimates of earthquake source properties from body wave spectra

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda

    2016-06-01

    We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize nongeneric source effects such as directivity and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P and S waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.
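    One plausible reading of the corner-frequency step is a grid-search fit of an omega-square (Brune-style) model to the target/EGF spectral ratio; the automated windowing and EGF stacking stages of the actual procedure are not reproduced here.

```python
# Sketch: estimate a target event's corner frequency from a spectral ratio
# (target / stacked EGF) by grid search over an omega-square model.
import numpy as np

def brune_ratio(f, omega0, fc_target, fc_egf):
    """Ratio of two omega-square source spectra (target over EGF stack)."""
    return omega0 * (1 + (f / fc_egf) ** 2) / (1 + (f / fc_target) ** 2)

def fit_corner(f, ratio, fc_egf, fc_grid=None):
    """Grid-search fc of the target; omega0 solved by least squares per fc."""
    if fc_grid is None:
        fc_grid = np.logspace(-1, 1.5, 200)
    best = (np.inf, None)
    for fc in fc_grid:
        shape = (1 + (f / fc_egf) ** 2) / (1 + (f / fc) ** 2)
        omega0 = np.sum(ratio * shape) / np.sum(shape ** 2)
        misfit = np.sum((ratio - omega0 * shape) ** 2)
        if misfit < best[0]:
            best = (misfit, fc)
    return best[1]

f = np.linspace(0.5, 40, 400)                     # frequency band, Hz
synthetic = brune_ratio(f, 100.0, 2.0, 20.0)      # true target fc = 2 Hz
fc_est = fit_corner(f, synthetic, fc_egf=20.0)
```

    From the fitted corner frequency, stress drop and related source properties follow via standard scaling relations.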

  5. A hierarchical Bayesian approach for earthquake location and data uncertainty estimation in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Arroucau, Pierre; Custódio, Susana

    2015-04-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Yet those uncertainties are not always known precisely, and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth, and origin time) are typically inverted for using seismic phase arrival times, but quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters but also the P- and S-wave arrival time uncertainties are inverted for, allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver that can take interfaces into account, so direct, reflected, and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.

  6. Re-estimating the epicenter of the 1927 Jericho earthquake using spatial distribution of intensity data

    NASA Astrophysics Data System (ADS)

    Zohar, Motti; Marco, Shmuel

    2012-07-01

    We present a new approach for re-estimating the epicenter of a historical earthquake using the spatial distribution of intensity data. We use macroseismic data related to the 1927 Jericho earthquake, since this is the first strong earthquake in the region recorded by modern seismographs that is also well documented by historical evidence and reports. The epicenter is located in two sequential steps: (1) correction of previously evaluated seismic intensities in accordance with local site attributes: construction quality, topographic slope, groundwater level, and surface geology; and (2) spatial correlation of these intensities with a logarithmic variant of the epicentral distance. The resulting location (approximately 35.5°/31.8°) is consistent with the seismogram-based locations calculated by Avni et al. (2002) and by Ben Menahem et al. (1976), within a spatial error of 50 km. The proposed method, based mainly upon spatial analysis of intensity data, offers an approach complementary to the earlier ones.
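    Step (2) can be sketched as a grid search for the epicenter that maximizes the anticorrelation between site intensities and the logarithm of epicentral distance. Coordinates and intensities below are synthetic stand-ins, not the 1927 dataset.

```python
# Sketch: locate an epicenter by maximizing the correlation between observed
# intensities and -log10(epicentral distance) over a trial-epicenter grid.
import numpy as np

rng = np.random.default_rng(2)
true_epi = np.array([35.5, 31.8])                    # lon, lat (degrees)
sites = true_epi + rng.uniform(-1.5, 1.5, (60, 2))   # synthetic site locations
d = np.linalg.norm(sites - true_epi, axis=1) + 0.05  # distance, degrees
inten = 9.0 - 4.0 * np.log10(d * 111.0) + rng.normal(0, 0.5, d.size)

def best_epicenter(sites, inten, lon_grid, lat_grid):
    best = (-np.inf, None)
    for lon in lon_grid:
        for lat in lat_grid:
            r = np.linalg.norm(sites - [lon, lat], axis=1) + 0.05
            c = -np.corrcoef(inten, np.log10(r))[0, 1]  # strong decay wanted
            if c > best[0]:
                best = (c, (lon, lat))
    return best[1]

lon_g = np.arange(34.5, 36.5, 0.1)
lat_g = np.arange(30.8, 32.8, 0.1)
epi = best_epicenter(sites, inten, lon_g, lat_g)
```

    The site-attribute corrections of step (1) would be applied to `inten` before the search.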

  7. Earthquake.

    PubMed

    Cowen, A R; Denney, J P

    1994-04-01

    On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439

  8. A spatially explicit estimate of avoided forest loss.

    PubMed

    Honey-Rosés, Jordi; Baylis, Kathy; Ramírez, M Isabel

    2011-10-01

    With the potential expansion of forest conservation programs spurred by climate-change agreements, there is a need to measure the extent to which such programs achieve their intended results. Conventional methods for evaluating conservation impact tend to be biased because they do not compare like areas or account for spatial relations. We assessed the effect of a conservation initiative that combined designation of protected areas with payments for environmental services to conserve overwintering habitat for the monarch butterfly (Danaus plexippus) in Mexico. To do so, we used a spatial-matching estimator that matches covariates among polygons and their neighbors. We measured avoided forest loss (avoided disturbance and deforestation) by comparing forest cover on protected and unprotected lands that were similar in terms of accessibility, governance, and forest type. Whereas conventional estimates of avoided forest loss suggest that conservation initiatives did not protect forest cover, we found evidence that the conservation measures are preserving forest cover. We found that the conservation measures protected between 200 ha and 710 ha (3-16%) of forest that is high-quality habitat for monarch butterflies, but had a smaller effect on total forest cover, preserving between 0 ha and 200 ha (0-2.5%) of forest with canopy cover >70%. We suggest that future estimates of avoided forest loss be analyzed spatially to account for how forest loss occurs across the landscape. Given the forthcoming demand from donors and carbon financiers for estimates of avoided forest loss, we anticipate our methods and results will contribute to future studies that estimate the outcome of conservation efforts. PMID:21902720

  9. Strong Earthquake Motion Estimates for the UCSB Campus, and Related Response of the Engineering 1 Building

    SciTech Connect

    Archuleta, R.; Bonilla, F.; Doroudian, M.; Elgamal, A.; Heuze, F.

    2000-06-06

    This is the second report on the UC/CLC Campus Earthquake Program (CEP), concerning the estimation of exposure of the U.C. Santa Barbara campus to strong earthquake motions (Phase 2 study). The main results of Phase 1 are summarized in the current report. This document describes the studies which resulted in site-specific strong motion estimates for the Engineering I site, and discusses the potential impact of these motions on the building. The main elements of Phase 2 are: (1) determining that a M 6.8 earthquake on the North Channel-Pitas Point (NCPP) fault is the largest threat to the campus. Its recurrence interval is estimated at 350 to 525 years; (2) recording earthquakes from that fault on March 23, 1998 (M 3.2) and May 14, 1999 (M 3.2) at the new UCSB seismic station; (3) using these recordings as empirical Green's functions (EGF) in scenario earthquake simulations which provided strong motion estimates (seismic syntheses) at a depth of 74 m under the Engineering I site; 240 such simulations were performed, each with the same seismic moment, but giving a broad range of motions that were analyzed for their mean and standard deviation; (4) laboratory testing, at U.C. Berkeley and U.C. Los Angeles, of soil samples obtained from drilling at the UCSB station site, to determine their response to earthquake-type loading; (5) performing nonlinear soil dynamic calculations, using the soil properties determined in-situ and in the laboratory, to calculate the surface strong motions resulting from the seismic syntheses at depth; (6) comparing these CEP-generated strong motion estimates to acceleration spectra based on the application of state-of-practice methods - the IBC 2000 code, UBC 97 code and Probabilistic Seismic Hazard Analysis (PSHA), this comparison will be used to formulate design-basis spectra for future buildings and retrofits at UCSB; and (7) comparing the response of the Engineering I building to the CEP ground motion estimates and to the design

  10. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body‐wave onset and the arrival time of the peak high‐frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high‐frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high‐frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower‐frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high‐frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
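    The proposed scaling Mw ∝ 2 log Top can be written down directly; the constant offset c below is a placeholder to be calibrated against a regional dataset, not a value from the paper.

```python
# Sketch of the Top-based magnitude proxy: Mw ~ 2*log10(Top) + c.
import numpy as np

def m_top(top_seconds, c=5.0):
    """Magnitude proxy from onset-to-peak time Top; offset c is hypothetical."""
    return 2.0 * np.log10(top_seconds) + c

# Under this scaling, a tenfold increase in Top adds two magnitude units:
m1, m2 = m_top(3.0), m_top(30.0)
```

    The factor of 2 encodes the assumption that Top tracks source dimension under scale-invariant stress drop.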

  11. Estimating Phosphorus Loss in Runoff from Manure and Fertilizer for a Phosphorus Loss Quantification Tool

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-point source pollution of fresh waters by phosphorus (P) is a concern because it contributes to accelerated eutrophication. Qualitative P Indexes that estimate the risk of field-scale P loss have been developed in the USA and Europe. However, given the state of the science concerning agricultura...

  12. Estimation of postfire nutrient loss in the Florida everglades.

    PubMed

    Qian, Y; Miao, S L; Gu, B; Li, Y C

    2009-01-01

    Postfire nutrient release into the ecosystem via plant ash is critical to understanding fire impacts on the environment. The factors determining a postfire nutrient budget are the prefire nutrient content of the combustible biomass, the burn temperature, and the amount of combustible biomass. Our objective was to quantitatively describe the relationships between nutrient losses (or concentrations in ash) and burning temperature in laboratory-controlled combustion, and to further predict nutrient losses in field fires by applying predictive models established from the laboratory data. The percentage losses of total nitrogen (TN), total carbon (TC), and material mass showed a significant linear correlation with a slope close to 1, indicating that TN or TC loss occurred predominantly through volatilization during combustion. Data obtained in the laboratory experiments suggest that the losses of TN and TC, as well as the ratio of ash total phosphorus (TP) concentration to leaf TP concentration, have strong relationships with burning temperature, and these relationships can be quantitatively described by nonlinear equations. The potential use of these nonlinear models, relating nutrient loss (or concentration) to temperature, to predict nutrient concentrations in field ash appears promising. During a prescribed fire in the northern Everglades, 73.1% of TP was estimated to be retained in ash while 26.9% was lost to the atmosphere, agreeing well with the distribution of TP during previously reported wildfires. The use of predictive models would greatly reduce the cost associated with measuring field ash nutrient concentrations. PMID:19643746

  13. Estimates of the magnitude of aseismic slip associated with small earthquakes near San Juan Bautista, CA

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Simons, M.

    2013-12-01

    The recurrence intervals of repeating earthquakes raise the possibility that much of the slip associated with small earthquakes is aseismic. To test this hypothesis, we examine the co- and post-seismic strain changes associated with Mc 2 to 4 earthquakes on the San Andreas Fault. We consider several thousand events that occurred near USGS strainmeter SJT, at the northern end of the creeping section. Most of the strain changes associated with these events are below the noise level on a single record, so we bin the earthquakes into 3 to 5 groups according to their magnitude. We then invert for an average time history of strain per seismic moment for each group. The seismic moment M0 is assumed to scale as 10^(β Mc), where Mc is the preferred magnitude in the NCSN catalog, and β is between 1.1 and 1.6. We try several approaches to account for the spatial pattern of strain, but we focus on the ε(E-N) strain component (east extension minus north extension) because it is the most robust to model. Each of the estimated strain time series displays a step at the time of the earthquakes. The ratio of the strain step to seismic moment is larger for the bin with smaller events. If we assume that M0 ~ 10^(1.5 Mc), the ratio increases by a factor of 3 to 5 per unit decrease in Mc. This increase in strain per moment would imply that most of the slip within an hour of small events is aseismic. For instance, the aseismic moment of a Mc 2 earthquake would be at least 5 to 10 times the seismic moment. However, much of the variation in strain per seismic moment is eliminated for a smaller but still plausible value of β. If M0 ~ 10^(1.2 Mc), the strain per moment increases by about a factor of 2 per unit decrease in Mc.

  14. Dose estimates in a loss of lead shielding truck accident.

    SciTech Connect

    Dennis, Matthew L.; Osborn, Douglas M.; Weiner, Ruth F.; Heames, Terence John

    2009-08-01

    The radiological transportation risk and consequence program RADTRAN has recently added an updated loss of lead shielding (LOS) model in its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates to first responders during a spent nuclear fuel transportation accident. Results varied according to the type of accident scenario, the percent of lead slump, the distance to the shipment, and the time spent in the area. This document presents a method of creating dose estimates for first responders using RADTRAN with potential accident scenarios, which may be of particular interest in the event of high-speed accidents or fires involving cask punctures.

  15. Comparison of models for piping transmission loss estimations

    NASA Astrophysics Data System (ADS)

    Catron, Fred W.; Mann, J. Adin

    2005-09-01

    A frequency-dependent model for the transmission loss of piping is important for accurate estimates of the external radiation from pipes and of the vibration level of the pipe walls. A statistical energy analysis model is used to predict the transmission loss of piping. Key terms in the model are the modal density and the radiation efficiency of the piping wall. Several available models for each are compared against measured data. In low-frequency octave bands, where the modal density is low and the number of modes is small, the transmission loss model is augmented with a mass-law model. The different models and a comparison among them will be presented.

  16. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

    We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources, that is, if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can, to some extent, be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur on similar wavelengths to the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components of a single station, which see the same apparent source time functions.
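    A toy version of the inter-station phase coherence computation is sketched below. For an identical, co-located point-source pair seen through different station "Green's functions", the coherence stays at 1 across all frequencies; differing source time functions or locations would pull it down at high frequency.

```python
# Sketch: inter-station phase coherence for an event pair, following the
# general idea of averaging cross-spectrum phasors over stations.
import numpy as np

def phase_coherence(recs_a, recs_b):
    """|station average of exp(i*dphi(f))| for an event pair.

    recs_a, recs_b: (n_stations, n_samples) aligned waveforms of two events.
    """
    A = np.fft.rfft(recs_a, axis=1)
    B = np.fft.rfft(recs_b, axis=1)
    dphi = np.angle(A * np.conj(B))      # cross-spectrum phase, per station
    return np.abs(np.mean(np.exp(1j * dphi), axis=0))

# Toy case: one source wavelet seen through different station path effects.
rng = np.random.default_rng(3)
n_sta, n = 12, 256
g = rng.normal(size=(n_sta, n))                        # per-station "Green's functions"
src = np.exp(-0.5 * ((np.arange(n) - 40) / 3.0) ** 2)  # shared source wavelet
ev1 = np.array([np.convolve(gi, src, mode="same") for gi in g])
ev2 = ev1.copy()  # a co-located point-source pair: coherence is 1 at all f
coh = phase_coherence(ev1, ev2)
```

    In the method above, the frequency at which coherence drops for real event pairs is what constrains the rupture dimension.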

  17. Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Nyst, M.

    2013-12-01

We present a risk management perspective on earthquake recurrence on mature faults and the ways it can be modeled. The specificities of risk management relative to probabilistic seismic hazard assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence; the fact that losses are needed at all return periods, not only at discrete values of the return period; and the set-up of financial models that sometimes require modeling realizations of the order in which events may occur (i.e., simulated event dates matter, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against the properties of three probability density functions (PDFs) widely used to characterize inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that, at the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs, with weights given by the posterior distribution. Finally, we propose to the community: 1. a general debate on how best to incorporate our knowledge (e.g., from geology and geomorphology) of plausible models and model parameters while preserving the information on what we do not know; and 2. the creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, fluid characteristics (pH, compressibility, pressure), fault geometry, and other relevant properties, so that we can monitor whether trends emerge in terms of which model dominates under which conditions.
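A Bayesian combination of renewal models can be sketched as a weighted mixture of inter-event time PDFs. The weights and parameters below are illustrative placeholders (in the abstract they would come from the posterior distribution over models); the three families shown (lognormal, Brownian Passage Time as an inverse Gaussian, Weibull) are the PDFs commonly used in time-dependent recurrence modeling.

```python
from scipy import stats

T = 250.0     # assumed mean recurrence interval, years (illustrative)
alpha = 0.5   # assumed aperiodicity / coefficient of variation

# (posterior weight, frozen distribution) pairs; weights sum to 1
models = [
    (0.5, stats.lognorm(s=alpha, scale=T)),                  # lognormal
    (0.3, stats.invgauss(mu=alpha**2, scale=T / alpha**2)),  # BPT (inverse Gaussian)
    (0.2, stats.weibull_min(c=2.0, scale=T)),                # Weibull
]

def mixture_sf(t):
    """Survival function of the mixture: P(inter-event time > t)."""
    return sum(w * m.sf(t) for w, m in models)

def cond_prob(t_elapsed, dt):
    """P(event within the next dt yr | t_elapsed yr since the last)."""
    s = mixture_sf(t_elapsed)
    return (s - mixture_sf(t_elapsed + dt)) / s

print(cond_prob(300.0, 50.0))
```

Sampling event dates from such a mixture is what allows the simulated occurrence orderings that financial loss models require.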

  18. Teleseismic estimates of radiated seismic energy: The E/M 0 discriminant for tsunami earthquakes

    NASA Astrophysics Data System (ADS)

    Newman, Andrew V.; Okal, Emile A.

    1998-11-01

We adapt the formalism of Boatwright and Choy for the computation of radiated seismic energy from broadband records at teleseismic distances to the real-time situation in which neither the depth nor the focal geometry of the source is known accurately. The analysis of a large data set of more than 500 records from 52 large, recent earthquakes shows that this procedure yields values of the estimated energy, EE, in good agreement with values computed from available source parameters, for example as published by the National Earthquake Information Center (NEIC), the average logarithmic residual being only 0.26 units. We analyze the energy-to-moment ratio by defining Θ = log10(EE/M0). For regular earthquakes, this parameter agrees well with values expected from theoretical models and from the worldwide NEIC catalogue. There is a one-to-one correspondence between values of Θ deficient by one full unit or more and the so-called "tsunami earthquakes", previously identified in the literature as having exceedingly slow sources, believed to be due to the presence of sedimentary structures in the fault zone. Our formalism can be applied to single-station measurements, and its coupling to automated real-time measurements of the seismic moment using the mantle magnitude Mm should significantly improve real-time tsunami warning.
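The discriminant itself is a one-line computation. The numerical values below are illustrative, not measurements from the paper's data set; the flagging rule follows the abstract's "deficient by one full unit or more" criterion.

```python
import math

def slowness_theta(energy_j, moment_nm):
    """Theta = log10(E/M0), the energy-to-moment discriminant
    (energy in J, seismic moment in N*m)."""
    return math.log10(energy_j / moment_nm)

# Regular earthquakes cluster near a theoretical Theta of about -4.9;
# the sample numbers here are made up for illustration.
regular = slowness_theta(2.0e13, 1.6e18)   # ordinary shallow event
slow = slowness_theta(2.0e12, 1.6e19)      # exceedingly slow source
print(regular, slow)

is_tsunami_candidate = slow <= regular - 1.0   # deficient by >= 1 unit
print(is_tsunami_candidate)
```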

  19. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and the moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and the qualitative location uncertainties used heretofore. © 2006 The Authors, Journal compilation © 2006 RAS.

  20. Rupture process of the 1946 Nankai earthquake estimated using seismic waveforms and geodetic data

    NASA Astrophysics Data System (ADS)

    Murotani, Satoko; Shimazaki, Kunihiko; Koketsu, Kazuki

    2015-08-01

The rupture process of the 1946 Nankai earthquake (MJMA 8.0) was estimated using seismic waveforms from teleseismic and strong motion stations together with geodetic data from leveling surveys and tide gauges. The results of the joint inversion analysis showed that the two areas of large slip are more confined than in previous studies. In our inversion, we assigned spatially varying strike and dip angles and depths to each subfault by fitting them to the actual complex shape of the upper surface of the Philippine Sea plate in the Nankai Trough region. As a result, we obtained a total seismic moment M0 = 5.5 × 10^21 Nm, a moment magnitude Mw = 8.4, and a maximum slip of 5.1 m, occurring at a point south of Cape Muroto. The estimated slip distribution on the west side of the fault plane appears somewhat complicated, but it explains the vertical deformations at Tosashimizu and in the vicinity of Inomisaki well. It has been argued that the westernmost part slipped slowly over days or months after the earthquake as afterslip, because the seismic waveforms can be largely explained without slip in this part. However, in order to explain the displacement recorded by the tide gauge at Tosashimizu, we conclude that the westernmost part slipped simultaneously with the earthquake. Splay faulting, which was suggested in previous studies, is not required in our model to explain the seismic waveforms and geodetic data.
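The conversion from the reported seismic moment to moment magnitude follows the standard Hanks-Kanamori relation; a quick check reproduces the abstract's Mw from its M0.

```python
import math

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude from seismic moment in N*m:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# The abstract's M0 = 5.5e21 N*m reproduces its reported Mw:
print(round(moment_magnitude(5.5e21), 1))  # -> 8.4
```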

  1. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    NASA Astrophysics Data System (ADS)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula and Shikoku by borehole strainmeters carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, estimated from only a few minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic wave data. The moment magnitude can be estimated about 6 minutes after the origin time, i.e., 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. By contrast, the prompt seismic-wave-based magnitude that the Japan Meteorological Agency announced just after the earthquake occurred was 7.9. Coseismic strain steps are generally considered less reliable than seismic waves and GPS data. However, our results show that coseismic strain steps observed by carefully installed and monitored borehole strainmeters are reliable enough to determine the earthquake magnitude precisely and rapidly. Several methods are currently being proposed to determine the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunamis. Our simple method using strain steps is a strong candidate for rapid estimation of the magnitude of great earthquakes.

  2. Estimating the economic loss of recent North Atlantic fisheries management

    NASA Astrophysics Data System (ADS)

    Merino, Gorka; Barange, Manuel; Fernandes, Jose A.; Mullon, Christian; Cheung, William; Trenkel, Verena; Lam, Vicky

    2014-12-01

It is accepted that the world's fisheries are generally exploited at neither their biological nor their economic optimum. Most fisheries assessments focus on the biological capacity of fish stocks to respond to harvesting, and few have attempted to estimate the economic efficiency with which ecosystems are exploited. The latter is important, as fisheries contribute considerably to the economic development of many coastal communities. Here we estimate the overall potential economic rent for the fishing industry in the North Atlantic at B€ 12.85, compared to current estimated profits of B€ 0.63. The difference between the potential and the net profits obtained from North Atlantic fisheries is therefore B€ 12.22. In order to raise the profits of North Atlantic fisheries to their maximum, total fish biomass would have to be rebuilt to 108 Mt (2.4 times the present level) by reducing current total fishing effort by 53%. Stochastic simulations were undertaken to estimate the uncertainty associated with the aggregate bioeconomic model that we use, and we estimate the economic loss of North Atlantic fisheries to lie between B€ 2.5 and B€ 32. We provide economic justification for maintaining or restoring fish stocks to above their MSY biomass levels. Our conclusions are consistent with similar global-scale studies.
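The notion of forgone economic rent can be illustrated with the classic Gordon-Schaefer surplus-production model, which is not necessarily the authors' aggregate model; all parameter values below are made up for illustration.

```python
import numpy as np

# Gordon-Schaefer sketch: equilibrium biomass B(E) = K*(1 - q*E/r),
# harvest H = q*E*B, economic rent = p*H - c*E. Illustrative parameters:
r, K = 0.4, 100.0         # intrinsic growth (1/yr), carrying capacity (Mt)
q, p, c = 0.01, 1.5, 0.5  # catchability, price, cost per unit effort

def rent(E):
    B = K * (1.0 - q * E / r)
    return p * q * E * B - c * E

effort = np.linspace(0.0, r / q, 2001)
E_mey = effort[np.argmax(rent(effort))]      # maximum-economic-yield effort
print(f"rent-maximizing effort ~ {E_mey:.2f}, rent ~ {rent(E_mey):.2f}")
# Analytically, E_MEY = (r / (2*q)) * (1 - c / (p*q*K)).
```

The gap between rent at current effort and rent at E_MEY is the model's analogue of the "economic loss" the abstract quantifies for the North Atlantic.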

  3. Comparison of three tests for estimating gastroenteral protein loss

    SciTech Connect

    Glaubitti, D.; Marx, M.; Weller, H.

    1984-01-01

A decisive step in the diagnosis of exudative gastroenteropathy, which shows a pathologically increased transfer of plasma proteins into the stomach or intestine, is the measurement of fecal radioactivity after intravenous administration of radionuclide-labeled large organic compounds or of small inorganic compounds that attach themselves to plasma proteins within the patient. In 24 patients (12 men and 12 women) aged 40 to 66 years, gastroenteral protein loss was estimated after intravenous injection of Cr-51 chloride, Cr-51 human serum albumin, or Fe-59 iron dextran. Each test lasted 6 days, with an interval of 2 weeks between tests. The feces were collected completely within the test period for determination of radioactivity. External probe counting over the liver, spleen, right kidney, and thyroid was performed daily for up to 10 days. The results obtained with Cr-51 chloride presented the largest range, whereas the test with Fe-59 iron dextran exhibited both the smallest deviation from the mean value and the lowest normal range. During the tests for gastroenteral protein loss, external probe counting demonstrated no distinct tendency toward a more rapid radionuclide loss from the liver, spleen, and kidney in the patients suffering from exudative gastroenteropathy when compared with healthy subjects. The authors conclude that the most suitable test for estimating gastroenteral protein loss is the Fe-59 iron dextran test, although Fe-59 iron dextran is not available commercially and causes a higher radiation burden than the other tests do. As a second choice, the Cr-51 chloride test should be used; its radiopharmaceutical is less expensive and has no significant disadvantage in comparison with Cr-51 human serum albumin.

  4. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault, conducted by Bay Area County Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  5. Energy Losses Estimation During Pulsed-Laser Seam Welding

    NASA Astrophysics Data System (ADS)

    Sebestova, Hana; Havelkova, Martina; Chmelickova, Hana

    2014-06-01

The finite-element tool SYSWELD (ESI Group, Paris, France) was adapted to simulate pulsed-laser seam welding. Besides the temperature field distribution, one possible output of the welding simulation is the amount of absorbed power necessary to melt the required material volume, including energy losses. By comparing the absorbed or melting energy with the applied laser energy, welding efficiencies can be calculated. This article presents the results of welding efficiency estimation based on the assimilation of both experimental and simulation output data for pulsed Nd:YAG laser bead-on-plate welding of 0.6-mm-thick AISI 304 stainless steel sheets using different beam powers.

  6. Estimating the Loss of Crew and Loss of Mission for Crew Spacecraft

    NASA Technical Reports Server (NTRS)

    Lutomski, Michael G.

    2011-01-01

Once the US Space Shuttle retires in 2011, the Russian Soyuz launcher and Soyuz spacecraft will comprise the only means for crew transportation to and from the International Space Station (ISS). The U.S. Government and NASA have contracted with Russia for crew transportation services to the ISS. The resulting implications for the US space program, including issues such as astronaut safety, must be carefully considered. Are the astronauts and cosmonauts safer on the Soyuz than on the Space Shuttle system? Is the Soyuz launch system more robust than the Space Shuttle? The Soyuz launcher has been in operation for over 40 years, with only two loss-of-life incidents and two loss-of-mission incidents. Given that the most recent incident took place in 1983, how do we determine the current reliability of the system? Do failures of unmanned Soyuz rockets affect the reliability of the currently operational man-rated launcher? Does the Soyuz exhibit characteristics that demonstrate reliability growth, and how would that be reflected in future estimates of success? NASA's next manned rocket and spacecraft development project will have to meet the Agency threshold requirements set forth by NASA. The reliability targets are currently several times higher than those of the Shuttle and possibly even the Soyuz. Can these targets be compared to the reliability of the Soyuz to determine whether they are realistic and achievable? To help answer these questions, this paper explores how to estimate the reliability of the Soyuz launcher/spacecraft system, compares it to the Space Shuttle, and considers the potential impacts for the future of manned spaceflight. Specifically, it looks at estimating the Loss of Crew (LOC) and Loss of Mission (LOM) probabilities using historical data, reliability growth, and Probabilistic Risk Assessment techniques.
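One common way to estimate a per-mission failure probability from sparse historical data is a Bayesian beta-binomial update. This is a generic sketch of that technique, not the paper's method, and the flight counts used are illustrative stand-ins, not an authoritative Soyuz record.

```python
from scipy import stats

def loc_posterior(failures, missions, a=0.5, b=0.5):
    """Posterior Beta distribution for the per-mission loss-of-crew
    probability, assuming a binomial likelihood and a Jeffreys
    Beta(0.5, 0.5) prior."""
    return stats.beta(a + failures, b + missions - failures)

# Illustrative counts only: 2 loss-of-life incidents in ~140 crewed flights
post = loc_posterior(failures=2, missions=140)
print(post.mean())           # posterior mean LOC probability
print(post.interval(0.9))    # 90% credible interval
```

Note that this simple model ignores reliability growth (early failures weigh as much as recent successes), which is exactly the limitation the abstract's reliability-growth analysis addresses.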

  7. Coseismic landsliding estimates for an Alpine Fault earthquake and the consequences for erosion of the Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Davies, T. R. H.; Wilson, T. M.; Orchiston, C.

    2016-06-01

Landsliding resulting from large earthquakes in mountainous terrain presents a substantial hazard and plays an important role in the evolution of mountain ranges. However, estimating the scale and effect of landsliding from an individual earthquake prior to its occurrence is difficult. This study presents first-order estimates of the scale and effects of coseismic landsliding resulting from a plate boundary earthquake in the South Island of New Zealand. We model an Mw 8.0 earthquake on the Alpine Fault, which has produced large (M 7.8-8.2) earthquakes every 329 ± 68 years over the last 8 ka, with the last earthquake ~300 years ago. We suggest that such an earthquake could produce ~50,000 ± 20,000 landslides at average densities of 2-9 landslides km^-2 in the area of most intense landsliding. Between 50% and 90% are expected to occur in a 7000 km^2 zone between the fault and the main divide of the Southern Alps. Total landslide volume is estimated to be 0.81 +0.87/-0.55 km^3. In major northern and southern river catchments, total landslide volume is equivalent to up to a century of present-day aseismic denudation measured from suspended sediment yields. This suggests that earthquakes occurring at century timescales are a major driver of erosion in these regions. In the central Southern Alps, coseismic denudation is equivalent to less than a decade of aseismic denudation, suggesting that precipitation and uplift dominate denudation processes. Nevertheless, the estimated scale of coseismic landsliding is considered a substantial hazard throughout the entire Southern Alps and is likely to present a substantial issue for post-earthquake response and recovery.

  8. THE MISSING EARTHQUAKES OF HUMBOLDT COUNTY: RECONCILING RECURRENCE INTERVAL ESTIMATES, SOUTHERN CASCADIA SUBDUCTION ZONE

    NASA Astrophysics Data System (ADS)

    Patton, J. R.; Leroy, T. H.

    2009-12-01

Earthquake and tsunami hazard for northwestern California and southern Oregon is predominantly based on estimates of recurrence for earthquakes on the Cascadia subduction zone and on upper-plate thrust faults, each with unique deformation and recurrence histories. Coastal northern California is uniquely located to enable us to distinguish these different sources of seismic hazard, as the accretionary prism extends on land in this region, which experiences ground deformation from rupture of upper-plate thrust faults such as the Little Salmon fault. Most of this region is thought to be above the locked zone of the megathrust and so is subject to vertical deformation during the earthquake cycle. Secondary evidence of earthquake history is found here in the form of marsh soils that coseismically subside and are commonly overlain by estuarine mud and, rarely, tsunami sand. The source of the subsidence in this region is not currently known; it may be due to upper-plate rupture, megathrust rupture, or a combination of the two. Given that many earlier investigations utilized bulk peat for 14C age determinations and were largely reconnaissance work, these studies need to be reevaluated. Recurrence interval estimates are inconsistent between the terrestrial (~500 years) and marine (~220 years) data sets. This inconsistency may be due to (1) different sources of archival bias in the marine and terrestrial data sets and/or (2) different sources of deformation. Factors controlling the successful archiving of paleoseismic data are considered as they relate to geologic setting and how it might change through time. We compile, evaluate, and rank existing paleoseismic data in order to prioritize future paleoseismic investigations. 14C ages are recalibrated and quality assessments are made for each age determination. We then evaluate the geologic setting and prioritize important research locations and goals based on these existing data. Terrestrial core

  9. Estimation of Future Changes in Flood Disaster Losses

    NASA Astrophysics Data System (ADS)

    Konoshima, L.; Hirabayashi, Y.; Roobavannan, M.

    2012-12-01

Disaster losses can be estimated from hazard intensity, exposure, and vulnerability. Many studies have addressed future economic losses from river floods, most of them focused on Europe (Bouwer et al., 2010). Here, flood disaster losses are calculated using the output of multi-model ensembles of CMIP5 GCMs in order to estimate the changes in damage loss due to climate change. For the global distribution of expected future population and GDP, the ALPS scenario of RITE is used. A flood event is defined as river discharge with a 100-year return period. The time series of annual maximum daily discharge was fitted to a GEV distribution at each grid cell, with the parameters estimated by the L-moment method (Hosking and Wallis, 1997). For the probability distribution, both the Gumbel and the Generalized Extreme Value (GEV) distributions were tested to examine future changes of the 100-year value. Using the 100-year flood of the present condition and the annual maximum discharge for present and future climate conditions, the area exceeding the 100-year flood is calculated for each 30-year period. To estimate the economic impact of future changes in the occurrence of the 100-year flood, affected total GDP is calculated by multiplying the affected population by the country's GDP in areas exceeding the 100-year flood value of the present climate, for both present and future conditions. The 100-year flood value is fixed at the present-condition value when calculating the affected value under the future condition. To consider the effects of climatic conditions and changes in economic growth, the regions are classified by continent. Southeast Asia is divided into Japan and South Korea (No. 1) and other countries (No. 2), since GDP and GDP growth rates in the two areas are quite different compared to other regions. Figure 1 shows the average and standard deviation (1-sigma) of the future change ratio
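The extreme-value step above can be sketched for a single grid cell. For brevity this sketch fits the GEV by maximum likelihood via scipy rather than by the L-moment method the study uses, and the discharge series is synthetic.

```python
import numpy as np
from scipy import stats

# Synthetic 30-yr series of annual-maximum daily discharge (m^3/s);
# in the study this would be per-grid-cell GCM-driven model output.
annual_max = stats.genextreme.rvs(c=-0.1, loc=500.0, scale=100.0, size=30,
                                  random_state=np.random.default_rng(42))

# Fit a GEV and read off the 100-year value: the level exceeded with
# annual probability 1/100 (MLE here; the study uses L-moments after
# Hosking and Wallis, 1997).
shape, loc, scale = stats.genextreme.fit(annual_max)
q100 = stats.genextreme.isf(0.01, shape, loc, scale)
print(f"100-year discharge ~ {q100:.0f} m^3/s")

# A year counts as a 100-year flood when its annual maximum exceeds the
# present-climate q100; the threshold stays fixed when future-climate
# series are screened, as in the abstract.
n_events = int((annual_max > q100).sum())
print(n_events)
```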

  10. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values had been assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw 7.6. Assuming this magnitude is correct, a conclusion supported independently by documented rupture parameters and standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province is characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results showing that Lg attenuation in the Basin and Range province is comparable to that in California.

  11. Fast Estimate of Rupture Process of Large Earthquakes via Real Time Hi-net Data

    NASA Astrophysics Data System (ADS)

    Wang, D.; Kawakatsu, H.; Mori, J. J.

    2014-12-01

We developed a real-time system based on the Hi-net seismic array that can offer fast and reliable source information, for example source extent and rupture velocity, for earthquakes that occur at distances of roughly 30°-85° from the array center. We perform a continuous grid search on the Hi-net real-time data stream to identify possible source locations (following Nishida, Kawakatsu, and Obara, 2008). Earthquakes that occur outside the bright area of the array (30°-85° from the array center) are ignored. Once a large seismic event is successfully identified, back-projection is applied to trace the source propagation and energy radiation. Results from extended global GRiD-MT and real-time W-phase inversion are combined for better identification of large seismic events. The time required is mainly set by the travel time from the epicenter to the array stations, so we can obtain results within 6 to 13 minutes, depending on the epicentral distance. This system offers fast and robust estimates of earthquake source information, which will be useful for disaster mitigation, such as tsunami evacuation, emergency rescue, and aftershock hazard evaluation.

  12. Earthquake shaking hazard estimates and exposure changes in the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Petersen, Mark D.; Rukstales, Kenneth S.; Leith, William S.

    2015-01-01

A large portion of the population of the United States lives in areas vulnerable to earthquake hazards. This investigation aims to quantify the population and infrastructure exposure within the conterminous U.S. subjected to varying levels of earthquake ground motion by systematically analyzing the last four cycles of the U.S. Geological Survey's (USGS) National Seismic Hazard Models (published in 1996, 2002, 2008 and 2014). Using the 2013 LandScan data, we estimate the numbers of people who are exposed to potentially damaging ground motions (peak ground accelerations at or above 0.1g). At least 28 million (~9% of the total population) may experience 0.1g shaking at relatively frequent intervals (annual rate of 1 in 72 years, or 50% probability of exceedance (PE) in 50 years), 57 million (~18%) may experience this level of shaking at moderately frequent intervals (annual rate of 1 in 475 years, or 10% PE in 50 years), and 143 million (~46%) may experience such shaking at relatively infrequent intervals (annual rate of 1 in 2,475 years, or 2% PE in 50 years). We also show that a significant number of critical infrastructure facilities are located in high-earthquake-hazard areas (Modified Mercalli Intensity ≥ VII at the moderately frequent recurrence interval).
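The exposure calculation reduces to overlaying a hazard grid on a population grid and summing people in cells above a shaking threshold. The grids below are random toys standing in for the NSHM PGA and LandScan layers; the 0.1g threshold follows the abstract.

```python
import numpy as np

# Toy gridded layers; real inputs would be USGS NSHM PGA grids and
# LandScan population counts resampled onto a common grid.
rng = np.random.default_rng(1)
pga_g = rng.uniform(0.0, 0.6, size=(100, 100))    # PGA (g) at 10% PE in 50 yr
population = rng.integers(0, 5000, size=(100, 100))

def exposed_population(pga, pop, threshold_g=0.1):
    """Total population in cells at or above a damaging-shaking threshold."""
    return int(pop[pga >= threshold_g].sum())

exposed = exposed_population(pga_g, population)
total = int(population.sum())
print(f"{exposed} of {total} people exposed ({100.0 * exposed / total:.0f}%)")
```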

  13. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

The global CATDAT database of damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) was developed to validate, remove discrepancies from, and greatly expand upon existing global databases, and to better understand trends in vulnerability, exposure, and the possible future impacts of such historic earthquakes. In the authors' view, the lack of consistency and the errors in other frequently cited and used earthquake loss databases were major shortcomings that needed to be addressed. Over 17,000 sources of information have been utilized, primarily in the last few years, to present data from over 12,200 damaging historical earthquakes, with over 7,000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. Comparing the 1923 Great Kanto earthquake (214 billion USD damage in 2011 HNDECI-adjusted dollars) with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes shows the growing concern for economic loss in urban areas, as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to allow comparisons. This catalogue is the largest known cross-checked global database of historic damaging earthquakes and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  14. Deep Structure and Earthquake Generating Properties in the Yamasaki Fault Zone Estimated from Dense Seismic Observation

    NASA Astrophysics Data System (ADS)

    Nishigami, K.; Shibutani, T.; Katao, H.; Yamaguchi, S.; Mamada, Y.

    2010-12-01

We have been estimating the crustal heterogeneous structure and earthquake-generating properties in and around the Yamasaki fault zone, a left-lateral strike-slip active fault with a total length of about 80 km in southwest Japan. We deployed a dense seismic observation network composed of 32 stations with an average spacing of 5-10 km around the Yamasaki fault zone. We estimate detailed fault structure, such as fault dip and shape, segmentation, and the possible locations of asperities and the rupture initiation point, as well as the generating properties of earthquakes in the fault zone, through analyses of accurate hypocenter distributions, focal mechanisms, 3-D velocity tomography, coda wave inversion, and other waveform analyses. We also deployed a linear seismic array across the fault, composed of 20 stations with about 20 m spacing, in order to delineate the fault-zone structure in more detail using the seismic waves trapped inside the low-velocity zone, and we estimate the detailed resistivity structure at shallow depths of the fault zone by AMT (audio-frequency magnetotelluric) and MT surveys. In the scattering analysis of coda waves, we used 2,391 wave traces from 121 earthquakes that occurred in 2002, 2003, 2008 and 2009, recorded at 60 stations, including dense temporary and routine stations, and estimated the 3-D distribution of relative scattering coefficients along the Yamasaki fault zone. Microseismicity is high and the scattering coefficient is relatively large in the upper crust along the entire fault zone. The distribution of strong scatterers suggests that the Ohara and Hijima faults, the segments in the northwestern part of the Yamasaki fault zone, have almost vertical fault planes from the surface to a depth of about 15 km. We used seismic network data operated by universities, NIED, AIST, and JMA. This study has been carried out as a part of the project "Study on evaluation of earthquake source faults based on surveys of inland active faults" by Japan Nuclear

  15. Twitter as Information Source for Rapid Damage Estimation after Major Earthquakes

    NASA Astrophysics Data System (ADS)

    Eggert, Silke; Fohringer, Joachim

    2014-05-01

    Natural disasters like earthquakes require a fast response from local authorities. Well-trained rescue teams have to be available, equipment and technology have to be set up and ready, and information has to be directed to the right positions so the headquarters can manage the operation precisely. The main goal is to reach the most affected areas in a minimum of time. But even with the best preparation for these cases, there will always be uncertainty about what really happened in the affected area. Modern geophysical sensor networks provide high-quality data. These measurements, however, only map disjoint values at their respective locations for a limited number of parameters. Using observations of witnesses is one approach to complement measured values from sensors ("humans as sensors"). These observations are increasingly disseminated via social media platforms. These "social sensors" offer several advantages over common sensors, e.g. high mobility, high versatility of captured parameters, and rapid distribution of information. Moreover, the amount of data offered by social media platforms is quite extensive. We analyze messages distributed via Twitter after major earthquakes to get rapid information on what eyewitnesses report from the epicentral area. We use this information to (a) quickly learn about damage and losses to support fast disaster response and to (b) densify geophysical networks in areas of sparse information to gain more detailed insight into felt intensities. We present a case study from the Mw 7.1 Philippines (Bohol) earthquake of Oct. 15, 2013. We extract Twitter messages, so-called tweets, containing one or more specified keywords from the semantic field of "earthquake" and use them for further analysis. For the time frame of Oct. 15 to Oct. 18 we get a database of 50,000 tweets in total, of which 2,900 are geo-localized and 470 have a photo attached. 
Analyses for both national level and locally for
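    The keyword-selection step described above can be sketched in a few lines. This is purely illustrative: the keyword list, tweet structure, and sample messages below are invented, not taken from the study.

```python
# Hypothetical sketch of keyword filtering and geo-localization counting.
# Keywords and example tweets are invented for illustration.
KEYWORDS = {"earthquake", "quake", "lindol"}  # "lindol": Filipino for earthquake

tweets = [
    {"text": "Strong earthquake felt in Cebu!", "geo": (10.3, 123.9)},
    {"text": "Lindol! Everyone ran outside.", "geo": None},
    {"text": "Nice lunch by the harbor.", "geo": (14.6, 121.0)},
]

def is_relevant(tweet):
    """Keep a tweet if it contains any keyword from the semantic field."""
    text = tweet["text"].lower()
    return any(k in text for k in KEYWORDS)

relevant = [t for t in tweets if is_relevant(t)]
geolocalized = [t for t in relevant if t["geo"] is not None]
print(len(relevant), len(geolocalized))  # 2 1
```

    In a real pipeline the same filter would run on the streamed tweet volume, with the geo-localized subset feeding the intensity-densification step.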

  16. Enhanced estimation of loss in the presence of Kerr nonlinearity

    NASA Astrophysics Data System (ADS)

    Rossi, Matteo A. C.; Albarelli, Francesco; Paris, Matteo G. A.

    2016-05-01

    We address the characterization of dissipative bosonic channels and show that estimation of the loss rate by Gaussian probes (coherent or squeezed) is improved in the presence of Kerr nonlinearity. In particular, enhancement of precision may be substantial for short interaction time, i.e., for media of moderate size, e.g., biological samples. We analyze in detail the behavior of the quantum Fisher information (QFI), and determine the values of nonlinearity maximizing the QFI as a function of the interaction time and of the parameters of the input signal. We also discuss the precision achievable by photon counting and quadrature measurement and present additional results for truncated, few-photon, probe signals. Finally, we discuss the origin of the precision enhancement, showing that it cannot be linked quantitatively to the non-Gaussianity or the nonclassicality of the interacting probe signal.

  17. Estimating conditional quantiles with the help of the pinball loss

    SciTech Connect

    Steinwart, Ingo

    2008-01-01

    Using the so-called pinball loss for estimating conditional quantiles is a well-known tool in both statistics and machine learning. So far, however, little work has been done to quantify the efficiency of this tool for non-parametric (modified) empirical risk minimization approaches. The goal of this work is to fill this gap by establishing inequalities that describe how close approximate pinball risk minimizers are to the corresponding conditional quantile. These inequalities, which hold under mild assumptions on the data-generating distribution, are then used to establish so-called variance bounds, which have recently turned out to play an important role in the statistical analysis of (modified) empirical risk minimization approaches. To illustrate the use of the established inequalities, we then use them to establish an oracle inequality for support vector machines that use the pinball loss. Here, it turns out that we obtain learning rates which are optimal in a minimax sense under some standard assumptions on the regularity of the conditional quantile function.
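    As a concrete illustration of the abstract's central object, the sketch below shows that the empirical pinball risk minimizer approximates the τ-quantile of the sample (the loss definition is standard; the Gaussian sample and search grid are arbitrary choices):

```python
import random

def pinball_loss(y, q, tau):
    """Pinball (quantile) loss for observation y and estimate q."""
    return tau * (y - q) if y >= q else (1.0 - tau) * (q - y)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(2000)]
tau = 0.8

# The empirical pinball risk minimizer approximates the tau-quantile.
grid = [i / 20.0 for i in range(-60, 61)]  # candidate estimates in [-3, 3]
best_q = min(grid, key=lambda q: sum(pinball_loss(y, q, tau) for y in sample))

empirical_quantile = sorted(sample)[int(tau * len(sample))]
print(round(best_q, 2), round(empirical_quantile, 2))
```

    Both values land near the true 0.8-quantile of the standard normal (about 0.84), differing only by grid resolution and sampling noise.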

  18. A Simplified Approach to Earthquake Risk in Mainland China

    NASA Astrophysics Data System (ADS)

    Chen, Qi-Fu; Mi, Hongliang; Huang, Jing

    2005-06-01

    There are limitations in conventional earthquake loss procedures if attempts are made to apply them to assess the social and economic impacts of recent disastrous earthquakes. This paper addresses the need to develop an applicable model for estimating the significant increases in earthquake loss in mainland China. Earthquake casualties were studied first; they are strongly related to earthquake strength, occurrence time (day or night) and the distribution of population in the affected area. Using data on earthquake casualties in mainland China from 1980 to 2000, we suggest a relationship between average loss of life and earthquake magnitude. Combined with information on population density and earthquake occurrence times, we use these data to derive a further relationship between loss of life and factors such as population density, intensity and occurrence time of the earthquake. Earthquakes that occurred from 2001 to 2003 were used to test the given relationships. This paper also explores the possibility of using a macroeconomic indicator, here GDP (Gross Domestic Product), to roughly estimate earthquake exposure in situations where no detailed insurance or similar inventories exist, thus bypassing some problems of the conventional method.
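    A magnitude-casualty relationship of this kind is, in its simplest form, a log-linear regression. The sketch below fits one on invented data points; the paper's actual data and coefficients are not reproduced here.

```python
import math

# Invented illustrative data: (magnitude M, average loss of life L).
# The paper's actual data and coefficients are not reproduced here.
data = [(5.0, 2), (5.5, 6), (6.0, 20), (6.5, 70), (7.0, 230), (7.5, 800)]

# Least-squares fit of log10(L) = a*M + b.
xs = [m for m, _ in data]
ys = [math.log10(l) for _, l in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(round(a, 2), round(b, 2))
```

    A further refinement along the lines of the paper would add population density and occurrence time (day/night) as extra regressors.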

  19. Estimation of Crustal Thickness in Nepal Himalayas Using Local and Regional Earthquake Data

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Koulakov, I.; Maksotova, G.; Raoof, J.; Kayal, J. R.; Jakovlev, A.; Vasilevsky, A.

    2014-12-01

    The variation of crustal thickness beneath the Nepal Himalayas is estimated by tomographic inversion of regional earthquake data. The Nepal Himalayas are fairly well covered by a dense network and well-distributed earthquakes. Some 10,864 P- and 5,293 S-arrival times from 821 selected events of Mw > 4.0 recorded during 2004-2014 are used for this study; on average, almost 20 phases per event have been available. The tomographic results shed new light on crustal thickness variation along and across the Nepal Himalayas. The crustal thickness varies between 40 and 80 km from the foothills to the high Himalayas, which is verified by synthetic modeling. The crustal thickness also varies widely along the strike of the Himalayas. The zones of higher and lower crustal thickness may be correlated with some hidden transverse structures in the foothills region, which are well reflected in gravity and magnetic maps. The estimated crustal thickness matches fairly well with the free-air gravity anomaly; thinner crust corresponds to a lower gravity anomaly and vice versa. Some correlation with the magnetic field anomaly is also observed: a higher magnetic anomaly corresponds to thicker crust. We propose that the more rigid segments of the incoming Indian crust, comprising igneous and metamorphic rocks, cause more compression in the Himalayan thrust zone and lead to stronger crustal thickening. Underthrusting of weaker crust and sediments, on the other hand, is associated with less shortening, and thus causes the thinner crust in the collision zone.

  20. Application of universal kriging for estimation of earthquake ground motion: Statistical significance of results

    SciTech Connect

    Carr, J.R.; Roberts, K.P.

    1989-02-01

    Universal kriging is compared with ordinary kriging for estimation of earthquake ground motion. Ordinary kriging is based on a stationary random function model; universal kriging is based on a nonstationary random function model representing first-order drift. The accuracy of universal kriging is compared with that of ordinary kriging, with cross-validation used as the basis for comparison. Hypothesis testing on these results shows that the accuracy obtained using universal kriging is not significantly different from the accuracy obtained using ordinary kriging. Tests based on normal-distribution assumptions are applied to errors measured in the cross-validation procedure; t and F tests reveal no evidence to suggest that universal and ordinary kriging differ for estimation of earthquake ground motion. Nonparametric hypothesis tests applied to these errors and jackknife statistics yield the same conclusion: universal and ordinary kriging are not significantly different for this application, as determined by a cross-validation procedure. These results are based on application to four independent data sets (four different seismic events).
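    The comparison logic used above can be sketched as a paired t-test on the two estimators' cross-validation errors. The error values below are invented placeholders, not the study's data; the point is the test mechanics.

```python
import math

# Placeholder cross-validation squared errors for the two estimators
# (invented numbers, not the study's data).
err_ordinary = [0.8, 1.2, 0.5, 1.0, 0.9, 1.1, 0.7, 1.3]
err_universal = [0.9, 1.1, 0.6, 0.9, 1.0, 1.0, 0.8, 1.25]

# Paired t-statistic on the error differences.
d = [a - b for a, b in zip(err_ordinary, err_universal)]
n = len(d)
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

# |t| below the ~2.36 critical value (df = 7, 5% two-sided):
# no significant difference between the two estimators.
print(round(t_stat, 2))
```

    With these placeholder errors the statistic is far from the critical value, mirroring the study's conclusion of no significant difference.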

  1. Estimation of seismic source parameters for earthquakes in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H.; Sheen, D.

    2013-12-01

    Recent seismicity in the Korean Peninsula is low, but there is potential for more severe seismic activity: historical records show that there were many damaging earthquakes around the Peninsula. The absence of instrumental records of damaging earthquakes hinders our efforts to understand seismotectonic characteristics in the Peninsula and to predict seismic hazards. It is therefore important to analyze instrumental records precisely to help improve our knowledge of seismicity in this region. Several studies on seismic source parameters in the Korean Peninsula have been performed to find source parameters for a single event (Kim, 2001; Jo and Baag, 2007; Choi, 2009; Choi and Shim, 2009; Choi, 2010; Choi and Noh, 2010; Kim et al., 2010), to find relationships between source parameters (Kim and Kim, 2008; Shin and Kang, 2008), or to determine the input parameters for stochastic strong ground motion simulation (Jo and Baag, 2001; Junn et al., 2002). In all previous studies, however, the source parameters were estimated only from small numbers of large earthquakes in this region. To understand the seismotectonic environment of a low-seismicity region, it is better to estimate the source parameters from as much data as possible. In this study, therefore, we estimated seismic source parameters, such as the corner frequency, Brune stress drop and moment magnitude, from 503 events with ML ≥ 1.6 that occurred in the southern part of the Korean Peninsula from 2001 to 2012. The data set consists of 2,834 S-wave trains on three-component seismograms recorded at broadband seismograph stations operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. To calculate the seismic source parameters, we used the iterative method of Jo and Baag (2001) based on the methods of Snoke (1987) and Andrews (1986). In this method, the source parameters are estimated by using the integration of
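    The quantities named above fit together through the standard Brune-model relations: source radius from corner frequency, static stress drop from moment and radius, and Mw from M0 (Hanks-Kanamori). The sketch below uses these common formulas with invented input values, not the paper's actual inversion.

```python
import math

def brune_source_params(m0_nm, fc_hz, beta_ms=3500.0):
    """Standard Brune (1970) relations; inputs are seismic moment [N*m],
    corner frequency [Hz], and S-wave speed [m/s]."""
    r = 2.34 * beta_ms / (2.0 * math.pi * fc_hz)   # source radius [m]
    stress_drop = 7.0 * m0_nm / (16.0 * r ** 3)    # static stress drop [Pa]
    mw = (math.log10(m0_nm) - 9.1) / 1.5           # moment magnitude
    return r, stress_drop, mw

# Illustrative event: M0 = 1e15 N*m, fc = 2 Hz (invented values).
r, dsigma, mw = brune_source_params(1.0e15, 2.0)
print(round(r, 1), round(dsigma / 1e6, 2), round(mw, 2))
```

    For these inputs the radius comes out near 650 m, the stress drop near 1.6 MPa, and Mw near 3.9, i.e. within the ranges typical of the small events analyzed in such studies.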

  2. Seismic moment of the 1891 Nobi, Japan, earthquake estimated from historical seismograms

    NASA Astrophysics Data System (ADS)

    Fukuyama, E.; Muramatu, I.; Mikumo, T.

    2007-06-01

    The seismic moment of the 1891 Nobi, Japan, earthquake has been evaluated from the historical seismogram recorded at the Central Meteorological Observatory in Tokyo. For this purpose, synthetic seismograms from point and finite source models with various fault parameters have been calculated by a discrete wave-number method, incorporating the instrumental response of the Gray-Milne-Ewing seismograph, and then compared with the original records. Our estimate of the seismic moment (Mo) is 1.8 × 10^20 N m, corresponding to a moment magnitude (Mw) of 7.5. This is significantly smaller than previous estimates based on the distribution of damage, but is consistent with the moment inferred from a geological field survey of the surface faults (Matsuda, 1974).

  3. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Geoscientists ordinarily use moment tensor catalogues; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties upon their analyses. The 2012 May 20 Emilia main shock is a representative event, since it is defined in the literature with a moment magnitude (Mw) spanning between 5.63 and 6.12. A variability of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event and reveals how poorly constrained the solutions can be. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effect of five different 1-D velocity models and of the number and distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude that ranges from 5.87 (1-D model) to 5.96 (3-D model), reducing the variability in the literature to ~0.1. We stress that estimates of seismic moment from moment tensor solutions, as well as estimates of the other kinematic source parameters, require disclosed assumptions and explicit processing workflows. Finally, and probably more importantly, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g. the wave-velocity propagation model) to avoid conflicting results.
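    To see why a ~0.5-unit spread in Mw matters, one can convert the magnitudes back to seismic moment with the standard relation M0 = 10^(1.5·Mw + 9.1) N·m. This is a back-of-the-envelope check, not part of the paper's workflow:

```python
# Standard moment-magnitude relation: M0 = 10**(1.5*Mw + 9.1) in N*m.
def moment_from_mw(mw):
    return 10.0 ** (1.5 * mw + 9.1)

# Literature spread of the Emilia main shock: Mw 5.63 vs 6.12.
ratio = moment_from_mw(6.12) / moment_from_mw(5.63)
# Refined spread from this study: Mw 5.87 vs 5.96.
ratio_refined = moment_from_mw(5.96) / moment_from_mw(5.87)
print(round(ratio, 2), round(ratio_refined, 2))
```

    The literature spread corresponds to more than a factor of 5 in seismic moment, while the refined ~0.1-unit spread keeps the moment within roughly a factor of 1.4.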

  4. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways. PMID:22576139
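    The two applications described above (joint primary-backup failure, facility failure through component gates) can be reduced to a toy fault-tree evaluation. The sketch below assumes independent component failures with invented probabilities; real fault trees of the kind the paper validates must also model the common-cause dependence a single earthquake introduces.

```python
def or_gate(*probs):
    """System fails if ANY input fails (independence assumed)."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def and_gate(*probs):
    """System fails only if ALL inputs fail (independence assumed)."""
    fail = 1.0
    for p in probs:
        fail *= p
    return fail

# Invented failure probabilities: the facility fails if power fails OR
# both redundant pumps fail.
p_power, p_pump = 0.05, 0.10
p_facility = or_gate(p_power, and_gate(p_pump, p_pump))

# Primary and backup facility inoperative in the same event; independence
# is optimistic here, since one earthquake shakes both sites.
p_both = and_gate(p_facility, p_facility)
print(round(p_facility, 4), round(p_both, 6))
```

    The independence assumption makes p_both a lower bound; correlated shaking at the two sites would raise it, which is exactly why siting a backup far from the primary matters.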

  5. The range split-spectrum method for ionosphere estimation applied to the 2008 Kyrgyzstan earthquake

    NASA Astrophysics Data System (ADS)

    Gomba, Giorgio; Eineder, Michael

    2015-04-01

    L-band remote sensing systems, like the future Tandem-L mission, are disrupted by the ionized upper part of the atmosphere called the ionosphere, a region of the upper atmosphere composed of gases that are ionized by solar radiation. The extent of the effects induced on a SAR measurement depends on the electron density integrated along the radio-wave path and on its spatial variations. The main effect of the ionosphere on microwaves is to cause an additional delay, which introduces a phase difference between SAR measurements, modifying the interferometric phase. The objectives of the Tandem-L mission are the systematic monitoring of dynamic Earth processes like Earth surface deformation, vegetation structure, ice and glacier changes, and ocean surface currents. The scientific requirements regarding the mapping of surface deformation due to tectonic processes, earthquakes, volcanic cycles and anthropogenic factors demand deformation measurements: namely, one-, two- or three-dimensional displacement maps with resolutions of a few hundred meters and accuracies at the centimeter to millimeter level. Ionospheric effects can make it impossible to produce deformation maps with such accuracy and must therefore be estimated and compensated. As an example of this process, the implementation of the range split-spectrum method proposed in [1,2] is presented and applied to an example dataset. The 2008 Kyrgyzstan earthquake of October 5 is imaged by an ALOS PALSAR interferogram; apart from the earthquake, many fringes due to strong ionospheric variations can also be seen. The compensated interferogram shows how the ionosphere-related fringes were successfully estimated and removed. [1] Rosen, P.A.; Hensley, S.; Chen, C., "Measurement and mitigation of the ionosphere in L-band Interferometric SAR data," Radar Conference, 2010 IEEE, pp. 1459-1463, 10-14 May 2010. [2] Brcic, R.; Parizzi, A.; Eineder, M.; Bamler, R.; Meyer, F., "Estimation and
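    The core idea of the split-spectrum method is that the interferometric phase splits into a non-dispersive term scaling with frequency f and a dispersive (ionospheric) term scaling with 1/f; phases from two sub-band interferograms then determine the ionospheric phase at the carrier. A minimal synthetic sketch (invented frequencies and phase terms; real data additionally require unwrapping and filtering):

```python
# Frequencies: ALOS PALSAR-like L-band carrier and two sub-bands
# (illustrative values).
f0, f_low, f_high = 1.27e9, 1.25e9, 1.29e9  # Hz

# Synthetic truth: phase = non-dispersive term (~ f) + dispersive term (~ 1/f).
a, b = 2.0e-8, 5.0e9
phi_low = a * f_low + b / f_low
phi_high = a * f_high + b / f_high

# Solving the 2x2 system for the dispersive (ionospheric) phase at f0:
phi_iono = (f_low * f_high * (phi_low * f_high - phi_high * f_low)
            / (f0 * (f_high ** 2 - f_low ** 2)))
print(round(phi_iono, 4), round(b / f0, 4))
```

    By construction the recovered value equals the true dispersive phase b/f0, which is what gets subtracted from the interferogram to remove the ionospheric fringes.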

  6. A simple approach to estimate earthquake magnitude from the arrival time of the peak acceleration amplitude

    NASA Astrophysics Data System (ADS)

    Noda, S.; Yamamoto, S.

    2014-12-01

    For Earthquake Early Warning (EEW) to be effective, rapid determination of magnitude (M) is important. At present, although a number of methods have been suggested, none can accurately determine M for extremely large events (ELE) for EEW. To solve this problem, we use a simple approach derived from the fact that the time difference (Top) from the onset of the body wave to the arrival time of the peak acceleration amplitude of the body wave scales with M. To test this approach, as a first step we use 15,172 accelerograms of regional earthquakes (most of them M4-7 events) from K-NET. Top is defined by analyzing the S-wave in this step; the S-onsets are calculated by adding the theoretical S-P times to manually picked P-onsets. As a result, it is confirmed that log Top correlates strongly with Mw, especially for the higher frequency band (> 2 Hz). The RMS of residuals between Mw and the M estimated in this step is less than 0.5. In the case of the 2011 Tohoku earthquake, M is estimated to be 9.01 at 150 seconds after the initiation of the event. To increase the number of ELE data, as a second step we add teleseismic high-frequency P-wave records to the analysis. According to the results of various back-projection analyses, we consider the teleseismic P-waves to contain information on the entire rupture process. The BHZ channel data of the Global Seismographic Network for 24 events are used in this step; 2-4 Hz data from stations in the epicentral distance range of 30-85 degrees are used following the method of Hara [2007], and all P-onsets are manually picked. Top obtained from the teleseismic data shows good correlation with Mw, complementing that obtained from the regional data. We conclude that the proposed approach is quite useful for estimating reliable M for EEW, even for ELE.
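    The scaling described above amounts to a simple regression of Mw on log10(Top). A sketch with invented (Top, Mw) calibration pairs; the study's actual data and coefficients are not reproduced here.

```python
import math

# Invented (Top seconds, Mw) calibration pairs for illustration only.
data = [(3.0, 4.5), (6.0, 5.2), (12.0, 5.9), (25.0, 6.6), (50.0, 7.2), (150.0, 8.2)]

# Least-squares fit of Mw = a*log10(Top) + b.
xs = [math.log10(t) for t, _ in data]
ys = [m for _, m in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
rms = math.sqrt(sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / n)
print(round(a, 2), round(b, 2), round(rms, 2))
```

    The attraction for EEW is that Top keeps growing as long as the rupture keeps radiating, so the estimate does not saturate the way short fixed-window parameters do.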

  7. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window

    PubMed Central

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-01-01

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude. PMID:25346344

  8. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window.

    PubMed

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-01-01

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude. PMID:25346344

  9. Re-estimated fault model of the 17th century great earthquake off Hokkaido using tsunami deposit data

    NASA Astrophysics Data System (ADS)

    Ioki, Kei; Tanioka, Yuichiro

    2016-01-01

    Paleotsunami research has revealed that a great earthquake occurred off eastern Hokkaido, Japan, and generated a large tsunami in the 17th century. Tsunami deposits from this event have been found far inland from the Pacific coast in eastern Hokkaido. A previous study estimated the fault model of the 17th century great earthquake by comparing the locations of lowland tsunami deposits and computed tsunami inundation areas. Tsunami deposits were also traced at a high cliff near the coast, as high as 18 m above sea level, and recent paleotsunami studies have traced tsunami deposits at other high cliffs along the Pacific coast. The fault model estimated by the previous study cannot explain the tsunami deposit data at high cliffs near the coast. In this study, we estimated a fault model of the 17th century great earthquake that explains both the widespread lowland tsunami deposit areas and the tsunami deposit data at high cliffs near the coast. We found that the distributions of lowland tsunami deposits were mainly explained by a wide rupture area at the plate interface in the Tokachi-Oki and Nemuro-Oki segments, while the tsunami deposits at high cliffs near the coast were mainly explained by a very large slip of 25 m at the shallow part of the plate interface near the trench in those segments. The total seismic moment of the 17th century great earthquake was calculated to be 1.7 × 10^22 Nm (Mw 8.8). The 2011 great Tohoku earthquake ruptured a large area off Tohoku, and a very large slip was found at the shallow part of the plate interface near the trench; the 17th century great earthquake had the same characteristics as the 2011 great Tohoku earthquake.
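    The reported moment and magnitude can be checked against each other with the standard moment-magnitude relation:

```python
import math

m0 = 1.7e22  # total seismic moment in N*m, as reported above
mw = (math.log10(m0) - 9.1) / 1.5  # standard moment-magnitude relation
print(round(mw, 1))  # 8.8
```

    The conversion reproduces the quoted Mw 8.8, confirming the two figures are internally consistent.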

  10. Crustal parameters estimated from P-waves of earthquakes recorded at a small array

    USGS Publications Warehouse

    Murdock, J.N.; Steppe, J.A.

    1980-01-01

    The P-arrival times of local and regional earthquakes outside a small network of seismometers can be used to interpret crustal parameters beneath the network by employing the time-term technique. Even when the refractor velocity is poorly determined, useful estimates of the station time-terms can be made. The method is applied to a 20 km diameter network of eight seismic stations operated near Castaic, California, during the winter of 1972-73. The stations were located in sedimentary basins; beneath the network, the sedimentary rocks of the basins are known to range from 1 to more than 4 km in thickness. Relative time-terms are estimated from P-waves assumed to be propagated by a refractor in the mid-crust, and again from P-waves propagated by a refractor in the upper basement. For the range of velocities reported by others, the two sets of time-terms are very similar. They suggest that both refractors dip to the southwest, and the geology also indicates that the basement dips in this direction. In addition, the P-wave velocity estimated for the refractor at mid-crustal depths, roughly 6.7 km/sec, agrees with values reported by others. Thus, even in this region of complicated geologic structure, the method appears to give realistic results. © 1980 Birkhäuser Verlag.
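    The robustness claim above (useful time-terms even with a poorly determined refractor velocity) can be demonstrated on a synthetic example. All station terms, distances, and velocities below are invented; the sketch models arrival times as t = a_station + distance / v and shows that relative time-terms survive a velocity error.

```python
# Synthetic time-term example. Model: t_ij = a_i + dist_ij / v, with
# refractor velocity v. All values below are invented for illustration.
true_v = 6.7                                     # km/s, mid-crustal refractor
terms = {"A": 0.40, "B": 0.55, "C": 0.70}        # true station time-terms [s]
dists = {"A": [100.0, 150.0, 200.0],
         "B": [102.0, 152.0, 202.0],
         "C": [104.0, 154.0, 204.0]}             # epicentral distances [km]

def estimate_terms(v):
    """Time-term estimate: mean residual after removing the assumed path time."""
    return {s: sum((terms[s] + d / true_v) - d / v for d in ds) / len(ds)
            for s, ds in dists.items()}

good = estimate_terms(6.7)   # correct refractor velocity
bad = estimate_terms(6.0)    # ~10% velocity error

# Relative time-terms (B-A, C-A) change only slightly despite the wrong
# velocity, which is the point made in the abstract.
rel_good = (good["B"] - good["A"], good["C"] - good["A"])
rel_bad = (bad["B"] - bad["A"], bad["C"] - bad["A"])
print([round(x, 3) for x in rel_good], [round(x, 3) for x in rel_bad])
```

    Because the sources are well outside the network, the epicentral distances to the three stations are nearly equal, so the velocity error biases all absolute terms almost identically and largely cancels in the station-to-station differences.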

  11. Estimation of earthquake source parameters by the inversion of waveform data: synthetic waveforms

    USGS Publications Warehouse

    Sipkin, S.A.

    1982-01-01

    Two methods are presented for the recovery of a time-dependent moment-tensor source from waveform data. One procedure utilizes multichannel signal-enhancement theory; in the other a multichannel vector-deconvolution approach, developed by Oldenburg (1982) and based on Backus-Gilbert inverse theory, is used. These methods have the advantage of being extremely flexible; both may be used either routinely or as research tools for studying particular earthquakes in detail. Both methods are also robust with respect to small errors in the Green's functions and may be used to refine estimates of source depth by minimizing the misfits to the data. The multichannel vector-deconvolution approach, although it requires more interaction, also allows a trade-off between resolution and accuracy, and complete statistics for the solution are obtained. The procedures have been tested using a number of synthetic body-wave data sets, including point and complex sources, with satisfactory results. © 1982.

  12. Simultaneous estimation of earthquake source parameters and crustal Q value from broadband data of selected aftershocks of the 2001 Mw 7.7 Bhuj earthquake

    NASA Astrophysics Data System (ADS)

    Saha, A.; Lijesh, S.; Mandal, P.

    2012-12-01

    This paper presents the simultaneous estimation of source parameters and crustal Q values for small to moderate-size aftershocks (Mw 2.1-5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal-component S-waves of 144 well-located earthquakes (2001-2010) recorded at 3-10 broadband seismograph sites in the Kachchh Seismic Zone, Gujarat, India are analyzed, and their seismic corner frequencies, long-period spectral levels and crustal Q values are simultaneously estimated by inverting the horizontal component of the S-wave displacement spectrum using the Levenberg-Marquardt nonlinear inversion technique, wherein the inversion scheme is formulated based on the ω-square source spectral model. The static stress drops (Δσ) are then calculated from the corner frequency and seismic moment. The estimated source parameters suggest that the seismic moment (M0) and source radius (r) of the aftershocks vary from 1.12 × 10^12 to 4.00 × 10^16 N-m and from 132.57 to 513.20 m, respectively, whereas the estimated stress drops (Δσ) and multiplicative factor (Emo) values range from 0.01 to 20.0 MPa and 1.05 to 3.39, respectively. The corner frequencies range from 2.36 to 8.76 Hz. The crustal S-wave quality factor varies from 256 to 1882, with an average of 840 for the Kachchh region, which agrees well with the crustal Q value of the seismically active New Madrid region, USA. Our estimated stress drop values are quite large compared to other similar-size Indian intraplate earthquakes, which can be attributed to the presence of crustal mafic intrusives and aqueous fluids in the lower crust, as revealed by an earlier tomographic study of the region.

  13. Estimation of slip rate and fault displacement during shallow earthquake rupture in the Nankai subduction zone

    NASA Astrophysics Data System (ADS)

    Hamada, Yohei; Sakaguchi, Arito; Tanikawa, Wataru; Yamaguchi, Asuka; Kameda, Jun; Kimura, Gaku

    2015-03-01

    Enormous earthquakes repeatedly occur in subduction zones, and the slips along megathrusts, in particular those propagating to the toe of the forearc wedge, generate ruinous tsunamis. Quantitative evaluation of the slip parameters (i.e., slip velocity, rise time and slip distance) of past slip events on the shallow, tsunamigenic part of the fault is critical to characterize such earthquakes. Here, we attempt to quantify these parameters for slips that may have occurred along the shallow megasplay fault and the plate-boundary décollement in the Nankai Trough, off southwest Japan. We apply kinetic modeling to vitrinite reflectance profiles of two fault rock samples obtained from the Integrated Ocean Drilling Program (IODP). This approach comprises two calculation procedures: heat generation and numerical profile fitting of the vitrinite reflectance data. To obtain optimal slip parameters, a residual calculation is implemented to estimate fitting accuracy. As a result, the measured distribution of vitrinite reflectance is reasonably fitted with a heat generation rate (Q̇) and slip duration (tr) of 16,600 J/s/m² and 6,250 s, respectively, for the megasplay and 23,200 J/s/m² and 2,350 s, respectively, for the frontal décollement, implying slow and long-lasting slips. The estimated slip parameters are then compared with previous reports. The maximum temperature, Tmax, for the Nankai megasplay fault is consistent with the temperature constraint suggested by a previous work. Slow slip velocity, long rise time, and large displacement are recognized in both fault zones (the megasplay and the frontal décollement). These slips are slower and longer than typical coseismic slip, but are consistent with rapid afterslip.
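    As a quick arithmetic check on the fitted values quoted above, the total frictional heat released per unit fault area is simply the heat generation rate multiplied by the slip duration:

```python
# Total frictional heat per unit fault area = heat generation rate x duration.
megasplay = 16600 * 6250      # J/m^2 for the megasplay fault
decollement = 23200 * 2350    # J/m^2 for the frontal decollement
print(f"{megasplay:.2e} {decollement:.2e}")
```

    Both faults release heat on the order of 10^7-10^8 J/m², the megasplay roughly twice as much as the décollement despite its lower heating rate, because it slips nearly three times longer.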

  14. Mass Loss and Surface Displacement Estimates in Greenland from GRACE

    NASA Astrophysics Data System (ADS)

    Jensen, Tim; Forsberg, Rene

    2015-04-01

    The estimation of ice sheet mass changes from GRACE is basically an inverse problem; the solution is non-unique, and several procedures for determining the mass distribution exist. We present Greenland mass loss results from two such procedures, namely a direct spherical harmonic inversion procedure, possible through a thin-layer assumption, and a generalized inverse masscon procedure. These results are updated to the end of 2014, including the unusual 2013 mass gain anomaly, and show good agreement when taking into account leakage from the Canadian ice caps. The GRACE mass changes are further compared to GPS uplift data on the bedrock along the edge of the ice sheet. The solid Earth deformation is assumed to consist of an elastic deformation of the crust and an anelastic deformation of the underlying mantle (GIA). The crustal deformation is due to current surface loading effects and therefore contains a strong seasonal component of variation superimposed on a secular trend. The majority of the anelastic GIA deformation of the mantle is believed to be approximately constant; an accelerating secular trend and seasonal changes, as seen in Greenland, are therefore assumed to be due to elastic deformation from changes in surface mass loading from the ice sheet. The GRACE and GPS comparison is only valid if the signal content of the two observables is consistent. The GPS receivers measure movement at a single point on the bedrock surface and are therefore sensitive to a limited loading footprint, while the GRACE satellites measure a filtered, attenuated gravitational field at an altitude of approximately 500 km, making them sensitive to a much larger area. Despite this, the seasonal loading signals in the two observables show reasonably good agreement.

  15. Estimation of co-seismic stress change of the 2008 Wenchuan Ms8.0 earthquake

    SciTech Connect

    Sun Dongsheng; Wang Hongcai; Ma Yinsheng; Zhou Chunjing

    2012-09-26

    In-situ stress change near a fault before and after a great earthquake is a key issue in the geosciences. In this work, based on a fault slip dislocation model of the 2008 great Wenchuan earthquake, the co-seismic stress tensor change due to the earthquake and its distribution around the Longmen Shan fault are given. Our calculated results are broadly consistent with in-situ stress measurements made before and after the great Wenchuan earthquake. These quantitative results provide a reference for studies of earthquake mechanisms.

  16. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    USGS Publications Warehouse

    Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake, making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  17. BEAM LOSS ESTIMATES AND CONTROL FOR THE BNL NEUTRINO FACILITY.

    SciTech Connect

    WENG, W.-T.; LEE, Y.Y.; RAPARIA, D.; TSOUPAS, N.; BEEBE-WANG, J.; WEI, J.; ZHANG, S.Y.

    2005-05-16

    The requirement for low beam loss is very important, both to protect beamline components and to make hands-on maintenance possible. In this report, the design considerations for achieving high intensity and low loss are presented. We start by specifying the beam loss limit for every physical process, followed by the design choices and parameters needed to realize the required goals. The processes considered in this paper include emittance growth in the linac, H⁻ injection, transition crossing, coherent instabilities, and extraction losses.

  18. Estimation of earthquake source parameters from GRACE observations of changes in Earth's gravitational potential field using normal modes

    NASA Astrophysics Data System (ADS)

    Sterenborg, G.; Simons, F. J.; Welch, E.; Morrow, E.; Mitrovica, J. X.

    2013-12-01

    Since its launch in 2002, the Gravity Recovery and Climate Experiment (GRACE) has yielded tremendous insights into the spatio-temporal changes of mass redistribution in the Earth system. Such changes occur on widely varying spatial and temporal scales and take place both on Earth's surface, e.g., atmospheric mass fluctuations and the exchange of water, snow and ice, as well as in its interior, e.g., glacial isostatic adjustment and earthquakes. Each of these processes causes changes in the Earth's gravitational potential field, which GRACE observes. One example is the Antarctic and Greenland ice mass changes inferred from GRACE observations of the changing geopotential, as well as the associated time rate of change of its degree 2 and 4 zonal harmonics observed by satellite laser ranging. Deforming the Earth's surface and interior both co- and post-seismically, with some of the deformation permanent, earthquakes can affect the geopotential at spatial scales up to thousands of kilometers and at temporal scales from seconds to months. Traditional measurements of earthquakes, e.g., by seismometers, GPS and InSAR, observe the co- and post-seismic surface displacements and are invaluable in understanding earthquake triggering mechanisms, slip distributions, rupture dynamics and slow post-seismic changes. Space-based observations of geopotential changes can add a whole new dimension to this, as such observations are also sensitive to changes in the Earth's interior, over a larger area affected by the earthquake, over longer timescales, beyond that of Earth's longest-period normal mode, and because they have global sensitivity, including over sparsely instrumented oceanic domains. We use a joint seismic and gravitational normal-mode formalism to quantify changes in the gravitational potential due to different types of earthquakes, comparing them to predictions from dislocation models. We discuss the inverse problem of estimating the source parameters of large earthquakes

  19. The source model and recurrence interval of Genroku-type Kanto earthquakes estimated from paleo-shoreline data

    NASA Astrophysics Data System (ADS)

    Sato, Toshinori; Higuchi, Harutaka; Miyauchi, Takahiro; Endo, Kaori; Tsumura, Noriko; Ito, Tanio; Noda, Akemi; Matsu'ura, Mitsuhiro

    2016-02-01

    In the southern Kanto region of Japan, where the Philippine Sea plate is descending at the Sagami trough, two different types of large interplate earthquakes have occurred repeatedly. The 1923 (Taisho) and 1703 (Genroku) Kanto earthquakes characterize the first and second types, respectively. A reliable source model has been obtained for the 1923 event from seismological and geodetic data, but not for the 1703 event, for which we have only historical records and paleo-shoreline data. We developed an inversion method to estimate the fault slip distribution of repeating interplate earthquakes from paleo-shoreline data, based on the idea of crustal deformation cycles associated with subduction-zone earthquakes. By applying the inversion method to the present heights of the Genroku and Holocene marine terraces developed along the coasts of the southern Boso and Miura peninsulas, we estimated the fault slip distribution of the 1703 Genroku earthquake as follows. The source region extends along the Sagami trough from the Miura peninsula to the offing of the southern Boso peninsula, which covers the southern two thirds of the source region of the 1923 Kanto earthquake. The coseismic slip reaches a maximum of 20 m at the southern tip of the Boso peninsula, and the moment magnitude (Mw) is calculated as 8.2. From the interseismic slip-deficit rates at the plate interface obtained by GPS data inversion, assuming that the total slip deficit is compensated by coseismic slip, we can roughly estimate the average recurrence interval as 350 years for large interplate events of any type and 1400 years for the Genroku-type events.
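The recurrence arithmetic in the final sentence reduces to dividing coseismic slip by the interseismic slip-deficit rate. The 14 mm/yr rate below is an illustrative assumption chosen to reproduce the ~1400-year Genroku-type figure, not a value taken from the paper.

```python
def recurrence_interval_yr(coseismic_slip_m, slip_deficit_rate_mm_per_yr):
    """Years needed for steady slip-deficit accumulation to match one
    coseismic slip, assuming full release in each event."""
    return coseismic_slip_m * 1000.0 / slip_deficit_rate_mm_per_yr

# 20 m of coseismic slip at an assumed ~14 mm/yr deficit rate -> ~1400 yr
interval = recurrence_interval_yr(20.0, 14.0)
```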

  20. Stable isotope values in coastal sediment estimate subsidence near Girdwood during the 1964 great Alaska earthquake

    NASA Astrophysics Data System (ADS)

    Bender, A. M.; Witter, R. C.; Rogers, M.; Saenger, C. P.

    2013-12-01

    Subsidence during the Mw 9.2, 1964 great Alaska earthquake lowered Turnagain Arm near Girdwood, Alaska by ~1.5 m and caused rapid relative sea-level (RSL) rise that shifted estuary mud flats inland over peat-forming wetlands. Sharp mud-over-peat contacts record these environmental shifts at sites along Turnagain Arm, including Bird Point, 11 km west of Girdwood. Transfer functions based on changes in intertidal microfossil populations across these contacts accurately estimate earthquake subsidence at Girdwood, but poor preservation of microfossils hampers this method at other sites in Alaska. We test a new method that employs compositions of stable carbon and nitrogen isotopes in intertidal sediments as proxies for elevation. Because marine sediment sources are expected to have higher δ13C and δ15N than terrestrial sources, we hypothesize that these values should decrease with elevation in modern intertidal sediment, and should also be more positive in estuarine mud above sharp contacts that record RSL rise than in peaty sediment below. We relate δ13C and δ15N values above and below the 1964 mud/peat contact to values in modern sediment of known elevation, and use these values qualitatively to indicate sediment source and quantitatively to estimate the amount of RSL rise across the contact. To establish a site-specific sea level datum, we deployed a pressure transducer and compensatory barometer to record a 2-month tide series at Bird Point. We regressed the high tides from this series against corresponding NOAA verified high tides at Anchorage (~50 km west of Bird Point) to calculate a high water datum within ±0.14 m standard error (SE). To test whether or not modern sediment isotope values decrease with elevation, we surveyed a 60-m-long modern transect, sampling surface sediment at ~0.10 m vertical intervals. Results from this transect show a decrease of 4.64‰ in δ13C and 3.97‰ in δ15N between tide flat and upland sediment. To evaluate if δ13C and δ15N
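The quantitative step can be sketched as a two-stage calculation: calibrate a linear δ13C-versus-elevation relation from the modern transect, then convert the isotopic shift measured across the 1964 contact into metres of RSL rise. All numbers here are synthetic stand-ins for the Bird Point data; the 1.24‰ shift is hypothetical.

```python
import numpy as np

# Synthetic modern calibration: delta13C falls linearly by 4.64 per mil over
# 1.5 m of elevation gain (mirroring the transect's total range).
elev = np.linspace(0.0, 1.5, 16)                 # elevation above datum (m)
d13c = 4.64 / 1.5 * (1.5 - elev) - 26.0          # per mil, illustrative baseline

slope, intercept = np.polyfit(elev, d13c, 1)     # per mil per metre (negative)

# Hypothetical isotopic shift across the mud/peat contact, converted to RSL rise
delta_shift = 1.24                               # per mil (illustrative)
rsl_rise = delta_shift / abs(slope)              # metres
```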

  1. The energy radiated by the 26 December 2004 Sumatra-Andaman earthquake estimated from 10-minute P-wave windows

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2007-01-01

    The rupture process of the Mw 9.1 Sumatra-Andaman earthquake lasted for approximately 500 sec, nearly twice as long as the teleseismic time windows between the P and PP arrival times generally used to compute radiated energy. In order to measure the P waves radiated by the entire earthquake, we analyze records that extend from the P-wave to the S-wave arrival times from stations at distances Δ > 60°. These 8- to 10-min windows contain the PP, PPP, and ScP arrivals, along with other multiply reflected phases. To gauge the effect of including these additional phases, we form the spectral ratio of the source spectrum estimated from extended windows (between TP and TS) to the source spectrum estimated from normal windows (between TP and TPP). The extended windows are analyzed as though they contained only the P-pP-sP wave group. We analyze four smaller earthquakes that occurred in the vicinity of the Mw 9.1 mainshock, with similar depths and focal mechanisms. These smaller events range in magnitude from an Mw 6.0 aftershock of 9 January 2005 to the Mw 8.6 Nias earthquake that occurred to the south of the Sumatra-Andaman earthquake on 28 March 2005. We average the spectral ratios for these four events to obtain a frequency-dependent operator for the extended windows. We then correct the source spectrum estimated from the extended records of the 26 December 2004 mainshock to obtain a complete or corrected source spectrum for the entire rupture process (~600 sec) of the great Sumatra-Andaman earthquake. Our estimate of the total seismic energy radiated by this earthquake is 1.4 × 10¹⁷ J. When we compare the corrected source spectrum for the entire earthquake to the source spectrum from the first ~250 sec of the rupture process (obtained from normal teleseismic windows), we find that the mainshock radiated much more seismic energy in the first half of the rupture process than in the second half, especially over the period range from 3 sec to 40 sec.
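The extended-window correction can be sketched in a few lines: form the ratio of extended-window to normal-window spectra for several calibration events, average it into a frequency-dependent operator, and divide the mainshock's extended-window spectrum by that operator. The spectra below are synthetic, noise-free placeholders.

```python
import numpy as np

freqs = np.logspace(-2, 0, 50)                       # Hz
operator_true = 1.0 + 2.0 * np.exp(-freqs / 0.1)     # effect of extra phases (synthetic)

# Four calibration events: extended-window spectrum = normal-window * operator
normals = [np.exp(-freqs / f0) for f0 in (0.3, 0.4, 0.5, 0.6)]
extendeds = [n * operator_true for n in normals]

# Average the per-event spectral ratios into one frequency-dependent operator
operator = np.mean([e / n for e, n in zip(extendeds, normals)], axis=0)

# Correct the mainshock's extended-window spectrum with the operator
mainshock_extended = np.exp(-freqs / 0.2) * operator_true
corrected = mainshock_extended / operator            # complete source spectrum
```

In the noise-free case the corrected spectrum equals the underlying source spectrum exactly; with real data the averaging over calibration events suppresses event-specific noise in the operator.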

  2. Exploration of deep sedimentary layers in Tacna city, southern Peru, using microtremors and earthquake data for estimation of local amplification

    NASA Astrophysics Data System (ADS)

    Yamanaka, Hiroaki; Gamero, Mileyvi Selene Quispe; Chimoto, Kosuke; Saguchi, Kouichiro; Calderon, Diana; La Rosa, Fernándo Lázares; Bardales, Zenón Aguilar

    2016-01-01

    S-wave velocity profiles of sedimentary layers in Tacna, southern Peru, have been determined from analysis of microtremor array data and earthquake records for estimation of site amplification. We investigated the vertical component of microtremors in temporary arrays at two sites in the city to obtain Rayleigh-wave phase velocities. A receiver function was also estimated from existing earthquake data at a strong-motion station near one of the microtremor exploration sites. The phase velocity and the receiver function were jointly inverted for S-wave velocity profiles. The depths to the basement, with an S-wave velocity of 2.8 km/s, are similar at the two sites, about 1 km. The top soil at the site in a severely damaged area of the city has a lower S-wave velocity than that in an area only slightly damaged during the 2001 southern Peru earthquake. We subsequently estimated site amplifications from the velocity profiles and find that amplification is large at periods from 0.2 to 0.8 s in the damaged area, indicating a possible reason for the differences in damage observed during the 2001 southern Peru earthquake.
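Two textbook site-response quantities give a feel for how a velocity profile maps to amplification: the fundamental resonance period of a soil layer (T = 4H/Vs) and the impedance-contrast amplification (square root of the impedance ratio). The layer values below are generic illustrations, not the inverted Tacna profiles.

```python
import math

def resonance_period(thickness_m, vs_m_s):
    """Fundamental resonance period of a uniform soil layer over rock: T = 4H/Vs."""
    return 4.0 * thickness_m / vs_m_s

def impedance_amplification(rho_soil, vs_soil, rho_rock, vs_rock):
    """Amplitude amplification from the impedance contrast (no damping)."""
    return math.sqrt((rho_rock * vs_rock) / (rho_soil * vs_soil))

# A 30 m layer of slow soil (Vs = 150 m/s) resonates at 0.8 s, the upper end
# of the damaging period band reported above.
T0 = resonance_period(30.0, 150.0)
amp = impedance_amplification(1800.0, 150.0, 2400.0, 1200.0)
```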

  3. Study of an image restoration method based on Poisson-maximum likelihood estimation method for earthquake ruin scene

    NASA Astrophysics Data System (ADS)

    Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong

    2014-09-01

    An image restoration method based on the Poisson maximum-likelihood estimation (PMLE) method for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, and an automatic acceleration method is used to speed up the iterative process; an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and the PSNR (peak signal-to-noise ratio) are chosen to validate the restoration effect. The simulation results show that the number of iterations affects the PSNR of the processed image and the operation time, and that the method can restore images of earthquake ruin scenes effectively and has good practicability.
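The classic iterative Poisson maximum-likelihood estimator is the Richardson-Lucy algorithm; a minimal 1-D version is sketched below. This is a generic illustration of the estimation principle, without the paper's automatic acceleration step.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Iterative Poisson maximum-likelihood (Richardson-Lucy) restoration in 1-D."""
    psf = psf / psf.sum()                          # normalize the blur kernel
    est = np.full_like(observed, observed.mean())  # flat, positive initial estimate
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # guard against divide-by-zero
        est = est * np.convolve(ratio, psf[::-1], mode="same")
    return est

truth = np.zeros(64)
truth[20] = 100.0                                  # a point source
psf = np.array([0.25, 0.5, 0.25])                  # symmetric blur kernel
observed = np.convolve(truth, psf, mode="same")    # blurred observation
restored = richardson_lucy(observed, psf, n_iter=200)
```

The multiplicative update keeps the estimate nonnegative, which is why Poisson ML restoration suits photon-limited imagery; more iterations sharpen the result but amplify noise, matching the iteration/PSNR trade-off noted in the abstract.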

  4. Using safety inspection data to estimate shaking intensity for the 1994 Northridge earthquake

    USGS Publications Warehouse

    Thywissen, K.; Boatwright, J.

    1998-01-01

    We map the shaking intensity suffered in Los Angeles County during the 17 January 1994 Northridge earthquake using municipal safety inspection data. The intensity is estimated from the number of buildings given red, yellow, or green tags, aggregated by census tract. Census tracts contain from 200 to 4000 residential buildings and have an average area of 6 km², but are as small as 2 and 1 km² in the most densely populated areas of the San Fernando Valley and downtown Los Angeles, respectively. In comparison, the zip code areas on which standard MMI intensity estimates are based are six times larger, on average, than the census tracts. We group the buildings by age (before and after 1940 and 1976), by number of housing units (one, two to four, and five or more), and by construction type, and we normalize the tags by the total number of similar buildings in each census tract. We analyze the seven most abundant building categories. The fragilities (the fraction of buildings in each category tagged within each intensity level) for these seven building categories are adjusted so that the intensity estimates agree. We calibrate the shaking intensity to correspond with the modified Mercalli intensities (MMI) estimated and compiled by Dewey et al. (1995); the shapes of the resulting isoseismals are similar, although we underestimate the extent of the MMI = 6 and 7 areas. The fragility varies significantly between different building categories (by factors of 10 to 20) and building ages (by factors of 2 to 6). The post-1940 wood-frame multi-family (≥5 units) dwellings make up the most fragile building category, and the post-1940 wood-frame single-family dwellings make up the most resistant building category.
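The normalization step behind these fragilities is simple: for each building category in a census tract, divide the count of each tag color by that category's total stock. The counts below are invented for illustration.

```python
def fragilities(tag_counts, total_buildings):
    """tag_counts: {color: number tagged}; returns {color: fraction of stock}."""
    return {color: n / total_buildings for color, n in tag_counts.items()}

# Hypothetical tract: 1100 buildings of one category, 110 of them tagged
tract = fragilities({"red": 4, "yellow": 21, "green": 85}, 1100)
untagged = 1.0 - sum(tract.values())   # stock with no inspection tag
```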

  5. Estimation of slip parameters associated with frictional heating during the 1999 Taiwan Chi-Chi earthquake by vitrinite reflectance geothermometry

    NASA Astrophysics Data System (ADS)

    Maekawa, Yuka; Hirono, Tetsuro; Yabuta, Hikaru; Mukoyoshi, Hideki; Kitamura, Manami; Ikehara, Minoru; Tanikawa, Wataru; Ishikawa, Tsuyoshi

    2014-12-01

    To estimate the slip parameters and understand the fault lubrication mechanism during the 1999 Taiwan Chi-Chi earthquake, we applied vitrinite reflectance geothermometry to samples retrieved from the Chelungpu fault. We found a marked reflectance anomaly of 1.30% ± 0.21% in the primary slip zone of the earthquake, whereas the reflectances in the surrounding deformed and host rocks were 0.45% to 0.77%. By applying a kinetic model of vitrinite thermal maturation together with a one-dimensional heat and thermal diffusion equation, we determined the shear stress and peak temperature in the slip zone during the earthquake to be 1.00 ± 0.04 MPa and 626°C ± 25°C, respectively. Taking into account the probable overestimation of the temperature owing to a mechanochemically enhanced reaction or flash heating at grain contacts, this temperature should be considered an upper limit. The lower limit was previously constrained to 400°C by studies of fluid-mobile trace-element concentrations and magnetic minerals. Therefore, we inferred that the peak temperature during the Chi-Chi earthquake was 400°C to 626°C, corresponding to an apparent friction coefficient of 0.01 to 0.06. Such low friction and the previous evidence of a high-temperature fluid suggest that thermal pressurization likely contributed to dynamic weakening during the Chi-Chi earthquake.
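The "apparent friction coefficient" quoted above is the ratio of coseismic shear stress to effective normal stress on the fault. The depth and densities below are illustrative values for the shallow Chelungpu fault, not figures taken from the paper.

```python
def apparent_friction(shear_stress_mpa, depth_m,
                      rho_rock=2600.0, rho_water=1000.0, g=9.8):
    """Shear stress divided by the buoyancy-reduced lithostatic normal stress."""
    effective_normal_mpa = (rho_rock - rho_water) * g * depth_m / 1e6
    return shear_stress_mpa / effective_normal_mpa

# 1.00 MPa of shear stress at an assumed ~1.1 km depth gives mu ~ 0.06,
# far below Byerlee-type friction (~0.6), consistent with dynamic weakening.
mu = apparent_friction(1.0, 1100.0)
```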

  6. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

    The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  7. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
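A simplified version of the impulse-response decay estimate: fit the slope of the log amplitude envelope and convert it to a loss factor via the single-mode decay model env(t) = exp(-π·f·η·t). The synthetic envelope below stands in for a measured, band-filtered impulse response.

```python
import numpy as np

def loss_factor_from_decay(t, envelope, freq_hz):
    """Loss factor eta from the slope of the log amplitude envelope,
    assuming env(t) = A * exp(-pi * freq_hz * eta * t)."""
    slope = np.polyfit(t, np.log(envelope), 1)[0]   # 1/s, negative for decay
    return -slope / (np.pi * freq_hz)

f, eta_true = 500.0, 0.02                           # Hz, dimensionless
t = np.linspace(0.0, 0.2, 2000)                     # s
envelope = np.exp(-np.pi * f * eta_true * t)        # synthetic decay envelope
eta_est = loss_factor_from_decay(t, envelope, f)    # recovers eta_true
```

Real measurements require extracting the envelope (e.g., via the analytic signal) per frequency band and choosing the fitting window carefully; coupling to connected subsystems biases the apparent decay rate, which is the sensitivity the paper examines.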

  8. Estimation of human heat loss in five Mediterranean regions.

    PubMed

    Bilgili, M; Simsek, E; Sahin, B; Yasar, A; Ozbek, A

    2015-10-01

    This study investigates the effects of seasonal weather differences on the human body's heat losses in the Mediterranean region of Turkey. The provinces of Adana, Antakya, Osmaniye, Mersin and Antalya were chosen for the research, and monthly atmospheric temperatures, relative humidity, wind speed and atmospheric pressure data from 2007 were used. In all these provinces, radiative, convective and evaporative heat losses from the human body based on skin surface and respiration were analyzed from meteorological data by using the heat balance equation. According to the results, the rate of radiative, convective and evaporative heat losses from the human body varies considerably from season to season. In all the provinces, 90% of heat loss was caused by heat transfer from the skin, with the remaining 10% taking place through respiration. Furthermore, radiative and convective heat loss through the skin reached the highest values in the winter months at approximately between 110 and 140 W/m², with the lowest values coming in the summer months at roughly 30-50 W/m². PMID:26025784
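The radiative and convective skin terms of the heat balance follow textbook forms; the emissivity, convection coefficient, and temperatures below are generic illustrative values, not the station data used in the study.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def radiative_loss(t_skin_k, t_air_k, emissivity=0.95):
    """Net radiative heat flux from skin to surroundings, W/m^2."""
    return emissivity * SIGMA * (t_skin_k**4 - t_air_k**4)

def convective_loss(t_skin_k, t_air_k, h=5.0):
    """Convective heat flux for an assumed film coefficient h, W/m^2."""
    return h * (t_skin_k - t_air_k)

# Winter-like conditions: 33 C skin against 10 C air (bare-skin idealization)
q_rad_winter = radiative_loss(306.0, 283.0)
q_conv_winter = convective_loss(306.0, 283.0)
```

Clothing insulation and wind dependence of h move real fluxes toward the 110-140 W/m² combined winter range the study reports.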

  9. Strong near-trench locking and its temporal change in the rupture area of the 2011 Tohoku-oki earthquake estimated from cumulative slip and slip vectors of interplate earthquakes

    NASA Astrophysics Data System (ADS)

    Uchida, N.; Hasegawa, A.; Matsuzawa, T.

    2012-12-01

    The 2011 Mw 9.0 Tohoku-oki earthquake is characterized by large near-trench slip that excited a disastrous tsunami. It is of great importance to estimate the coupling state near the trench, both to understand the temporal evolution of interplate coupling near the earthquake source and to assess tsunami risk along the trench. However, coupling states in near-trench areas far from land are usually not well constrained. The cumulative offset of small repeating earthquakes reflects the in situ slip history on a fault, and the slip vectors of interplate earthquakes reflect the heterogeneous distribution of coupling on the plate boundary. In this study, we use repeating-earthquake and slip-vector data to estimate spatio-temporal changes in slip and coupling in and around the source area of the Tohoku-oki earthquake near the Japan trench. The repeating-earthquake data for the 27 years before the Tohoku-oki earthquake show an absence of repeating-earthquake groups in the large-coseismic-slip area and low, variable slip rates in the moderate-coseismic-slip region surrounding it. The absence of repeaters by itself could be explained both by very weak coupling and by very strong coupling. However, the rotation of slip vectors of interplate earthquakes at the deeper extension of the large-coseismic-slip area suggests the plate boundary was locked in the near-trench area before the earthquake, consistent with the estimate by Hasegawa et al. (2012) based on stress tensor analysis of upper-plate events near the trench axis. The repeating-earthquake data, on the other hand, show small but distinct increases in slip rate in the 3-5 years before the earthquake near the area of large coseismic slip, suggesting preseismic unfastening of the locked area in the last stage of the earthquake cycle. After the Tohoku-oki earthquake, repeating-earthquake activity in the main rupture area disappeared almost completely and slip vectors of

  10. The tsunami source area of the 2003 Tokachi-oki earthquake estimated from tsunami travel times and its relationship to the 1952 Tokachi-oki earthquake

    USGS Publications Warehouse

    Hirata, K.; Tanioka, Y.; Satake, K.; Yamaki, S.; Geist, E.L.

    2004-01-01

    We estimate the tsunami source area of the 2003 Tokachi-oki earthquake (Mw 8.0) from observed tsunami travel times at 17 Japanese tide gauge stations. The estimated tsunami source area (~1.4 × 10⁴ km²) coincides with the western half of the ocean-bottom deformation area (~2.52 × 10⁴ km²) of the 1952 Tokachi-oki earthquake (Mw 8.1), previously inferred from tsunami waveform inversion. This suggests that the 2003 event ruptured only the western half of the 1952 rupture extent. The geographical distribution of the maximum tsunami heights in 2003 differs significantly from that of the 1952 tsunami, supporting this hypothesis. Analysis of first-peak tsunami travel times indicates that a major uplift of the ocean bottom occurred approximately 30 km to the NNW of the mainshock epicenter, just above a major asperity inferred from seismic waveform inversion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.
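Travel-time analyses like this rest on the shallow-water wave speed c = sqrt(g·h): back-projecting arrival times at that speed from each tide gauge outlines the source area. The depth and distance below are illustrative.

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Shallow-water (long-wave) tsunami speed, m/s."""
    return math.sqrt(g * depth_m)

def travel_time_s(distance_km, depth_m):
    """Travel time over a path of roughly constant depth, s."""
    return distance_km * 1000.0 / tsunami_speed(depth_m)

c = tsunami_speed(4000.0)        # ~198 m/s over 4 km deep ocean
t = travel_time_s(100.0, 4000.0) # ~8.4 minutes for a 100 km path
```

In practice the depth varies along each path, so back-projection integrates the local speed along the ray rather than using a single depth.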

  11. Uncertainty in Climatology-Based Estimates of Soil Water Infiltration Losses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Local climatology is often used to estimate infiltration losses at the field scale. The objective of this work was to assess the uncertainty associated with such estimates. We computed infiltration losses from the water budget of a soil layer from monitoring data on water flux values at the soil su...

  12. Estimates of stress drop and crustal tectonic stress from the 27 February 2010 Maule, Chile, earthquake: Implications for fault strength

    USGS Publications Warehouse

    Luttrell, K.M.; Tong, X.; Sandwell, D.T.; Brooks, B.A.; Bevis, M.G.

    2011-01-01

    The great 27 February 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a ~600 km length of subduction zone. In this paper, we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from interferometric synthetic aperture radar (InSAR) and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to simultaneously support observed fore-arc topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semianalytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson ratio of 0.5, and is independent of Young's modulus. This places a strict lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate the coseismic shear stress change from the Maule event ranged from −6 MPa (stress increase) to 17 MPa (stress drop), with a maximum depth-averaged crustal shear-stress drop of 4 MPa. We separately estimate that the plate-driving forces acting in the region, regardless of their exact mechanism, must contribute at least 27 MPa trench-perpendicular compression and 15 MPa trench-parallel compression. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust. Copyright 2011 by the American

  13. Optimized sensor location for estimating story-drift angle for tall buildings subject to earthquakes

    NASA Astrophysics Data System (ADS)

    Ozawa, Sayuki; Mita, Akira

    2016-04-01

    Structural Health Monitoring (SHM) is a technology that can quantitatively evaluate the extent of deterioration or damage in a building. Most SHM systems utilize only a few sensors, placed at equal intervals including the roof; however, these sensor locations have not been verified. Therefore, in this study, the optimal location of the sensors is studied for estimating the inter-story drift angle, which is used in immediate diagnosis after an earthquake. This study proposes a practical optimal sensor location method after testing all possible sensor location combinations. The simulation results for all location patterns show that placing a sensor on the roof is not always optimal. This result is practically useful, as it is difficult to place a sensor on the roof in many cases. The Modal Assurance Criterion (MAC) underlies one practical optimal sensor location method. We propose the MASS Modal Assurance Criterion (MAC*), which incorporates the mass matrix of the building into the MAC. Either the mass matrix or the stiffness matrix needs to be considered for the orthogonality of the mode vectors, a condition the normal MAC does not account for. The sensor locations determined by MAC* were superior to those from the previous method, MAC. This study thus provides useful knowledge about sensor placement for implementing SHM systems.
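The distinction between the plain MAC and a mass-weighted variant (labeled MAC* here, following the abstract) can be shown directly: mode shapes of a structure are orthogonal with respect to the mass matrix, not with respect to the plain dot product. The vectors below are constructed for illustration.

```python
import numpy as np

def mac(phi_i, phi_j):
    """Plain Modal Assurance Criterion between two mode-shape vectors."""
    return abs(phi_i @ phi_j) ** 2 / ((phi_i @ phi_i) * (phi_j @ phi_j))

def mac_star(phi_i, phi_j, mass):
    """Mass-weighted MAC: inner products taken through the mass matrix."""
    return abs(phi_i @ mass @ phi_j) ** 2 / (
        (phi_i @ mass @ phi_i) * (phi_j @ mass @ phi_j))

M = np.diag([1.0, 2.0, 1.0])            # lumped-mass matrix (illustrative)
phi_1 = np.array([1.0, 1.0, 0.0])
phi_2 = np.array([2.0, -1.0, 0.0])      # mass-orthogonal to phi_1, but not orthogonal

plain = mac(phi_1, phi_2)               # > 0: plain MAC reports false correlation
weighted = mac_star(phi_1, phi_2, M)    # 0: the modes are correctly independent
```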

  14. Earthquake Analysis.

    ERIC Educational Resources Information Center

    Espinoza, Fernando

    2000-01-01

    Indicates the importance of the development of students' measurement and estimation skills. Analyzes earthquake data recorded at seismograph stations and explains how to read and modify the graphs. Presents an activity for student evaluation. (YDS)

  15. Loss of Information in Estimating Item Parameters in Incomplete Designs

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.; Verelst, Norman D.

    2006-01-01

    In this paper, the efficiency of conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimation of the item parameters of the Rasch model in incomplete designs is investigated. The use of the concept of F-information (Eggen, 2000) is generalized to incomplete testing designs. The scaled determinant of the F-information…

  16. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
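The empirical model's functional form can be sketched as a two-parameter lognormal CDF mapping shaking intensity to a fatality rate, with expected deaths summed over the exposed population. The theta and beta values below are illustrative placeholders; PAGER calibrates them per country from past earthquake losses.

```python
import math

def fatality_rate(intensity, theta=13.2, beta=0.19):
    """Fatality rate at shaking intensity S: a lognormal CDF, Phi(ln(S/theta)/beta).
    theta and beta here are illustrative, not calibrated PAGER values."""
    return 0.5 * (1.0 + math.erf(math.log(intensity / theta)
                                 / (beta * math.sqrt(2.0))))

# Hypothetical exposure: population counts at each shaking intensity (MMI)
exposure = {6.0: 500000, 7.0: 200000, 8.0: 50000, 9.0: 10000}
deaths = sum(pop * fatality_rate(mmi) for mmi, pop in exposure.items())
```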

  17. Cascadia's Staggering Losses

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Vogt, B.

    2001-05-01

    Recent worldwide earthquakes have resulted in staggering losses. The Northridge, California; Kobe, Japan; Loma Prieta, California; Izmit, Turkey; Chi-Chi, Taiwan; and Bhuj, India earthquakes, which range from magnitudes 6.7 to 7.7, have all occurred near populated areas. These earthquakes have resulted in estimated losses between $3 and $300 billion, with tens to tens of thousands of fatalities. Subduction zones are capable of producing the largest earthquakes. The 1939 M7.8 Chilean, the 1960 M9.5 Chilean, the 1964 M9.2 Alaskan, the 1970 M7.8 Peruvian, the 1985 M7.9 Mexico City and the 2001 M7.7 Bhuj earthquakes are damaging subduction zone quakes. The Cascadia fault zone poses a tremendous hazard in the Pacific Northwest due to the ground shaking and tsunami inundation hazards combined with the population. To address the Cascadia subduction zone threat, the Oregon Department of Geology and Mineral Industries conducted a preliminary statewide loss study. The 1998 Oregon study incorporated a M8.5 quake, the influence of near surface soil effects and default building, social and economic data available in FEMA's HAZUS97 software. Direct financial losses are projected at over $12 billion. Casualties are estimated at about 13,000. Over 5,000 of the casualties are estimated to result in fatalities from hazards relating to tsunamis and unreinforced masonry buildings.

  18. Source parameters of the 2008 Bukavu-Cyangugu earthquake estimated from InSAR and teleseismic data

    NASA Astrophysics Data System (ADS)

    D'Oreye, Nicolas; González, Pablo J.; Shuler, Ashley; Oth, Adrien; Bagalwa, Louis; Ekström, Göran; Kavotha, Déogratias; Kervyn, François; Lucas, Celia; Lukaya, François; Osodundu, Etoy; Wauthier, Christelle; Fernández, José

    2011-02-01

    Earthquake source parameter determination is of great importance for hazard assessment, as well as for a variety of scientific studies concerning regional stress and strain release and volcano-tectonic interaction. This is especially true for poorly instrumented, densely populated regions such as encountered in Africa, where even the distribution of seismicity remains poorly documented. In this paper, we combine data from satellite radar interferometry (InSAR) and teleseismic waveforms to determine the source parameters of the Mw 5.9 earthquake that occurred on 2008 February 3 near the cities of Bukavu (DR Congo) and Cyangugu (Rwanda). This was the second largest earthquake ever to be recorded in the Kivu basin, a section of the western branch of the East African Rift (EAR). This earthquake is of particular interest due to its shallow depth and proximity to active volcanoes and Lake Kivu, which contains high concentrations of dissolved carbon dioxide and methane. The shallow depth and possible similarity with dyking events recognized in other parts of EAR suggested the potential association of the earthquake with a magmatic intrusion, emphasizing the necessity of accurate source parameter determination. In general, we find that estimates of fault plane geometry, depth and scalar moment are highly consistent between teleseismic and InSAR studies. Centroid-moment-tensor (CMT) solutions locate the earthquake near the southern part of Lake Kivu, while InSAR studies place it under the lake itself. CMT solutions characterize the event as a nearly pure double-couple, normal faulting earthquake occurring on a fault plane striking 350° and dipping 52° east, with a rake of -101°. This is consistent with locally mapped faults, as well as InSAR data, which place the earthquake on a fault striking 355° and dipping 55° east, with a rake of -98°. The depth of the earthquake was constrained by a joint analysis of teleseismic P and SH waves and the CMT data set, showing that

  19. Source Process of the 2010 Great Chile Earthquake (Mw8.8) Estimated Using Observed Tsunami Waveforms

    NASA Astrophysics Data System (ADS)

    Tanioka, Y.; Gusman, A. R.

    2010-12-01

    The great earthquake, Mw 8.8, occurred in Chile on 27 February 2010 at 06:34:14 UTC. The number of casualties from this earthquake reached 800, and more than 500 of them were killed by tsunamis. The earthquake generated a large tsunami that propagated across the Pacific and reached coasts around the Pacific, including Hawaii, Japan, and Alaska. The maximum run-up height of the tsunami was 28 m in Chile. The tsunami was observed at the DART real-time tsunami monitoring systems installed in the Pacific by NOAA-PMEL and also at tide gauges around the Pacific. In this paper, the tsunami waveforms observed at 9 DART stations, 32412, 51406, 51426, 54401, 43412, 46412, 46409, 46403, and 21413, are used to estimate the slip distribution of the 2010 Chile earthquake. The source area of 500 km x 150 km is divided into 30 subfaults of 50 km x 50 km. The Global CMT solution gives the focal mechanism of the earthquake as strike = 18°, dip = 18°, rake = 112°. Those fault parameters are assumed for all subfaults. The tsunami is numerically computed on actual bathymetry. The finite-difference computation for the linear long-wave equations is carried out over the whole Pacific. The grid size is 5 minutes, about 9 km. Tsunami waveforms at the 9 DART stations are computed from each subfault with a unit amount of slip and used as the Green's functions for the inversion. The result of the tsunami inversion indicates that a large slip of more than 10 m occurred in the source area from about 150 km northeast of the epicenter to about 200 km southwest of the epicenter. The maximum slip is estimated to be 19 m at a subfault located southwest of the epicenter. The total rupture length is found to be about 350-400 km. The result also indicates a bilateral rupture process for the great Chile earthquake. The total seismic moment calculated from the slip distribution is 2.6 x 10^22 N·m (Mw 8.9), assuming a rigidity of 4 x 10^10 N/m². This…
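The quoted Mw 8.9 can be reproduced from the total seismic moment with the standard Hanks-Kanamori relation, and the moment itself is the rigidity times the slip-weighted subfault area. A minimal check (the slip values per subfault are not listed in the abstract, so only the moment-to-magnitude step is verified against the quoted numbers):

```python
from math import log10

MU = 4.0e10                  # rigidity (N/m^2), as assumed in the abstract
SUBFAULT_AREA = 50e3 * 50e3  # one 50 km x 50 km subfault (m^2)

def seismic_moment(slips_m):
    # M0 = mu * sum(A_i * s_i) over the subfaults, in N·m
    return MU * SUBFAULT_AREA * sum(slips_m)

def moment_magnitude(m0_nm):
    # Hanks-Kanamori: Mw = (2/3) * (log10(M0) - 9.1), M0 in N·m
    return (2.0 / 3.0) * (log10(m0_nm) - 9.1)

# The abstract's total moment of 2.6e22 N·m corresponds to Mw 8.9:
print(round(moment_magnitude(2.6e22), 1))  # → 8.9
```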

  20. Loss estimation and damage forecast using database provided

    NASA Astrophysics Data System (ADS)

    Pyrchenko, V.; Byrova, V.; Petrasov, A.

    2009-04-01

    A wide spectrum of natural hazards is observed across the territory of Russia. This makes it necessary to investigate the numerous occurrences of dangerous natural processes and to study the mechanisms of their development and their interaction with each other (synergetic amplification or the emergence of new hazards), with the purpose of forecasting possible losses. Staff of the Laboratory of Geological Risk Analysis of IEG RAS have created a database of natural-hazard occurrences in the territory of Russia, containing information on 1310 events during 1991-2008. The wide range of sources used posed certain difficulties in building the database and required the development of a special new technique for unifying information collected at different times. One element of this technique is a classification of the negative consequences of natural hazards, accounting for deaths, injuries, other victims, and direct economic damage. The database has made it possible to track the dynamics of natural hazards and the emergency situations (ES) they cause over the period considered, and to identify the patterns of their development across the territory of Russia in time and space. This provides the basis for theoretical and methodological foundations for forecasting possible losses, with a certain degree of probability, for Russia and for its separate regions, which in the future will support adequate, prompt, and efficient pre-emptive decision-making.

  1. A teleseismic study of the 2002 Denali fault, Alaska, earthquake and implications for rapid strong-motion estimation

    USGS Publications Warehouse

    Ji, C.; Helmberger, D.V.; Wald, D.J.

    2004-01-01

    Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, developed in phases, progressively improve the fit to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography and further (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.

  2. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably experienced an event similar to the 2008 quake in the past, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
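The arithmetic behind a characteristic-earthquake recurrence estimate is simply the moment released by one event divided by the moment accumulation rate. A sketch, assuming the Hanks-Kanamori moment for a generic Mw 7.9 event (the paper itself uses a geodetically derived seismogenic model, so its 3900 ± 400 yr figure differs somewhat from this back-of-the-envelope value):

```python
def recurrence_interval_yrs(event_moment_nm, moment_rate_nm_per_yr):
    # Characteristic-earthquake view: time needed to re-accumulate the
    # moment released by one event, at a constant accumulation rate.
    return event_moment_nm / moment_rate_nm_per_yr

# Hanks-Kanamori moment for an Mw 7.9 event: M0 = 10**(1.5*Mw + 9.1) N·m
m0 = 10 ** (1.5 * 7.9 + 9.1)   # ≈ 8.9e20 N·m
rate = 2.7e17                  # N·m/yr, the moment rate from the abstract
print(round(recurrence_interval_yrs(m0, rate)), "yr")  # roughly 3.3 kyr
```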

  4. Workshop on continuing actions to reduce potential losses from future earthquakes in the Northeastern United States: proceedings of conference XXI

    SciTech Connect

    Hays, W.W.; Gori, P.L.

    1983-01-01

    This workshop was designed to define the earthquake threat in the eastern United States and to improve earthquake preparedness. Four major themes were addressed: (1) the nature of the earthquake threat in the northeast and what can be done to improve the state of preparedness; (2) increasing public awareness and concern for the earthquake hazard in the northeast; (3) improving the state of preparedness through scientific, engineering, and social science research; and (4) possible functions of one or more seismic safety organizations. Papers have been abstracted separately. (ACR)

  5. Combined UAVSAR and GPS Estimates of Fault Slip for the M 6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Donnellan, A.; Parker, J. W.; Hawkins, B.; Hensley, S.; Jones, C. E.; Owen, S. E.; Moore, A. W.; Wang, J.; Pierce, M. E.; Rundle, J. B.

    2014-12-01

    The South Napa to Santa Rosa area has been observed with NASA's UAVSAR since late 2009 as part of an experiment to monitor areas identified as having a high probability of an earthquake. The M 6.0 South Napa earthquake occurred on 24 August 2014. The area was flown on 29 May 2014, preceding the earthquake, and again on 29 August 2014, five days after the earthquake. The UAVSAR results show slip on a single fault at the south end of the rupture near the epicenter of the event. The rupture branches out into multiple faults further north near the Napa area. A combined inversion of rapid GPS results and the unwrapped UAVSAR interferogram indicates nearly pure strike-slip motion. Using this assumption, the UAVSAR data show horizontal right-lateral slip across the fault of 19 cm at the south end of the rupture, increasing to 70 cm northward over a distance of 6.5 km. The joint inversion indicates that slip of ~30 cm on a network of sub-parallel faults is concentrated in a zone about 17 km long. The lower depths of the faults are 5-8.5 km. The eastern two sub-parallel faults break the surface, while three faults to the west are buried at depths ranging from 2-6 km, with deeper depths to the north and west. The geodetic moment release is equivalent to a M 6.1 event. Additional ruptures are observed in the interferogram, but the inversions suggest that they represent superficial slip that does not contribute to the overall moment release.

  6. Earthquake related VLF activity and Electron Precipitation as a Major Agent of the Inner Radiation Belt Losses

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Georgios C.; Sidiropoulos, Nikolaos; Barlas, Georgios

    2015-04-01

    The radiation belt electron precipitation (RBEP) into the topside ionosphere is a phenomenon that has been known for several decades. However, the source and loss mechanisms of the inner radiation belt, including RBEP, are still not well understood. Here we present the results of a systematic study of RBEP observations, as obtained from the satellite DEMETER and the series of POES satellites, in comparison with variations in seismic activity. We found that a type of RBEP burst lasting ~1-3 min presents special characteristics in the inner region of the inner radiation belt before large (M >~7, or even M >~5) earthquakes (EQs), for instance in its: (a) flux-time profiles, (b) energy spectra, (c) electron flux temporal evolution, (d) spatial distributions, (e) broadband VLF activity some days before an EQ, and (f) cessation a few hours before the EQ occurrence above the epicenter. In this study we present results from both case and statistical studies which provide significant evidence that, among EQs, lightning and Earth-based transmitters, strong seismic activity during a substorm makes the main contribution to the long-lasting (~1-3 min) RBEP events at middle latitudes.

  7. Defeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by…

  8. Equations for estimating horizontal response spectra and peak acceleration from western North American earthquakes: A summary of recent work

    USGS Publications Warehouse

    Boore, D.M.; Joyner, W.B.; Fumal, T.E.

    1997-01-01

    In this paper we summarize our recently-published work on estimating horizontal response spectra and peak acceleration for shallow earthquakes in western North America. Although none of the sets of coefficients given here for the equations are new, for the convenience of the reader and in keeping with the style of this special issue, we provide tables for estimating random horizontal-component peak acceleration and 5 percent damped pseudo-acceleration response spectra in terms of the natural, rather than common, logarithm of the ground-motion parameter. The equations give ground motion in terms of moment magnitude, distance, and site conditions for strike-slip, reverse-slip, or unspecified faulting mechanisms. Site conditions are represented by the shear velocity averaged over the upper 30 m, and recommended values of average shear velocity are given for typical rock and soil sites and for site categories used in the National Earthquake Hazards Reduction Program's recommended seismic code provisions. In addition, we stipulate more restrictive ranges of magnitude and distance for the use of our equations than in our previous publications. Finally, we provide tables of input parameters that include a few corrections to site classifications and earthquake magnitude (the corrections made a small enough difference in the ground-motion predictions that we chose not to change the coefficients of the prediction equations).
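The equations summarized above share the functional form below: a magnitude term, a geometric-spreading term in distance, and a site term in average shear velocity. The sketch evaluates that form; the coefficients are rough placeholders for illustration only, and the published tables should be consulted for the actual values.

```python
from math import sqrt, log, exp

def gmpe_pga_g(mag, rjb_km, vs30, coeffs):
    # Functional form used by Boore, Joyner & Fumal (1997):
    #   ln(Y) = b1 + b2*(M-6) + b3*(M-6)**2 + b5*ln(r) + bV*ln(Vs30/VA)
    # with r = sqrt(Rjb**2 + h**2); Y is PGA in g here.
    b1, b2, b3, b5, bV, VA, h = coeffs
    r = sqrt(rjb_km ** 2 + h ** 2)
    ln_y = (b1 + b2 * (mag - 6.0) + b3 * (mag - 6.0) ** 2
            + b5 * log(r) + bV * log(vs30 / VA))
    return exp(ln_y)

# Placeholder coefficients (b1, b2, b3, b5, bV, VA, h) for illustration only
demo = (-0.3, 0.53, 0.0, -0.78, -0.37, 1400.0, 5.6)
pga_rock = gmpe_pga_g(7.0, 20.0, 760.0, demo)  # stiff site
pga_soil = gmpe_pga_g(7.0, 20.0, 255.0, demo)  # soft site
print(f"rock: {pga_rock:.3f} g, soil: {pga_soil:.3f} g")
```

Because bV is negative, lower Vs30 (softer soil) yields larger predicted motion, which is the behavior the site term is meant to capture.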

  9. Revisiting borehole strain, typhoons, and slow earthquakes using quantitative estimates of precipitation-induced strain changes

    NASA Astrophysics Data System (ADS)

    Hsu, Ya-Ju; Chang, Yuan-Shu; Liu, Chi-Ching; Lee, Hsin-Ming; Linde, Alan T.; Sacks, Selwyn I.; Kitagawa, Genshio; Chen, Yue-Gau

    2015-06-01

    Taiwan experiences high deformation rates, particularly along its eastern margin where a shortening rate of about 30 mm/yr is experienced in the Longitudinal Valley and the Coastal Range. Four Sacks-Evertson borehole strainmeters have been installed in this area since 2003. Liu et al. (2009) proposed that a number of strain transient events, primarily coincident with low barometric pressure during passages of typhoons, were due to deep-triggered slow slip. Here we extend that investigation with a quantitative analysis of the strain responses to precipitation as well as barometric pressure and the Earth tides in order to isolate tectonic source effects. Estimates of the strain responses to barometric pressure and groundwater level changes for the different stations vary over the ranges -1 to -3 nanostrain/millibar (hPa) and -0.3 to -1.0 nanostrain/hPa, respectively, consistent with theoretical values derived using Hooke's law. Liu et al. (2009) noted that during some typhoons, including at least one with very heavy rainfall, the observed strain changes were consistent with only barometric forcing. By considering a more extensive data set, we now find that the strain response to rainfall is about -5.1 nanostrain/hPa. A larger strain response to rainfall compared to that to air pressure and water level may be associated with an additional strain from fluid pressure changes that take place due to infiltration of precipitation. Using a state-space model, we remove the strain response to rainfall, in addition to those due to air pressure changes and the Earth tides, and investigate whether the corrected strain changes are related to environmental disturbances or to motions of tectonic origin. The majority of strain changes attributed to slow earthquakes seem rather to be associated with environmental factors. However, some events show remaining strain changes after all corrections. These events include strain polarity changes during passages of typhoons (a characteristic that is…

  10. An Optimum Model to Estimate Path Losses for 400 MHz Band Land Mobile Radio

    NASA Astrophysics Data System (ADS)

    Miyashita, Michifumi; Terada, Takashi; Serizawa, Yoshizumi

    It is difficult to estimate path loss for land mobile radio using a single path loss model, such as the diffraction model or the Okumura model alone, when mobile radios are used over a widespread area. Furthermore, high accuracy in path loss estimation is needed when the radio system is digitized, because degradation of CNR due to interference deteriorates communications. In this paper, conventional path loss models, i.e. the diffraction model, the Okumura model and the two-ray model, were evaluated against 400 MHz land mobile radio field measurements, and a method of improving path loss estimation by using each of these conventional models selectively was proposed. The proportion of estimates with error between -10 dB and +10 dB for the method applying the correction factors derived from our field measurements was 71.41%, while the proportions for the conventional diffraction and Okumura models without any correction factors were 26.71% and 49.42%, respectively.
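The core idea, applying the conventional models selectively rather than one model everywhere, can be sketched as follows. The Okumura model is represented here by its common Hata closed-form approximation (valid at 400 MHz), and the selection rule and default antenna heights are assumptions for illustration; the paper's actual selection criteria and correction factors come from its field measurements.

```python
from math import log10

def hata_loss_db(f_mhz, d_km, hb_m=30.0, hm_m=1.5):
    # Okumura-Hata median path loss (urban, small/medium city), 150-1500 MHz:
    # base-station height hb, mobile height hm, distance d in km.
    a_hm = (1.1 * log10(f_mhz) - 0.7) * hm_m - (1.56 * log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * log10(f_mhz) - 13.82 * log10(hb_m) - a_hm
            + (44.9 - 6.55 * log10(hb_m)) * log10(d_km))

def two_ray_loss_db(d_m, ht_m=30.0, hr_m=1.5):
    # Plane-earth two-ray model, valid well beyond the breakpoint distance
    return 40.0 * log10(d_m) - 20.0 * log10(ht_m * hr_m)

def select_loss_db(f_mhz, d_km, obstructed):
    # Assumed selection rule: an obstructed path uses the empirical model
    # (a diffraction model would be substituted here), a clear path the two-ray model.
    if obstructed:
        return hata_loss_db(f_mhz, d_km)
    return two_ray_loss_db(d_km * 1000.0)

print(round(hata_loss_db(400.0, 10.0), 1), "dB")
```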

  11. New Method for Estimating Landslide Losses for Major Winter Storms in California.

    NASA Astrophysics Data System (ADS)

    Wills, C. J.; Perez, F. G.; Branum, D.

    2014-12-01

    We have developed a prototype system for estimating the economic costs of landslides due to winter storms in California. This system uses some of the basic concepts and estimates of the value of structures from the HAZUS program developed for FEMA. Using the only relatively complete landslide loss data set that we could obtain, gathered by the City of Los Angeles in 1978, we have developed relations between landslide susceptibility and loss ratio for private property (represented as the value of wood frame structures from HAZUS). The landslide loss ratios estimated from the Los Angeles data are calibrated using more generalized data from the 1982 storms in the San Francisco Bay area to develop relationships that can be used to estimate loss for any value of 2-day or 30-day rainfall averaged over a county. The current estimates for major storms are long projections from very small data sets, subject to very large uncertainties, and so provide only a very rough estimate of the landslide damage to structures and infrastructure on hill slopes. More importantly, the system can be extended and improved with additional data and used to project landslide losses in future major winter storms. The key features of this system—the landslide susceptibility map, the relationship between susceptibility and loss ratio, and the calibration of estimates against losses in past storms—can all be improved with additional data. Most importantly, this study highlights the importance of comprehensive studies of landslide damage. Detailed surveys of landslide damage following future storms that include locations and amounts of damage for all landslides within an area are critical for building a well-calibrated system to project future landslide losses. Without an investment in post-storm landslide damage surveys, it will not be possible to improve estimates of the magnitude or distribution of landslide damage, which can range up to billions of dollars.
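The structure of such a system can be sketched as a loss-ratio lookup applied to structure values, with the ratio driven by susceptibility class and rainfall. Every number below is a made-up placeholder, since the real relations are calibrated from the 1978 Los Angeles and 1982 Bay Area data.

```python
# Hypothetical loss ratios per landslide-susceptibility class
# (placeholders, not the calibrated relations from the study)
BASE_RATIO = {"low": 1e-5, "moderate": 1e-4, "high": 1e-3}

def loss_ratio(susceptibility, rain_2day_mm):
    # Linear scaling with 2-day rainfall is an illustrative assumption only;
    # the study relates loss ratio to 2-day or 30-day county-averaged rainfall.
    return min(1.0, BASE_RATIO[susceptibility] * (rain_2day_mm / 100.0))

def storm_loss(parcels, rain_2day_mm):
    # parcels: iterable of (structure value in dollars, susceptibility class)
    return sum(value * loss_ratio(susc, rain_2day_mm) for value, susc in parcels)

parcels = [(400_000, "high"), (350_000, "moderate"), (300_000, "low")]
print(storm_loss(parcels, rain_2day_mm=250.0))
```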

  12. Estimation of furrow irrigation sediment loss using an artificial neural network

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The area irrigated by furrow irrigation in the U.S. has been steadily decreasing but still represents about 20% of the total irrigated area in the U.S. Furrow irrigation sediment loss is a major water quality issue and a method for estimating sediment loss is needed to quantify the environmental imp...

  13. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  15. Estimation of soil loss by water erosion in the Chinese Loess Plateau using Universal Soil Loss Equation and GRACE

    NASA Astrophysics Data System (ADS)

    Schnitzer, S.; Seitz, F.; Eicker, A.; Güntner, A.; Wattenbach, M.; Menzel, A.

    2013-06-01

    For the estimation of soil loss by erosion in the strongly affected Chinese Loess Plateau we applied the Universal Soil Loss Equation (USLE) using a number of input data sets (monthly precipitation, soil types, digital elevation model, land cover and soil conservation measures). Calculations were performed in ArcGIS and SAGA. The large-scale soil erosion in the Loess Plateau results in a strong non-hydrological mass change. In order to investigate whether the resulting mass change from USLE may be validated by the gravity field satellite mission GRACE (Gravity Recovery and Climate Experiment), we processed different GRACE level-2 products (ITG, GFZ and CSR). The mass variations estimated in the GRACE trend were relatively close to the observed sediment yield data of the Yellow River. However, the soil losses resulting from two USLE parameterizations were comparatively high since USLE does not consider the sediment delivery ratio. Most eroded soil stays in the study area and only a fraction is exported by the Yellow River. Thus, the resultant mass loss appears to be too small to be resolved by GRACE.
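The USLE itself is a simple product of factors, which is what a GIS implementation evaluates per raster cell. The sketch below shows the form; the factor values are illustrative for a loess hillslope, not those used in the study.

```python
def usle_soil_loss(R, K, LS, C, P):
    # Universal Soil Loss Equation: A = R * K * LS * C * P
    #   A  : average annual soil loss (t/ha/yr)
    #   R  : rainfall erosivity factor (from monthly precipitation)
    #   K  : soil erodibility factor (from soil type)
    #   LS : slope length-steepness factor (from the DEM)
    #   C  : cover-management factor (from land cover)
    #   P  : support-practice factor (from conservation measures)
    return R * K * LS * C * P

# Illustrative factor values (assumed, not from the study)
print(usle_soil_loss(R=2200.0, K=0.045, LS=4.5, C=0.25, P=0.8), "t/ha/yr")
```

Note that, as the abstract points out, USLE predicts gross on-site erosion; multiplying by a sediment delivery ratio would be needed before comparing with river sediment yield.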

  16. Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss

    USGS Publications Warehouse

    Potapov, P.; Hansen, M.C.; Stehman, S.V.; Loveland, T.R.; Pittman, K.

    2008-01-01

    Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% to 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss. © 2008 Elsevier Inc.
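The block-stratification scheme leads to a standard stratified estimator: each stratum's sample proportion is weighted by its areal share, and the standard error combines the within-stratum variances. A sketch with made-up strata (the weights, means, variances, and sample sizes are illustrative, not the study's):

```python
from math import sqrt

def stratified_estimate(strata):
    # strata: list of (W_h, p_h, s2_h, n_h) per stratum, where
    #   W_h  : stratum weight (share of total area, weights sum to 1)
    #   p_h  : sample mean loss proportion in the stratum
    #   s2_h : sample variance within the stratum
    #   n_h  : number of sampled blocks in the stratum
    est = sum(W * p for W, p, s2, n in strata)
    se = sqrt(sum(W ** 2 * s2 / n for W, p, s2, n in strata))
    return est, se

# Hypothetical high / medium / low likelihood strata
strata = [(0.10, 0.080, 0.0040, 40),
          (0.25, 0.020, 0.0010, 40),
          (0.65, 0.002, 0.0001, 38)]
est, se = stratified_estimate(strata)
print(f"loss = {est:.4f} (SE {se:.4f})")
```

Concentrating the sample in the high-likelihood stratum is what makes the biome-wide standard error small relative to the estimate, which is the point of the MODIS-based stratification.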

  17. Estimating the similarity of earthquake focal mechanisms from waveform cross-correlation in regions of minimal local azimuthal station coverage

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Martynov, V.; Bowen, J.; Vernon, F.; Eakins, J.

    2002-12-01

    In the Xinjiang province of China, ~2000 earthquakes were recorded by the Tien Shan network during 1997-1999 that exhibit a clear spatial progression of seismicity. This progression, which is confined to a 50 km diameter region, is undetectable in other data catalogs, both global (e.g., REB, PDE, CMT) and local (KIS). The two largest earthquakes in this sequence were the M6.1 August 2, 1998, and the M6.2 August 27, 1998, earthquakes. According to the Harvard moment tensor solutions, both events ruptured faults that trend parallel to the geologic structures in the region (~N55W). However, the August 27 event was a vertical strike slip event while the August 2 event ruptured a dipping fault and had a normal component of slip. These slip directions are counter to what we expect for this fold-and-thrust-belt, which typically has earthquakes with thrust mechanisms. Often seismological researchers make the assumption that aftershocks have the same focal mechanism as their associated mainshocks and/or assume all aftershock fault planes are similarly oriented. We test this assumption by examining the similarity of aftershock mechanisms from the August 2nd and 27th mainshocks. It is difficult to determine focal mechanisms from inversions of full seismic waveforms because the velocity model in the Tien Shan region is so complicated a 3D velocity model would be required. Also, the azimuthal station coverage is poor. Alternative, it impossible to determine accurate focal mechanisms from first motion data because the closest seismic stations have weak and complicated first arrivals. Our approach more easily determines the similarity of earthquake focal mechanisms using waveform cross-correlation. In this way information from the full waveform is utilized, and there is no need to make estimates of the complicated velocity structure. In general, we find there is minimal correlation between pairs of event waveforms (filter 1-8 Hz) within each aftershock sequence. 
For example, at
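The similarity measure described in this abstract can be sketched with a normalized waveform cross-correlation; the synthetic signals and the `max_normalized_xcorr` helper below are illustrative stand-ins, not the authors' actual processing chain:

```python
import numpy as np

def max_normalized_xcorr(a, b):
    """Peak of the normalized cross-correlation between two
    equal-length waveforms (1.0 = identical shape at some lag)."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

# Two synthetic "waveforms": identical shape, different amplitude,
# as expected for co-located events with the same focal mechanism.
t = np.linspace(0.0, 1.0, 500)
w1 = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
w2 = 2.5 * w1

print(round(max_normalized_xcorr(w1, w2), 3))  # -> 1.0
```

Events with different mechanisms or paths would score well below 1, which is the "minimal correlation" result the abstract reports.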

  18. Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.

    2012-01-01

    In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.

  19. Fuzzy Discrimination Analysis Method for Earthquake Energy K-Class Estimation with respect to Local Magnitude Scale

    NASA Astrophysics Data System (ADS)

    Mumladze, T.; Gachechiladze, J.

    2014-12-01

    The purpose of the present study is to establish a relation between the earthquake energy K-class (the relative energy characteristic), defined as the logarithm of the seismic wave energy E in joules obtained from analog station data, and the local (Richter) magnitude ML obtained from digital seismograms. As these data contain uncertainties, the effective tools of fuzzy discrimination analysis are suggested for subjective estimates. Applying fuzzy analysis methods is an innovative approach to the complicated problem of constructing a uniform energy scale through the whole earthquake catalogue; it also avoids many of the data collection problems associated with probabilistic approaches, and it can handle incomplete information, partial inconsistency and fuzzy descriptions of data in a natural way. Another important task is to obtain the frequency-magnitude relation based on the K parameter, calculate the Gutenberg-Richter parameters (a, b) and examine seismic activity in Georgia. Earthquake data files are used for the periods 1985-1990 and 2004-2009 for the area φ = 41°-43.5°, λ = 41°-47°.
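A frequency-magnitude step like the one described, estimating the Gutenberg-Richter b-value with Aki's maximum-likelihood formula, can be sketched as follows; the synthetic catalogue and completeness magnitude are assumptions for illustration, not the Georgian data set:

```python
import numpy as np

def gutenberg_richter_params(mags, mc):
    """Aki (1965) maximum-likelihood b-value and the matching a-value
    for log10 N(>=m) = a - b*m, using events above completeness mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - mc)
    a = np.log10(len(m)) + b * mc
    return a, b

# Synthetic catalogue drawn from a G-R law with b = 1.0 above mc = 4.5.
rng = np.random.default_rng(0)
mags = 4.5 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
a, b = gutenberg_richter_params(mags, 4.5)
print(round(b, 2))  # close to the true value of 1.0
```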

  20. GPS estimates of microplate motions, northern Caribbean: evidence for a Hispaniola microplate and implications for earthquake hazard

    NASA Astrophysics Data System (ADS)

    Benford, B.; DeMets, C.; Calais, E.

    2012-09-01

    We use elastic block modelling of 126 GPS site velocities from Jamaica, Hispaniola, Puerto Rico and other islands in the northern Caribbean to test for the existence of a Hispaniola microplate and estimate angular velocities for the Gônave, Hispaniola, Puerto Rico-Virgin Islands and two smaller microplates relative to each other and the Caribbean and North America plates. A model in which the Gônave microplate spans the whole plate boundary between the Cayman spreading centre and Mona Passage west of Puerto Rico is rejected at a high confidence level. The data instead require an independently moving Hispaniola microplate between the Mona Passage and a likely diffuse boundary within or offshore from western Hispaniola. Our updated angular velocities predict 6.8 ± 1.0 mm yr-1 of left-lateral slip along the seismically hazardous Enriquillo-Plantain Garden fault zone of southwest Hispaniola, 9.8 ± 2.0 mm yr-1 of slip along the Septentrional fault of northern Hispaniola and ~14-15 mm yr-1 of left-lateral slip along the Oriente fault south of Cuba. They also predict 5.7 ± 1.0 mm yr-1 of fault-normal motion in the vicinity of the Enriquillo-Plantain Garden fault zone, faster than previously estimated and possibly accommodated by folds and faults in the Enriquillo-Plantain Garden fault zone borderlands. Our new and a previous estimate of Gônave-Caribbean plate motion suggest that enough elastic strain accumulates to generate one to two Mw ~7 earthquakes per century along the Enriquillo-Plantain Garden and nearby faults of southwest Hispaniola. That the 2010 M = 7.0 Haiti earthquake ended a 240-yr-long period of seismic quiescence in this region raises concerns that it could mark the onset of a new earthquake sequence that will relieve elastic strain that has accumulated since the late 18th century.
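The closing seismic-moment-budget argument can be illustrated with rough numbers; the shear modulus and fault dimensions below are generic crustal assumptions, not values from the study, so the result is only order-of-magnitude:

```python
# Rough seismic-moment budget for the Enriquillo-Plantain Garden fault
# zone. The fault geometry here is an illustrative assumption.
MU = 3.0e10          # shear modulus, Pa (typical crustal value)
LENGTH_M = 250e3     # assumed fault length, m
DEPTH_M = 15e3       # assumed locking depth, m
SLIP_RATE = 6.8e-3   # m/yr, left-lateral rate from the GPS model

moment_rate = MU * LENGTH_M * DEPTH_M * SLIP_RATE  # N*m accumulated per year

def moment_from_mw(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

# How many Mw 7.0 earthquakes per century would relieve this strain?
events_per_century = 100 * moment_rate / moment_from_mw(7.0)
print(round(events_per_century, 1))  # on the order of one to two
```

Under these assumptions the budget comes out near two Mw 7 events per century, consistent with the abstract's estimate.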

  1. Toward Reconciling Magnitude Discrepancies Estimated from Paleoearthquake Data: A New Approach for Predicting Earthquake Magnitudes from Fault Segment Lengths

    NASA Astrophysics Data System (ADS)

    Carpenter, N. S.; Payne, S. J.; Schafer, A. L.

    2011-12-01

    We recognize a discrepancy in magnitudes estimated for several Basin and Range faults in the Intermountain Seismic Belt, U.S.A. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths, Lseg, where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements, D, along these same segments. For self-similarity, empirical relationships (e.g. Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e., M ~ Lseg should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data, and the parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating Lseg with surface rupture length, SRL. Many large historical events produced secondary and/or sympathetic faulting (e.g. the 1983 Borah Peak, Idaho earthquake), which is included in the measurement of SRL and used to derive the empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using Lseg as SRL leads to an underestimation of magnitude and hence the M ~ Lseg and M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude, Mw, and length, where length is Lseg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. 
The preliminary Mw ~ Lseg results are strikingly consistent with Mw ~ D calculations using paleoearthquake data for
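The kind of M ~ SRL calculation the authors criticize can be sketched with the Wells and Coppersmith (1994) all-slip-type regression; the 30 km and 60 km lengths below are hypothetical, chosen to show how substituting a segment length for the full rupture length lowers the predicted magnitude:

```python
import math

def wc94_mw_from_srl(srl_km):
    """Wells & Coppersmith (1994) all-slip-type regression of moment
    magnitude on surface rupture length in km: M = 5.08 + 1.16*log10(SRL)."""
    return 5.08 + 1.16 * math.log10(srl_km)

# Using a 30 km segment length in place of a ~60 km SRL (segment plus
# secondary/sympathetic faulting) underestimates the magnitude:
print(round(wc94_mw_from_srl(30.0), 2))  # -> 6.79
print(round(wc94_mw_from_srl(60.0), 2))  # -> 7.14
```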

  2. Coseismic Fault Slip of the September 16, 2015 Mw 8.3 Illapel, Chile Earthquake Estimated from InSAR Data

    NASA Astrophysics Data System (ADS)

    Zhang, Yingfeng; Zhang, Guohong; Hetland, Eric A.; Shan, Xinjian; Wen, Shaoyan; Zuo, Ronghu

    2016-04-01

    The complete surface deformation of the 2015 Mw 8.3 Illapel, Chile earthquake is obtained from SAR interferograms for descending and ascending Sentinel-1 orbits. We find that the Illapel event is predominantly thrust, as expected for an earthquake on the interface between the Nazca and South America plates, with a slight right-lateral strike-slip component. The maximum thrust slip and right-lateral strike slip reach 8.3 and 1.5 m, respectively, both located at a depth of 8 km, northwest of the epicenter. The total estimated seismic moment is 3.28 × 10^21 N·m, corresponding to a moment magnitude of Mw 8.27. In our model, the rupture breaks all the way up to the seafloor at the trench, which is consistent with the destructive tsunami following the earthquake. We also find that the slip distribution correlates closely with previous estimates of the interseismic locking distribution. We argue that positive Coulomb stress changes caused by the Illapel earthquake may favor earthquakes on the extensional faults in this area. Finally, based on our inferred coseismic slip model and Coulomb stress calculation, we suggest that the subduction interface that last slipped in the 1922 Mw 8.4 Vallenar earthquake may be nearing the end of its period of seismic quiescence, making the earthquake potential in this region a pressing concern.
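The quoted moment-to-magnitude conversion follows the standard Hanks-Kanamori relation; a minimal check of the abstract's numbers (any small difference from the stated Mw 8.27 presumably reflects rounding conventions):

```python
import math

def mw_from_moment(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks & Kanamori, 1979):
    Mw = (2/3) * (log10 M0 - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(mw_from_moment(3.28e21), 2))  # -> 8.28
```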

  3. Napa Earthquake impact on water systems

    NASA Astrophysics Data System (ADS)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24, 2014, at 3 a.m. local time, with a magnitude of 6.0. It was the largest earthquake in the San Francisco Bay Area since the 1989 Loma Prieta earthquake. Economic loss topped $1 billion. Winemakers cleaned up and estimated the damage to tourism; around 15,000 cases of cabernet poured into the garden at the Hess Collection. Earthquakes can raise water-pollution risks and could cause a water crisis. California has suffered water shortages in recent years, so understanding how to prevent groundwater and surface-water pollution from earthquakes is valuable. This research gives a clear view of the drinking-water system in California, pollution of river systems, and an estimation of earthquake impacts on water supply. The Sacramento-San Joaquin River Delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water export, and saltwater intrusion has reduced freshwater outflows. Strong shaking from a nearby earthquake can cause liquefaction of saturated, loose, sandy soils and could potentially damage major Delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: a similar event could damage the freshwater supply system.

  4. Reassessment of liquefaction potential and estimation of earthquake- induced settlements at Paducah Gaseous Diffusion Plant, Paducah, Kentucky. Final report

    SciTech Connect

    Sykora, D.W.; Yule, D.E.

    1996-04-01

    This report documents a reassessment of liquefaction potential and estimation of earthquake-induced settlements for the U.S. Department of Energy (DOE), Paducah Gaseous Diffusion Plant (PGDP), located southwest of Paducah, KY. The U.S. Army Engineer Waterways Experiment Station (WES) was authorized to conduct this study from FY91 to FY94 by the DOE, Oak Ridge Operations (ORO), Oak Ridge, TN, through Inter-Agency Agreement (IAG) No. DE-AI05-91OR21971. The study was conducted under the Gaseous Diffusion Plant Safety Analysis Report (GDP SAR) Program.

  5. Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions

    NASA Astrophysics Data System (ADS)

    White, Randall; McCausland, Wendy

    2016-01-01

    We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly, we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity, including: swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and large non-double-couple components to focal mechanisms. Most importantly, we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity from log10 V = 0.77 log10 ΣMoment - 5.32, with volume, V, in cubic meters and seismic moment in newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
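The paper's volume regression can be applied directly; the example cumulative moment below is hypothetical:

```python
import math

def intruded_volume_m3(cumulative_moment_nm):
    """Intruded magma volume from cumulative VT seismic moment, using
    the abstract's regression: log10 V = 0.77*log10(sum M0) - 5.32,
    with V in m^3 and moment in N*m."""
    return 10 ** (0.77 * math.log10(cumulative_moment_nm) - 5.32)

# e.g. a hypothetical swarm with 1e15 N*m of cumulative VT moment:
print(f"{intruded_volume_m3(1e15):.3g} m^3")  # order of 1e6 m^3
```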

  6. Applying a fuzzy-set-based method for robust estimation of coupling loss factors

    NASA Astrophysics Data System (ADS)

    Nunes, R. F.; Ahmida, K. M.; Arruda, J. R. F.

    2007-10-01

    Finite element models have been used by many authors to provide accurate estimates of coupling loss factors. Although much progress has been achieved in this area, little attention has been paid to the influence of uncertain parameters in the finite element model used to estimate these factors. It is well known that, in the mid-frequency range, uncertainty is a major issue. In this context, a spectral element method combined with a special implementation of a fuzzy-set-based method, called the transformation method, is proposed as an alternative way to compute coupling loss factors. The proposed technique is applied to a frame-type junction consisting of two beams connected at an arbitrary angle. Two problems are investigated. In the first, the influence of the confidence intervals of the coupling loss factors on the energy envelopes estimated for a unit power input is considered. In the second, the influence of the envelope of the input power, itself obtained from the confidence intervals of the coupling loss factors, is also taken into account. The interval estimates are obtained by using the spectral element method combined with the fuzzy-set-based method. Results of a Monte Carlo analysis of the coupling loss factors under the influence of uncertain parameters are shown for comparison and verification of the fuzzy method.
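A minimal sketch of the transformation method, assuming triangular fuzzy inputs and a toy coupling-loss function; this is an illustration of the interval-propagation idea, not the paper's spectral element model:

```python
import itertools
import numpy as np

def transformation_method(func, fuzzy_inputs, n_alpha=5):
    """Propagate triangular fuzzy numbers (lo, peak, hi) through func by
    evaluating all interval-corner combinations at each alpha-cut
    (the reduced transformation method of Hanss). Returns a list of
    (alpha, min, max) output bounds."""
    out = []
    for alpha in np.linspace(0.0, 1.0, n_alpha):
        corners = []
        for lo, peak, hi in fuzzy_inputs:
            corners.append((lo + alpha * (peak - lo), hi - alpha * (hi - peak)))
        vals = [func(*c) for c in itertools.product(*corners)]
        out.append((alpha, min(vals), max(vals)))
    return out

# Toy "coupling loss factor" of a two-beam junction as a function of an
# uncertain joint angle and damping loss factor -- purely illustrative.
clf = lambda angle_rad, eta: eta * np.sin(angle_rad) ** 2

bounds = transformation_method(
    clf, [(np.pi / 5, np.pi / 4, np.pi / 3), (0.008, 0.01, 0.012)])
for alpha, lo, hi in bounds:
    print(f"alpha={alpha:.2f}  clf in [{lo:.5f}, {hi:.5f}]")
```

At alpha = 1 the interval collapses to the crisp peak-value estimate; lower alpha levels give the widening confidence intervals the abstract refers to.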

  7. Probabilistic estimation of earthquake-induced tsunami occurrences in the Adriatic and northern Ionian seas

    NASA Astrophysics Data System (ADS)

    Armigliato, Alberto; Tinti, Stefano

    2010-05-01

    In the framework of the EU-funded project TRANSFER (Tsunami Risk ANd Strategies For the European Region), we faced the problem of quantitatively assessing the tsunami hazard in the Adriatic and north Ionian Seas. Tsunami catalogues indicate that the Ionian Sea coasts have been hit by several large historical tsunamis, some of them local (especially along eastern Sicily, eastern Calabria and the Greek Ionian Islands), while others had trans-basin relevance, like those generated along the western Hellenic Trench. In the Adriatic Sea the historical tsunami activity is lower, but not negligible: the most exposed regions on the western side of the basin are Romagna-Marche, Gargano and southern Apulia, while on the eastern side the Dalmatian and Albanian coastlines show the largest tsunami exposure. To quantitatively assess the exposure of the selected coastlines to tsunamis we used a hybrid statistical-deterministic approach, already applied in the recent past to the southern Tyrrhenian and Ionian coasts of Italy. The general idea is to base the tsunami hazard analyses on the computation of the probability of occurrence of tsunamigenic earthquakes, which is appropriate in basins where the number of known historical tsunamis is too small for reliable statistical analyses and where most tsunamis had a tectonic origin. The approach combines two steps of different nature. The first step consists in creating a single homogeneous earthquake catalogue from suitably selected catalogues pertaining to each of the main regions facing the Adriatic and north Ionian basins (Italy, Croatia, Montenegro, Greece). The final catalogue contains 6619 earthquakes with moment magnitude ranging from 4.5 to 8.3 and focal depth less than 50 km. 
The limitations in magnitude and depth are based on the assumption that earthquakes of magnitude lower than 4.5 and depth greater than 50 km have no significant
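The statistical step of such a hybrid approach often reduces to a Poisson occurrence probability computed from catalogue rates; the counts below are purely illustrative, not taken from the TRANSFER catalogue:

```python
import math

def poisson_prob(annual_rate, years):
    """Probability of at least one event in a time window, assuming a
    Poisson process with the given annual rate."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative numbers only: if a catalogue zone holds 12 potentially
# tsunamigenic events (Mw >= 6.5, depth < 50 km) in 400 years of record,
rate = 12 / 400.0
print(round(poisson_prob(rate, 50), 3))  # 50-year occurrence probability
```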

  8. Postpartum blood loss: visual estimation versus objective quantification with a novel birthing drape

    PubMed Central

    Lertbunnaphong, Tripop; Lapthanapat, Numporn; Leetheeragul, Jarunee; Hakularb, Pussara; Ownon, Amporn

    2016-01-01

    INTRODUCTION Immediate postpartum haemorrhage (PPH) is the most common cause of maternal mortality worldwide. Most recommendations focus on its prevention and management. Visual estimation of blood loss is widely used for the early detection of PPH, but the most appropriate method remains unclear. This study aimed to compare the efficacy of visual estimation and objective measurement using a sterile under-buttock drape, to determine the volume of postpartum blood loss. METHODS This study evaluated patients aged ≥ 18 years with low-risk term pregnancies, who delivered vaginally. Immediately after delivery, a birth attendant inserted the drape under the patient’s buttocks. Postpartum blood loss was measured by visual estimation and then compared with objective measurement using the drape. All participants received standard intra- and postpartum care. RESULTS In total, 286 patients with term pregnancies were enrolled. There was a significant difference in postpartum blood loss between visual estimation and objective measurement using the under-buttock drape (178.6 ± 133.1 mL vs. 259.0 ± 174.9 mL; p < 0.0001). Regarding accuracy at 100 mL discrete categories of postpartum blood loss, visual estimation was found to be inaccurate, resulting in underestimation, with low correspondence (27.6%) and poor agreement (Cohen’s kappa coefficient 0.07; p < 0.05), compared with objective measurement using the drape. Two-thirds of cases of immediate PPH (65.4%) were misdiagnosed using visual estimation. CONCLUSION Visual estimation is not optimal for measurement of postpartum blood loss in PPH. This method should be withdrawn from standard obstetric practice and replaced with objective measurement using the sterile under-buttock drape. PMID:27353510
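The agreement statistic used in this study, Cohen's kappa, can be computed from paired category assignments; the blood-loss readings below are hypothetical, chosen only to show the kind of poor agreement the study reports:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two paired category sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical 100 mL blood-loss categories: visual vs. drape readings,
# with visual estimation systematically one category low.
visual = [100, 100, 200, 200, 300, 100, 200, 100]
drape  = [200, 100, 300, 200, 400, 200, 300, 100]
print(round(cohens_kappa(visual, drape), 2))  # -> 0.11 (poor agreement)
```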

  9. The use of streambed temperatures to estimate transmission losses on an experimental channel.

    SciTech Connect

    Ramon C. Naranjo; Michael H. Young; Richard Niswonger; Julianne J. Miller; Richard H. French

    2001-10-18

    Quantifying channel transmission losses in arid environments is important for a variety of reasons, from the engineering design of flood control structures to evaluating recharge. To quantify the losses in an alluvial channel, an experiment was performed on a 2-km reach of an alluvial fan located on the Nevada Test Site. The channel was subjected to three separate flow events. Transmission losses were estimated using standard discharge monitoring and a subsurface temperature modeling approach. Four stations were equipped to continuously monitor stage, temperature, and water content. Streambed temperatures measured at 0, 30, 50 and 100 cm depths were used to calibrate VS2DH, a two-dimensional, variably saturated flow model. Average losses based on the difference in flow between stations indicate that 21 percent, 27 percent, and 53 percent of the flow was lost downgradient of the source. Results from the temperature monitoring identified locations with large thermal gradients, suggesting conduction-dominated heat transfer in streambed sediments where caliche-cemented surfaces were present. Transmission losses at the lowermost segment corresponded to the smallest thermal gradient, suggesting advection-dominated heat transfer. Losses predicted by VS2DH are within an order of magnitude of the losses estimated from discharge measurements. The differences in losses are a result of the spatial extent to which the modeling results are applied and of lateral subsurface flow.
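A loss-per-reach calculation consistent with the quoted percentages might look like the sketch below, assuming each percentage is taken relative to the inflow at the top of its reach (an interpretation, not stated in the abstract); the discharge values are hypothetical:

```python
def reach_losses(discharges):
    """Fractional transmission loss in each reach, relative to the
    inflow at the top of that reach."""
    return [(q_up - q_dn) / q_up
            for q_up, q_dn in zip(discharges, discharges[1:])]

# Hypothetical discharges (L/s) at the four monitoring stations.
q = [100.0, 79.0, 58.0, 27.0]
print([round(f, 2) for f in reach_losses(q)])  # -> [0.21, 0.27, 0.53]
```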

  10. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated using field methods and a database of coseismic effects created as a GIS MapInfo application with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effects from the epicenter and from the causative fault (Lunina et al., 2014). For the largest event (Ms = 8.1), the estimated limit distance to the fault is 130 km, 3.5 times shorter than the limit distance to the epicenter, which is 450 km. Moreover, the farther a site lies from the causative fault, the fewer liquefaction cases occur: 93% of them are within 40 km of the fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows the distances to be within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested that relate the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height and intensity index of clastic dikes) to Ms and to local shaking intensity on the MSK-64 macroseismic scale (Lunina and Gladkov, 2015). The obtained results provide a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author would like to express their gratitude to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing the laboratory to carry out this research, and to the Russian Scientific Foundation for financial support (Grant 14-17-00007).
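A magnitude-dependent limit-distance check of the kind used in such predictive models can be sketched as follows; the functional form and coefficients are illustrative only, tuned to the single anchor point quoted in the abstract (Ms = 8.1 gives ~130 km to the fault), and are not the published regression:

```python
def max_liquefaction_distance_km(ms, a=-1.936, b=0.5):
    """Illustrative magnitude-dependent limit distance (km) from a
    liquefaction site to the causative fault, log-linear form
    log10 R_max = b*Ms + a. Coefficients are hypothetical, chosen to
    reproduce the abstract's anchor point, not the published values."""
    return 10 ** (b * ms + a)

def within_fault_limit(ms, distance_km):
    """Is a candidate site inside the predicted liquefaction reach?"""
    return distance_km <= max_liquefaction_distance_km(ms)

print(round(max_liquefaction_distance_km(8.1)))  # ~130 km
print(within_fault_limit(8.1, 40.0))             # True: within 93% zone
```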